This repository originates from https://github.com/ZhiGroup/Med-BERT on GitHub and was forked by the second author of the corresponding paper.
This repository provides the code for pre-training and fine-tuning Med-BERT, a contextualized embedding model that delivers a meaningful performance boost on real-world disease-prediction problems compared with state-of-the-art models.
Med-BERT adapts the bidirectional encoder representations from transformers (BERT) framework and pre-trains contextualized embeddings for diagnosis codes, mainly in ICD-9 and ICD-10 format, using structured data from an EHR dataset containing 28,490,650 patients.
Overview of the Med-BERT model.
Please refer to our paper "Med-BERT: pre-trained contextualized embeddings on large-scale structured electronic health records for disease prediction" for more details.
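To make the input format concrete, here is a minimal sketch of how a patient's visits of diagnosis codes can be flattened into one token sequence with a parallel visit-segment id per token, in the spirit of Med-BERT's structured inputs. This is illustrative only, not the repository's exact preprocessing (which lives in preprocess_pretrain_data.py); the patient data and vocabulary handling here are invented for the example.

# Illustrative sketch of the input representation (not the repository's exact code).
# A patient is a list of visits; each visit is a list of ICD codes.
patient_visits = [
    ["250.00", "401.9"],   # visit 1: ICD-9 diabetes, hypertension (hypothetical patient)
    ["428.0"],             # visit 2: ICD-9 heart failure
    ["E11.9", "I10"],      # visit 3: ICD-10 codes
]
vocab = {}  # in practice built over the whole corpus (cf. the .types vocab file)
for visit in patient_visits:
    for code in visit:
        vocab.setdefault(code, len(vocab))
# Flatten into one token sequence; keep a visit-segment id so the model
# knows which codes co-occur in the same encounter.
input_ids, visit_ids = [], []
for v_idx, visit in enumerate(patient_visits):
    for code in visit:
        input_ids.append(vocab[code])
        visit_ids.append(v_idx)
print(input_ids)  # [0, 1, 2, 3, 4]
print(visit_ids)  # [0, 0, 1, 2, 2]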
To reproduce the steps necessary for pre-training Med-BERT, run:
python preprocess_pretrain_data.py <data_File> <vocab/NA> <output_Prefix> <subset_size/0forAll>
python create_BERTpretrain_EHRfeatures.py --input_file=<output_Prefix.bencs.train> --output_file='output_file' --vocab_file=<output_Prefix.types> --max_predictions_per_seq=1 --max_seq_length=64
python run_EHRpretraining.py --input_file='output_file' --output_dir=<path_to_outputfolder> --do_train=True --do_eval=True --bert_config_file=config.json --train_batch_size=32 --max_seq_length=512 --max_predictions_per_seq=1 --num_train_steps=4500000 --num_warmup_steps=10000 --learning_rate=5e-5
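For instance, with a raw input file named ehr_data.tsv and the prefix medbert (both file names hypothetical), the three steps chain together as follows; every flag value other than the file names is taken verbatim from the commands above:

python preprocess_pretrain_data.py ehr_data.tsv NA medbert 0
python create_BERTpretrain_EHRfeatures.py --input_file=medbert.bencs.train --output_file=medbert_features --vocab_file=medbert.types --max_predictions_per_seq=1 --max_seq_length=64
python run_EHRpretraining.py --input_file=medbert_features --output_dir=./pretrain_out --do_train=True --do_eval=True --bert_config_file=config.json --train_batch_size=32 --max_seq_length=512 --max_predictions_per_seq=1 --num_train_steps=4500000 --num_warmup_steps=10000 --learning_rate=5e-5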
You can find an example of how to construct the data_file under Example data, as well as images showing the construction of the preprocessed data and the BERT features. Additional details are available under the Pretraining Tutorial.
Note: We ran our code mainly on GPU; although CPU and TPU options may be available in the code, they were not tested.
To see an example of how to use Med-BERT for a specific disease-prediction task, you can follow the Med-BERT DHF prediction notebook.
Note that you need to prepare the fine-tuning data with create_ehr_pretrain_FTdata.py, in a similar way to preparing the pre-training data; a minimal fine-tuning sketch follows below.
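As a rough illustration of the fine-tuning stage, here is a hedged sketch using pytorch-transformers (listed in the requirements below). This is not the notebook's actual code: the checkpoint path, class name, and dummy batch are invented, and it assumes a Med-BERT checkpoint already converted to pytorch-transformers format.

import torch
import torch.nn as nn
from pytorch_transformers import BertModel

class DiseasePredictionModel(nn.Module):
    """A pre-trained encoder with a binary disease-prediction head (illustrative)."""
    def __init__(self, bert_dir):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_dir)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask=None):
        # pytorch-transformers returns (sequence_output, pooled_output, ...)
        _, pooled = self.bert(input_ids, attention_mask=attention_mask)[:2]
        return self.classifier(pooled).squeeze(-1)  # one logit per patient

model = DiseasePredictionModel("./med_bert_converted")  # hypothetical checkpoint path
input_ids = torch.randint(0, 100, (2, 64))  # dummy batch of 2 code sequences
logits = model(input_ids)
loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([0.0, 1.0]))
loss.backward()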
Python: 3.7+
PyTorch: 1.5.0
TensorFlow: 1.13.1+
pandas
pickle
tqdm
pytorch-transformers
Google BERT
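A one-line environment setup along these lines should cover the list above; the version pins are indicative, pickle ships with the Python standard library and needs no install, and Google BERT refers to the original TensorFlow BERT code, which is obtained from its GitHub repository rather than via pip:

pip install tensorflow==1.13.1 torch==1.5.0 pandas tqdm pytorch-transformers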
Prediction results for the evaluation sets when training on different sizes of data on DHF-Cerner (top), PaCa-Cerner (middle), and PaCa-Truven (bottom). The shaded areas indicate the standard deviations. Please refer to our paper for more details.
We had initially hoped to share our models, but unfortunately the pre-trained models are no longer shareable.
According to SBMI Data Service Office: "Under the terms of our contracts with data vendors, we are not permitted to share any of the data utilized in our publications, as well as large models derived from those data."
Please post a GitHub issue if you have any questions.
Please acknowledge the following work in papers or derivative software:
@article{rasmy2021med,
title={Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction},
author={Rasmy, Laila and Xiang, Yang and Xie, Ziqian and Tao, Cui and Zhi, Degui},
journal={NPJ digital medicine},
volume={4},
number={1},
pages={1--13},
year={2021},
publisher={Nature Publishing Group}
}