We pretrain a protein language model based on the Megatron-LM framework and evaluate it on TAPE (Tasks Assessing Protein Embeddings), a benchmark of five biologically relevant semi-supervised learning tasks. Our pretrained model achieves good performance on these tasks.
Pre-trained models such as BERT have greatly advanced natural language processing by improving the performance of language models. Inspired by the similarity between amino acid sequences and text sequences, we apply language model pre-training to biological data.
We provide the pretraining and finetuning code in two separate folders. If you use the pretrained model we provide, simply download the checkpoint and follow the finetune guide. If you want to pretrain your own model, refer to the pretrain guide.
For the pretrained model with 200 million parameters, you can download the checkpoint via GoogleDrive or TsinghuaCloud.
For the pretrained model with 3 billion parameters, you can download the checkpoint from here.
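As a quick sanity check after downloading, you can load the checkpoint with PyTorch and inspect its top-level keys before running the converter or finetuning. This is only a sketch under the assumption that the checkpoint is a single PyTorch file; the filename below is hypothetical, so substitute the name of the file you actually downloaded.

```python
# Minimal sanity check for a downloaded ProteinLM checkpoint.
# "proteinlm_200m.pt" is a hypothetical filename used for illustration.
import torch

state = torch.load("proteinlm_200m.pt", map_location="cpu")

# Megatron-LM checkpoints are nested dictionaries; listing the top-level
# keys is a quick way to confirm the download is intact before conversion.
print(type(state))
if isinstance(state, dict):
    print(list(state.keys()))
```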
.
├── pretrain                (protein language model pretraining)
│   ├── megatron            (model folder)
│   ├── pretrain_tools      (multi-node pretraining)
│   └── protein_tools       (data preprocessing scripts)
└── tape
    ├── conda_env           (conda env in yaml format)
    ├── converter           (converter script and model config files)
    ├── scripts             (model generator, finetune)
    └── tape                (tape model)
As the structure above shows, the workflow has two stages:

- Pretrain: prepare the PFAM dataset, preprocess the data, and pretrain the protein language model.
- Finetune: convert the pretrained checkpoint with the converter script and finetune on the TAPE downstream tasks.

Detailed explanations are given in each folder's readme; a minimal TAPE usage sketch follows below for orientation.
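Since the finetuning code builds on the TAPE package, here is a minimal sketch, adapted from the upstream TAPE README, of how a TAPE-style BERT model embeds a single protein sequence. The model name `bert-base` refers to TAPE's own pretrained weights; loading the converted ProteinLM checkpoint instead goes through the converter script and config files in `tape/converter`, so treat this purely as an orientation example.

```python
# Sketch of TAPE's embedding API (from the upstream TAPE README).
import torch
from tape import ProteinBertModel, TAPETokenizer

model = ProteinBertModel.from_pretrained('bert-base')  # TAPE's own weights, not ProteinLM
tokenizer = TAPETokenizer(vocab='iupac')               # IUPAC vocab used by TAPE's BERT models

sequence = 'GCTVEDRCLIGMGAILLNGCVIGSGSLVAAGALITQ'
token_ids = torch.tensor([tokenizer.encode(sequence)])

output = model(token_ids)
sequence_output = output[0]   # per-residue embeddings
pooled_output = output[1]     # whole-sequence embedding
```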
Task | Metric | TAPE (baseline) | ProteinLM (200M) | ProteinLM (3B)
---|---|---|---|---
contact prediction | P@L/5 | 0.36 | 0.52 | 0.75 |
remote homology | Top-1 Accuracy | 0.21 | 0.26 | 0.30 |
secondary structure | Accuracy (3-class) | 0.73 | 0.75 | 0.79 |
fluorescence | Spearman's rho | 0.68 | 0.68 | 0.68 |
stability | Spearman's rho | 0.73 | 0.77 | 0.79 |
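For readers unfamiliar with these metrics, the sketch below illustrates how P@L/5 and Spearman's rho are computed in principle. It is a simplified illustration with made-up toy data, not the evaluation code behind the table (for example, the official TAPE contact metric additionally restricts precision to medium- and long-range residue pairs).

```python
import numpy as np
from scipy.stats import spearmanr

def precision_at_l_over_5(contact_probs, contact_labels):
    """P@L/5: precision of the L/5 most confident predicted residue-residue
    contacts, where L is the sequence length (simplified version)."""
    seq_len = contact_probs.shape[0]
    k = max(1, seq_len // 5)
    top_k = np.argsort(contact_probs.ravel())[-k:]      # k most confident pairs
    return float(contact_labels.ravel()[top_k].mean())  # fraction that are true contacts

# Toy example: a random 50-residue contact map.
rng = np.random.default_rng(0)
probs = rng.random((50, 50))
labels = (rng.random((50, 50)) < 0.1).astype(float)
print("P@L/5:", precision_at_l_over_5(probs, labels))

# Spearman's rho (fluorescence / stability): rank correlation between
# predicted and experimentally measured values.
rho, _ = spearmanr([0.1, 0.4, 0.35, 0.8], [0.2, 0.5, 0.3, 0.9])
print("Spearman's rho:", rho)
```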
If you have any problems using ProteinLM, feel free to contact us.
Our work is based on the following papers. In addition, part of the code is based on Megatron-LM and TAPE.
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
@article{DBLP:journals/corr/abs-1909-08053,
author = {Mohammad Shoeybi and
Mostofa Patwary and
Raul Puri and
Patrick LeGresley and
Jared Casper and
Bryan Catanzaro},
title = {Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism},
journal = {CoRR},
volume = {abs/1909.08053},
year = {2019},
url = {http://arxiv.org/abs/1909.08053},
archivePrefix = {arXiv},
eprint = {1909.08053},
timestamp = {Tue, 24 Sep 2019 11:33:51 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-08053.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Evaluating Protein Transfer Learning with TAPE
@article{DBLP:journals/corr/abs-1906-08230,
author = {Roshan Rao and
Nicholas Bhattacharya and
Neil Thomas and
Yan Duan and
Xi Chen and
John F. Canny and
Pieter Abbeel and
Yun S. Song},
title = {Evaluating Protein Transfer Learning with {TAPE}},
journal = {CoRR},
volume = {abs/1906.08230},
year = {2019},
url = {http://arxiv.org/abs/1906.08230},
archivePrefix = {arXiv},
eprint = {1906.08230},
timestamp = {Sat, 23 Jan 2021 01:20:25 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1906-08230.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}