PanGu-Alpha-Applications
中文|English
Introduction
This project provides algorithm examples and application demonstrations, from the algorithm layer up to the application layer, for the Pengcheng series of super-large-scale pre-trained models. It treats the large model as AI infrastructure and aims to accelerate both application technology innovation and the construction of an application ecosystem around large models.
Navigation
Progress
- 2021.08.18
- Release the first version of the baseline for few-shot model transfer.
- Task layer: CMNLI baseline.
- Algorithm layer: fine-tune, prompt-tune, in-context learning.
Framework
Model layer: train models with distributed training, big data, and efficient algorithms, building the underlying AI infrastructure in the fields of text, multilingual, multimodal, and knowledge graph.
Algorithm layer: building on the pre-trained models, innovate in model compression, few-shot model transfer, and continual learning, and construct basic algorithm modules that provide underlying algorithmic support for deploying the pre-trained models in applications.
Task layer: on top of the algorithm layer's basic modules, build reference implementations of the two basic task types, NLU and NLG, providing underlying task-modeling support for upper-layer applications.
Application layer: design applications such as dialogue bots, writing assistants, multilingual translation, knowledge Q&A, and patent-assisted generation, providing demonstrations that promote faster adoption and deeper application of the model.
Codes
PanGu-Alpha-Application
|-- README.md
|-- app
|-- com
|-- megatron
|-- method
|-- requirements.txt
`-- resource
License
To be added.