Fairseq(-py) is a sequence modeling toolkit that allows researchers and
developers to train custom models for translation, summarization, language
modeling and other text generation tasks.
We provide reference implementations of various sequence modeling papers (note: the `master` branch has been renamed to `main`).
We also provide pre-trained models for translation and language modeling
with a convenient `torch.hub` interface:

```python
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
```
See the PyTorch Hub tutorials for translation
and RoBERTa for more examples.
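The `beam=5` argument above controls beam search during decoding. As a minimal illustration of what that parameter does, here is a toy beam search over a fixed per-step log-probability table (a deliberate simplification: real models condition each step on the generated prefix, and this sketch is not fairseq's implementation):

```python
import math

def beam_search(step_scores, beam=5, length=3):
    """Toy beam search: keep the `beam` best partial hypotheses per step.

    `step_scores` maps each token to a fixed log-probability used at
    every step (a simplification; a real model rescores per prefix).
    Returns the `beam` highest-scoring sequences of `length` tokens.
    """
    hypotheses = [((), 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(length):
        # Expand every surviving hypothesis by every token.
        candidates = [
            (seq + (tok,), score + logp)
            for seq, score in hypotheses
            for tok, logp in step_scores.items()
        ]
        # Prune: keep only the `beam` best partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        hypotheses = candidates[:beam]
    return hypotheses

# Hypothetical 3-token vocabulary with fixed log-probabilities.
vocab = {'a': math.log(0.5), 'b': math.log(0.3), 'c': math.log(0.2)}
best = beam_search(vocab, beam=2, length=2)
# best[0] is the highest-scoring 2-token sequence.
```

A larger `beam` explores more hypotheses per step at higher compute cost; `beam=5` is a common default for translation.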
To install fairseq and develop locally:

```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
```
For faster training, install NVIDIA's apex library:

```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
```
For large datasets, install PyArrow: `pip install pyarrow`.

If you use Docker, make sure to increase the shared memory size, either with `--ipc=host` or `--shm-size` as command-line options to `nvidia-docker run`.
The full documentation contains instructions
for getting started, training new models, and extending fairseq with new model
types and tasks.
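fairseq wires new model types and tasks in through name-based registries. As a minimal, pure-Python sketch of that decorator-registry pattern (the `register_model` and `MODEL_REGISTRY` names here are illustrative stand-ins, not necessarily fairseq's actual API):

```python
# Sketch of the decorator-based registry pattern used by plugin
# systems such as fairseq's. Names are illustrative, not fairseq's API.
MODEL_REGISTRY = {}

def register_model(name):
    """Class decorator that records a model class under `name`."""
    def wrapper(cls):
        if name in MODEL_REGISTRY:
            raise ValueError(f'model {name!r} already registered')
        MODEL_REGISTRY[name] = cls
        return cls
    return wrapper

@register_model('toy_lstm')
class ToyLSTMModel:
    def __init__(self, hidden_dim=128):
        self.hidden_dim = hidden_dim

# Training code can now look a model up by name, e.g. from a CLI flag.
model_cls = MODEL_REGISTRY['toy_lstm']
model = model_cls(hidden_dim=256)
```

The registry decouples the training loop from concrete model classes: adding a new architecture means registering one more class, with no changes to the code that instantiates models by name.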
We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below,
as well as example training and evaluation commands.
We also have more detailed READMEs to reproduce results from specific papers:
fairseq(-py) is MIT-licensed.
The license applies to the pre-trained models as well.
Please cite as:
```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```