Trained models are saved automatically to the results storage on OpenI (Qizhi); multi-speaker support has been updated.
Synchronized with the latest official version.
This is a cleaner version of DiffSinger, which provides:
TBD
**[ 中文教程 | Chinese Tutorial ]**
```bash
# Install PyTorch manually (1.8.2 LTS recommended)
# See instructions at https://pytorch.org/get-started/locally/
# Below is an example for CUDA 11.1
pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio==0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu111

# Install other requirements
pip install -r requirements.txt
```
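After installing, a quick sanity check (purely illustrative, not part of this repository) can confirm that the interpreter can actually locate the core dependencies:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names the interpreter cannot locate."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Illustrative subset of the project's dependencies;
# see requirements.txt for the full list.
print(missing_packages(["torch", "torchvision", "torchaudio"]))
```

An empty list means the listed packages are importable from the current environment.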
Download the pre-trained vocoder and unzip it into the `checkpoints/` folder, or train an ultra-lightweight DDSP vocoder first by yourself, then configure it according to the relevant instructions. This pipeline will guide you from installing dependencies to formatting your recordings and generating the final configuration file.
The following is only an example for the Opencpop dataset.

```bash
export PYTHONPATH=.
CUDA_VISIBLE_DEVICES=0 python data_gen/binarize.py --config configs/acoustic/nomidi.yaml
```
The following is only an example for the Opencpop dataset.

```bash
CUDA_VISIBLE_DEVICES=0 python run.py --config configs/acoustic/nomidi.yaml --exp_name $MY_DS_EXP_NAME --reset
```
```bash
python main.py path/to/your.ds --exp $MY_DS_EXP_NAME
```

See more supported arguments with `python main.py -h`. See examples of *.ds files in the `samples/` folder.
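A *.ds file is a JSON document describing the score to synthesize. As a hedged sketch, it can be loaded and sanity-checked with the standard library; the field names below are illustrative assumptions only, so consult the files in `samples/` for the authoritative schema:

```python
import json

# Hypothetical minimal .ds payload; the field names are illustrative,
# not the project's confirmed schema -- see samples/ for real files.
ds_text = """
{
  "text": "AP la la AP",
  "ph_seq": "AP l a l a AP",
  "f0_timestep": "0.005"
}
"""

def load_ds(raw):
    """Parse a .ds JSON document and perform a light sanity check."""
    data = json.loads(raw)
    if "ph_seq" not in data:
        raise ValueError("missing phoneme sequence")
    return data

ds = load_ds(ds_text)
print(sorted(ds.keys()))
```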
Please see this documentation before you run the following command:

```bash
python onnx/export/export_acoustic.py --exp $MY_DS_EXP_NAME
```

See more supported arguments with `python onnx/export/export_acoustic.py -h`.
OpenUTAU, an open-source SVS editor with a modern GUI, has unofficial, temporary support for DiffSinger. See OpenUTAU for DiffSinger for more details.
See the original paper, the docs/ folder and releases for more details.
Below is the README inherited from the original repository.
Interactive 🤗 TTS | Interactive 🤗 SVS
This repository is the official PyTorch implementation of our AAAI-2022 paper, in which we propose DiffSinger (for Singing-Voice-Synthesis) and DiffSpeech (for Text-to-Speech).
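The shallow diffusion mechanism proposed in the paper starts reverse denoising from an intermediate step k (seeded by a simple decoder's mel prediction) rather than from pure noise. As a minimal NumPy sketch of the standard DDPM forward-noising formula x_k = sqrt(ᾱ_k)·x_0 + sqrt(1−ᾱ_k)·ε used to reach such a step (the schedule values and shapes below are illustrative, not the paper's exact configuration):

```python
import numpy as np

def q_sample(x0, k, alpha_bar, rng):
    """Diffuse a clean mel x0 to step k: sqrt(ab_k)*x0 + sqrt(1-ab_k)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[k]) * x0 + np.sqrt(1.0 - alpha_bar[k]) * eps

# Illustrative linear beta schedule (not the paper's exact values)
T = 100
betas = np.linspace(1e-4, 0.06, T)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((80, 20))  # fake 80-bin mel, 20 frames
x_k = q_sample(x0, k=30, alpha_bar=alpha_bar, rng=rng)
print(x_k.shape)
```

Because ᾱ_k shrinks monotonically, larger k means a noisier x_k; shallow diffusion picks a k small enough that the auxiliary decoder's prediction is already close to the true distribution at that step.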
Figures: DiffSinger/DiffSpeech at training | DiffSinger/DiffSpeech at inference
🎉 🎉 🎉 Updates:
🚀 News:
PortaSpeech: Portable and High-Quality Generative Text-to-Speech was accepted by NeurIPS-2021.

```bash
conda create -n your_env_name python=3.8
source activate your_env_name
pip install -r requirements_2080.txt   # GPU 2080Ti, CUDA 10.2
# or
pip install -r requirements_3090.txt   # GPU 3090, CUDA 11.4
```
```bash
tensorboard --logdir_spec exp_name
```
Old audio samples can be found in our demo page. Audio samples generated by this repository are listed here:
Speech samples (test set of LJSpeech) can be found in demos_1213.
Singing samples (test set of PopCS) can be found in demos_0112.
```
@article{liu2021diffsinger,
  title={Diffsinger: Singing voice synthesis via shallow diffusion mechanism},
  author={Liu, Jinglin and Li, Chengxi and Ren, Yi and Chen, Feiyang and Liu, Peng and Zhao, Zhou},
  journal={arXiv preprint arXiv:2105.02446},
  volume={2},
  year={2021}
}
```
Our codes are based on the following repos:
Also, thanks to Keon Lee for his fast implementation of our work.