This project is based on the following author's MindSpore code:
https://github.com/lvyufeng/denoising-diffusion-mindspore
It adds some data-copy operations to adapt the code to the OpenI (Qizhi) training-job environment and trains an image-generation model on the AFHQ animal dataset; demo.py is the entry script for the training job.
In our tests, setting image_size=128 and train_batch_size=32 gives fairly good training results, for example the parameter settings of version V0010 of the training job train_cat_data.
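Since the contents of demo.py are not reproduced here, the following is only a minimal sketch of what such an OpenI training-job entry script might look like with those parameters; the `--data_url`/`--train_url` arguments, the local cache path, and the use of `shutil` for the data copy are assumptions, not the actual implementation.

```python
# Hypothetical sketch of an OpenI training-job entry script (not the real demo.py).
# Assumes the platform passes dataset/output locations via --data_url/--train_url.
import argparse
import shutil

from ddm import Unet, GaussianDiffusion, Trainer

parser = argparse.ArgumentParser()
parser.add_argument('--data_url', type=str, default='/dataset')   # assumed dataset mount point
parser.add_argument('--train_url', type=str, default='/model')    # assumed output directory
args = parser.parse_args()

# Copy the AFHQ images to a local working directory (the "data copy" adaptation).
local_data = '/cache/afhq'
shutil.copytree(args.data_url, local_data, dirs_exist_ok=True)

model = Unet(dim=64, dim_mults=(1, 2, 4, 8))
diffusion = GaussianDiffusion(model, image_size=128, timesteps=1000, loss_type='l1')

trainer = Trainer(
    diffusion,
    local_data,
    train_batch_size=32,            # the setting reported to work well
    train_lr=8e-5,
    train_num_steps=50000,
    gradient_accumulate_every=2,
    ema_decay=0.995,
    amp_level='O1',
)
trainer.train()
```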
The following is the content of the original README.
Implementation of Denoising Diffusion Probabilistic Model in MindSpore. The implementation refers to lucidrains's denoising-diffusion-pytorch.
Training 50k steps with EMA.
```bash
pip install denoising-diffusion-mindspore

# From the GitHub repo (overseas)
pip install git+https://github.com/lvyufeng/denoising-diffusion-mindspore

# From the OpenI repo (in China)
pip install git+https://openi.pcl.ac.cn/lvyufeng/denoising-diffusion-mindspore
```
```python
from ddm import Unet, GaussianDiffusion, value_and_grad
from ddm.ops import randn

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

diffusion = GaussianDiffusion(
    model,
    image_size = 128,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
)

training_images = randn((1, 3, 128, 128))  # images are normalized from 0 to 1

grad_fn = value_and_grad(diffusion, None, diffusion.trainable_params())
loss, grads = grad_fn(training_images)

# after a lot of training

sampled_images = diffusion.sample(batch_size = 1)
print(sampled_images.shape)  # (1, 3, 128, 128)
```
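The snippet above computes only a single loss/gradient pair. A minimal sketch of the loop implied by the "after a lot of training" comment might look like the following; the Adam optimizer, learning rate, and random stand-in batches are assumptions, not part of this repository's API (the Trainer class below wraps a loop like this for you).

```python
import mindspore.nn as nn
from ddm import Unet, GaussianDiffusion, value_and_grad
from ddm.ops import randn

model = Unet(dim=64, dim_mults=(1, 2, 4, 8))
diffusion = GaussianDiffusion(model, image_size=128, timesteps=1000, loss_type='l1')

# Assumed optimizer choice for illustration only.
optimizer = nn.Adam(diffusion.trainable_params(), learning_rate=8e-5)
grad_fn = value_and_grad(diffusion, None, diffusion.trainable_params())

for step in range(1000):
    batch = randn((4, 3, 128, 128))   # stand-in for a real image batch in [0, 1]
    loss, grads = grad_fn(batch)      # forward pass + gradients
    optimizer(grads)                  # apply the gradients
    if step % 100 == 0:
        print(f'step {step}: loss {loss}')
```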
Or, if you simply want to pass in a folder name and the desired image dimensions, you can use the Trainer
class to easily train a model.
```python
from download import download
from ddm import Unet, GaussianDiffusion, Trainer

url = 'https://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz'
path = download(url, './102flowers', 'tar.gz')

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

diffusion = GaussianDiffusion(
    model,
    image_size = 64,
    timesteps = 10,           # number of steps
    sampling_timesteps = 5,   # number of sampling timesteps (using ddim for faster inference [see citation for ddim paper])
    loss_type = 'l1'          # L1 or L2
)

trainer = Trainer(
    diffusion,
    path,
    train_batch_size = 1,
    train_lr = 8e-5,
    train_num_steps = 1000,          # total training steps
    gradient_accumulate_every = 2,   # gradient accumulation steps
    ema_decay = 0.995,               # exponential moving average decay
    amp_level = 'O1',                # turn on mixed precision
)

trainer.train()
```
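After training, images can again be drawn from the diffusion model with `sample`, as in the basic example above. The `asnumpy` conversion and PIL-based saving below are an assumed way to write the results to disk, not part of this repository.

```python
import numpy as np
from PIL import Image

# Draw a small batch of images from the trained model; values are in [0, 1].
sampled_images = diffusion.sample(batch_size = 4)   # shape (4, 3, 64, 64)

for i, img in enumerate(sampled_images.asnumpy()):
    # Convert CHW float in [0, 1] to HWC uint8 for saving.
    arr = (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8).transpose(1, 2, 0)
    Image.fromarray(arr).save(f'sample_{i}.png')
```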
`amp_level` of `Trainer` will automatically be set to `O1` on Ascend.
```bibtex
@inproceedings{NEURIPS2020_4c5bcfec,
    author = {Ho, Jonathan and Jain, Ajay and Abbeel, Pieter},
    booktitle = {Advances in Neural Information Processing Systems},
    editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
    pages = {6840--6851},
    publisher = {Curran Associates, Inc.},
    title = {Denoising Diffusion Probabilistic Models},
    url = {https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf},
    volume = {33},
    year = {2020}
}
```