A modern PyTorch implementation of SRGAN
It is closely based on the paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by the Twitter team (https://arxiv.org/abs/1609.04802), but I replaced the activations with Swish (https://arxiv.org/abs/1710.05941).

You can start training out of the box with the CIFAR-10 or CIFAR-100 datasets; to reproduce the paper's results, however, you will need to download and prepare the ImageNet dataset yourself. Results and weights are provided for the ImageNet dataset.
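For reference, Swish is simply x · sigmoid(βx); a minimal pure-Python sketch is below. (In this repository it stands in for the generator's original activations, and PyTorch's `nn.SiLU` is the β = 1 special case; the function below is illustrative, not the repository's exact code.)

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    """Swish activation: x * sigmoid(beta * x) (arXiv:1710.05941)."""
    # x * 1/(1 + e^(-beta*x)) == x / (1 + e^(-beta*x))
    return x / (1.0 + math.exp(-beta * x))
```

For large positive inputs Swish approaches the identity, and unlike ReLU it is smooth and non-monotonic near zero.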
Contributions are welcome!
```
usage: train [-h] [--dataset DATASET] [--dataroot DATAROOT]
             [--workers WORKERS] [--batchSize BATCHSIZE]
             [--imageSize IMAGESIZE] [--upSampling UPSAMPLING]
             [--nEpochs NEPOCHS] [--generatorLR GENERATORLR]
             [--discriminatorLR DISCRIMINATORLR] [--cuda] [--nGPU NGPU]
             [--generatorWeights GENERATORWEIGHTS]
             [--discriminatorWeights DISCRIMINATORWEIGHTS] [--out OUT]
```

Example: `./train --cuda`
This will start a training session on the GPU. First it pre-trains the generator with MSE loss for 2 epochs, then it trains the full GAN (generator + discriminator) for 100 epochs, using content (MSE + VGG) and adversarial losses. Although weights are already provided in the repository, this script will also write them to the checkpoints directory.
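The loss described above is a weighted sum of the pixel MSE, a VGG feature-space MSE, and the adversarial term. A minimal sketch follows; the default weights (0.006 for the VGG term, 1e-3 for the adversarial term, in the style of the SRGAN paper's scaling) and the function name are illustrative assumptions, not this repository's exact code:

```python
def generator_loss(mse: float, vgg_mse: float, adversarial: float,
                   vgg_weight: float = 0.006, adv_weight: float = 1e-3) -> float:
    """Total generator loss: content (pixel MSE + weighted VGG-feature MSE)
    plus a weighted adversarial term. Weights are illustrative."""
    content = mse + vgg_weight * vgg_mse
    return content + adv_weight * adversarial
```

Keeping the adversarial weight small lets the content loss dominate early training so the generator stays anchored to the ground-truth images.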
```
usage: test [-h] [--dataset DATASET] [--dataroot DATAROOT] [--workers WORKERS]
            [--batchSize BATCHSIZE] [--imageSize IMAGESIZE]
            [--upSampling UPSAMPLING] [--cuda] [--nGPU NGPU]
            [--generatorWeights GENERATORWEIGHTS]
            [--discriminatorWeights DISCRIMINATORWEIGHTS]
```

Example: `./test --cuda`
This will start a testing session on the GPU. It will display mean error values and save the generated images to the output directory in all three versions: low resolution, high resolution (original), and high resolution (generated).
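The reported mean error can be converted to a PSNR figure, the standard super-resolution metric; a small sketch, assuming images are scaled to [0, 1] so `max_val = 1.0`:

```python
import math

def psnr(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) from a mean-squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a mean MSE of 0.01 on [0, 1] images corresponds to 20 dB; lower MSE always yields higher PSNR.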
The following results have been obtained with the current training setup:
- Other training parameters are the defaults of the train script
- Testing was executed on 128 randomly selected ImageNet samples (disjoint from the training set)

```
[7/8] Discriminator_Loss: 1.4123 Generator_Loss (Content/Advers/Total): 0.0901/0.6152/0.0908
```
See more under the output directory
High resolution / Low resolution / Recovered High Resolution