Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks.
Paper: "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network" by Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi
SRGAN consists of a generator network and a discriminator network.
Training dataset: DIV2K

Validation and evaluation datasets: Set5, Set14
Training SRGAN requires a VGG19 model pretrained on ImageNet. Before training, prepare:

- Training scripts
- VGG19 pretrained model
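The pretrained VGG19 supplies the feature space for the content loss. As a rough illustration of how the perceptual loss described in the paper combines that content loss with an adversarial loss, here is a minimal MindSpore sketch; it is not the repository's exact code, and it assumes a frozen `vgg_features` Cell built from the pretrained checkpoint and sigmoid outputs from the discriminator.

```python
# Minimal sketch of the SRGAN perceptual loss: a VGG-feature MSE content loss
# plus a weighted adversarial loss (the 1e-3 weighting follows the paper).
# `vgg_features` is an assumed, frozen VGG19 feature-extractor Cell.
import mindspore.nn as nn
import mindspore.ops as ops


class PerceptualLoss(nn.Cell):
    def __init__(self, vgg_features):
        super().__init__()
        self.vgg = vgg_features                   # frozen VGG19 feature extractor (assumption)
        self.mse = nn.MSELoss()                   # content loss in VGG feature space
        self.bce = nn.BCELoss(reduction='mean')   # adversarial loss on discriminator outputs
        self.ones_like = ops.OnesLike()

    def construct(self, sr, hr, d_fake):
        # Content loss: MSE between VGG features of the super-resolved and ground-truth images.
        content = self.mse(self.vgg(sr), self.vgg(hr))
        # Adversarial loss: push D(SR) toward the "real" label.
        adversarial = self.bce(d_fake, self.ones_like(d_fake))
        return content + 1e-3 * adversarial
```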
The project is organized as follows:

```
SRGAN
├── README.md                    # descriptions about SRGAN
├── scripts
│   ├── run_distribute_train.sh  # launch Ascend training (8 pcs)
│   ├── run_eval.sh              # launch Ascend evaluation
│   └── run_standalone_train.sh  # launch Ascend training (1 pc)
├── src
│   ├── ckpt                     # saved checkpoints
│   ├── dataset
│   │   ├── testdataset.py       # dataset for evaluation
│   │   └── traindataset.py      # dataset for training
│   ├── loss
│   │   ├── gan_loss.py          # SRGAN loss functions
│   │   ├── Meanshift.py         # operation used by the GAN loss
│   │   └── gan_loss.py          # SRResNet loss functions
│   ├── models
│   │   ├── dicriminator.py      # discriminator definition
│   │   ├── generator.py         # generator definition
│   │   └── ops.py               # network building blocks
│   ├── result                   # results
│   ├── trainonestep
│   │   ├── train_gan.py         # training step for SRGAN
│   │   └── train_psnr.py        # training step for SRResNet
│   └── util
│       └── util.py              # initialization for SRGAN
├── test.py                      # generate images
└── train.py                     # train script
```
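For orientation, the generator in src/models/generator.py is built from residual blocks. The following is a minimal sketch of one such block using the layer layout from the SRGAN paper (3×3 convolution, batch normalization, PReLU, 3×3 convolution, batch normalization, identity skip); it is illustrative only and not the repository's exact code.

```python
# Minimal sketch of one SRGAN generator residual block (paper layout),
# not the exact implementation in src/models/generator.py.
import mindspore.nn as nn


class ResidualBlock(nn.Cell):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, pad_mode='same')
        self.bn1 = nn.BatchNorm2d(channels)
        self.prelu = nn.PReLU(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, pad_mode='same')
        self.bn2 = nn.BatchNorm2d(channels)

    def construct(self, x):
        out = self.prelu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return x + out  # element-wise skip connection
```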
```bash
# distributed training
Usage: sh run_distribute_train.sh [DEVICE_NUM] [DISTRIBUTE] [RANK_TABLE_FILE] [LRPATH] [GTPATH] [VGGCKPT] [VLRPATH] [VGTPATH]

# standalone training
Usage: sh run_standalone_train.sh [DEVICE_ID] [LRPATH] [GTPATH] [VGGCKPT] [VLRPATH] [VGTPATH]
```
The training results will be stored in scripts/srgan0/ckpt, where you can find the checkpoint files.
Run run_eval.sh for evaluation.

```bash
# evaluation
sh run_eval.sh [CKPT] [EVALLRPATH] [EVALGTPATH]
```
The evaluation results will be stored in scripts/result, where you can find the generated images.
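The accuracy reported below is PSNR on Set14. As a reference for how that metric is computed, here is a minimal sketch assuming 8-bit images loaded as NumPy arrays of identical shape; it is not the repository's evaluation code.

```python
# Minimal PSNR sketch (peak signal-to-noise ratio), assuming 8-bit images
# already aligned and cropped to the same shape. Illustrative only.
import numpy as np


def psnr(sr, hr, max_val=255.0):
    """PSNR between a super-resolved image and its ground truth, in dB."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```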
| Parameters | Ascend |
| --- | --- |
| Model Version | V1 |
| Resource | CentOS 8.2; Ascend 910; CPU 2.60 GHz, 192 cores; memory 755 GB |
| MindSpore Version | 1.2.0 |
| Dataset | DIV2K |
| Training Parameters | epoch = 2000 + 1000, batch_size = 16 |
| Optimizer | Adam |
| Loss Function | BCELoss, MSELoss, VGGLoss |
| Outputs | super-resolution pictures |
| Accuracy | Set14 PSNR 27.03 |
| Speed | 1 pc (Ascend): 540 ms/step; 8 pcs: 1500 ms/step |
| Total time | 8 pcs: 8 h |
| Checkpoint for Fine-tuning | 184M (.ckpt file) |
| Scripts | srgan script |
| Parameters | Single Ascend |
| --- | --- |
| Model Version | V1 |
| Resource | CentOS 8.2; Ascend 910; CPU 2.60 GHz, 192 cores; memory 755 GB |
| MindSpore Version | 1.2.0 |
| Dataset | Set14 |
| batch_size | 1 |
| Outputs | super-resolution pictures |
Please check the official homepage.