🚀 Thanks for your interest in our work. You may also want to check out our recent updates on tiny models for anime images and videos in Real-ESRGAN 😊
GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration.
It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration.
❓ Frequently Asked Questions can be found in FAQ.md.
🚩 Updates
If GFPGAN is helpful in your photos/projects, please help to ⭐ this repo or recommend it to your friends. Thanks😊
Other recommended projects:
▶️ Real-ESRGAN: A practical algorithm for general image restoration
▶️ BasicSR: An open-source image and video restoration toolbox
▶️ facexlib: A collection of useful face-related functions
▶️ HandyView: A PyQt5-based image viewer that is handy for viewing and comparison
[Paper] [Project Page] [Demo]
Xintao Wang, Yu Li, Honglun Zhang, Ying Shan
Applied Research Center (ARC), Tencent PCG
We now provide a clean version of GFPGAN, which does not require customized CUDA extensions.
If you want to use the original model in our paper, please see PaperModel.md for installation.
Clone repo
git clone https://github.com/TencentARC/GFPGAN.git
cd GFPGAN
Install dependent packages
# Install basicsr - https://github.com/xinntao/BasicSR
# We use BasicSR for both training and inference
pip install basicsr
# Install facexlib - https://github.com/xinntao/facexlib
# We use the face detection and face restoration helpers in the facexlib package
pip install facexlib
pip install -r requirements.txt
python setup.py develop
# If you want to enhance the background (non-face) regions with Real-ESRGAN,
# you also need to install the realesrgan package
pip install realesrgan
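After running the installation steps above, a quick sanity check can confirm the packages are importable. This is a minimal sketch (not part of the repo); package names are taken from the installation commands, and realesrgan is optional, needed only for background enhancement.

```python
# Sanity check: report which of the packages installed above are importable.
# (Sketch only; realesrgan is optional and only needed for background regions.)
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

required = ["basicsr", "facexlib", "gfpgan"]
optional = ["realesrgan"]

print("missing required:", missing_packages(required))
print("missing optional:", missing_packages(optional))
```

If anything is listed as missing, re-run the corresponding `pip install` command above.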
We take v1.3 as an example. More models can be found here.
Download pre-trained models: GFPGANv1.3.pth
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
Inference!
python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...
-h show this help
-i input Input image or folder. Default: inputs/whole_imgs
-o output Output folder. Default: results
-v version GFPGAN model version. Option: 1 | 1.2 | 1.3. Default: 1.3
-s upscale The final upsampling scale of the image. Default: 2
-bg_upsampler Background upsampler. Default: realesrgan
-bg_tile Tile size for background sampler, 0 for no tile during testing. Default: 400
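The `-bg_tile` option processes the background in fixed-size tiles to bound GPU memory. As a plain illustration (not the actual Real-ESRGAN tiler, which also pads and blends tile borders), the tile grid for an image can be sketched as:

```python
import math

def tile_grid(width, height, tile=400):
    """Return (cols, rows, total) square tiles needed to cover a
    width x height image; tile=0 means tiling is disabled."""
    if tile == 0:
        return (1, 1, 1)
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    return (cols, rows, cols * rows)

print(tile_grid(1920, 1080))     # default 400-px tiles -> (5, 3, 15)
print(tile_grid(1920, 1080, 0))  # tiling disabled -> (1, 1, 1)
```

Smaller tiles lower peak memory at the cost of more passes; `-bg_tile 0` upsamples the whole background in one pass.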
-suffix Suffix of the restored faces
-only_center_face Only restore the center face
-aligned Inputs are aligned faces
-ext Image extension. Options: auto | jpg | png, auto means using the same extension as inputs. Default: auto
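For reference, the options above can be mirrored in an argparse sketch. This is an illustration only: flag names and defaults are copied from the usage text (using double-dash long options), and the real inference_gfpgan.py may differ in details.

```python
import argparse

def build_parser():
    """Parser mirroring the flags listed in the usage text above (sketch)."""
    p = argparse.ArgumentParser(description="GFPGAN inference (sketch)")
    p.add_argument("-i", "--input", default="inputs/whole_imgs",
                   help="Input image or folder")
    p.add_argument("-o", "--output", default="results", help="Output folder")
    p.add_argument("-v", "--version", default="1.3",
                   choices=["1", "1.2", "1.3"], help="GFPGAN model version")
    p.add_argument("-s", "--upscale", type=int, default=2,
                   help="Final upsampling scale of the image")
    p.add_argument("--bg_upsampler", default="realesrgan",
                   help="Background upsampler")
    p.add_argument("--bg_tile", type=int, default=400,
                   help="Tile size for background upsampler; 0 = no tiling")
    p.add_argument("--suffix", default=None,
                   help="Suffix of the restored faces")
    p.add_argument("--only_center_face", action="store_true",
                   help="Only restore the center face")
    p.add_argument("--aligned", action="store_true",
                   help="Inputs are aligned faces")
    p.add_argument("--ext", default="auto", choices=["auto", "jpg", "png"],
                   help="Image extension; auto = same as input")
    return p

args = build_parser().parse_args(["-i", "inputs/whole_imgs", "-v", "1.3", "-s", "2"])
print(args.input, args.version, args.upscale)
```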
If you want to use the original model in our paper, please see PaperModel.md for installation and inference.
Version | Model Name | Description |
---|---|---|
V1.3 | GFPGANv1.3.pth | Based on V1.2; more natural restoration results; better results on very low-quality / high-quality inputs. |
V1.2 | GFPGANCleanv1-NoCE-C2.pth | No colorization; no CUDA extensions are required. Trained with more data with pre-processing. |
V1 | GFPGANv1.pth | The paper model, with colorization. |
The comparisons are in Comparisons.md.
Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.
Version | Strengths | Weaknesses |
---|---|---|
V1.3 | ✓ natural outputs ✓ better results on very low-quality inputs ✓ works on relatively high-quality inputs ✓ allows repeated (twice) restorations | ✗ not very sharp ✗ slight change in identity |
V1.2 | ✓ sharper outputs ✓ with beauty makeup | ✗ some outputs are unnatural |
You can find more models (such as the discriminators) here: [Google Drive], OR [Tencent Cloud 腾讯微云]
We provide the training code for GFPGAN (used in our paper).
You can adapt it to your own needs.
Tips
Procedures
(You can try a simple version (options/train_gfpgan_v1_simple.yml) that does not require face component landmarks.)
Dataset preparation: FFHQ
Download pre-trained models and other data, and put them in the experiments/pretrained_models folder.
Modify the configuration file options/train_gfpgan_v1.yml accordingly.
Training
python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1.yml --launcher pytorch
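The launch command above starts one training process per GPU. As a plain single-node illustration of what `--nproc_per_node=4` implies (the actual rank assignment is handled by torch.distributed, not by this sketch):

```python
def process_layout(nproc_per_node, node_rank=0):
    """Enumerate (global_rank, local_rank) pairs for one node, illustrating
    the process layout created by torch.distributed launch utilities."""
    base = node_rank * nproc_per_node
    return [(base + i, i) for i in range(nproc_per_node)]

print(process_layout(4))  # 4 processes, one per GPU on a single node
```

Each process binds to the GPU matching its local rank; on multiple nodes, global ranks continue from `node_rank * nproc_per_node`.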
GFPGAN is released under Apache License Version 2.0.
@InProceedings{wang2021gfpgan,
author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}
If you have any questions, please email xintao.wang@outlook.com or xintaowang@tencent.com.