HP-VAE-GAN uses a single image or video sample to generate new samples that are different from, but similar to, the original.

Paper: Gur S., Benaim S., Wolf L. Hierarchical Patch VAE-GAN: Generating Diverse Videos from a Single Sample. 2020.

The BibTeX citation for this repository is as follows:
@article{hp-vae-gan,
title={Hierarchical Patch VAE-GAN: Generating Diverse Videos from a Single Sample},
journal={Github repository},
publisher={Github},
year={2022},
howpublished={\url{https://github.com/SakiRinn/mindspore-hp-vae-gan}}
}
The overall network architecture of HP-VAE-GAN is shown in the figure of the original paper.

The dataset is just a single image or video, which can be specified by the user. The sample data is located in the ./data folder.
├── LICENSE
├── README.md
├── ascend310_infer
│ ├── CMakeLists.txt
│ ├── build.sh
│ ├── inc
│ │ └── utils.h
│ └── src
│ ├── main.cc
│ └── utils.cc
├── data # Sample dataset
│ ├── imgs
│ │ └── air_balloons.jpg
│ └── vids
│ └── air_balloons.mp4
├── eval_image.py
├── eval_video.py
├── export.py
├── postprocess.py
├── preprocess.py
├── requirements.txt
├── scripts
│ ├── run_eval_ascend.sh # script for evaluation on Ascend 910
│ ├── run_infer_310.sh # script for inference on Ascend 310
│ └── run_train_ascend.sh # script for training on Ascend 910
├── src
│ ├── __init__.py
│ ├── datasets
│ │ ├── __init__.py
│ │ ├── generate_frames.py
│ │ ├── image.py
│ │ └── video.py
│ ├── modules
│ │ ├── __init__.py
│ │ ├── losses.py
│ │ ├── networks_2d.py
│ │ ├── networks_3d.py
│ │ └── optimizers.py
│ ├── sinFID
│ │ ├── __init__.py
│ │ ├── c3d.py
│ │ ├── fid_score.py
│ │ └── inception.py
│ ├── tools
│ │ ├── __init__.py
│ │ ├── pt2ms.py
│ │ ├── spectral_norm.py
│ │ └── trilinear.py
│ └── utils
│ ├── __init__.py
│ ├── extract.py
│ ├── images.py
│ ├── logger.py
│ ├── progress_bar.py
│ └── saver.py
├── train_image.py
├── train_video.py
└── train_video_baselines.py
You can start training using Python or shell scripts. The usage of the shell script is as follows:
sh scripts/run_train_ascend.sh IMAGE_PATH [DEVICE_ID]
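For example, to train on the bundled sample image from the ./data folder (the device ID 0 is an assumption; use whichever Ascend device is available):

```shell
# Train HP-VAE-GAN on the sample image using Ascend device 0
sh scripts/run_train_ascend.sh ./data/imgs/air_balloons.jpg 0
```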
- IMAGE_PATH: the filename of the training image.
- DEVICE_ID: the ID of the Ascend device (optional).

You can start evaluation using Python or shell scripts. The usage of the shell script is as follows:
sh scripts/run_eval_ascend.sh EXPERIMENT_DIR [DEVICE_ID]
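For example, assuming a training run wrote its outputs to a folder named ./experiments/air_balloons (a hypothetical path; substitute your actual output directory):

```shell
# Evaluate the trained model on Ascend device 0
sh scripts/run_eval_ascend.sh ./experiments/air_balloons 0
```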
- EXPERIMENT_DIR: the path to the training output directory.
- DEVICE_ID: the ID of the Ascend device (optional).

Export MindIR locally:
python export.py --exp-dir [EXP_DIR] --device-id [DEVICE_ID]
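For example, assuming the training output folder is ./experiments/air_balloons (a hypothetical path; substitute your own):

```shell
# Export the trained checkpoint to MindIR format using Ascend device 0
python export.py --exp-dir ./experiments/air_balloons --device-id 0
```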
- EXP_DIR: the path to the training output directory.
- DEVICE_ID: the ID of the Ascend device.

Before performing inference, the MindIR file must be exported by the export.py script. We only provide an example of inference using the MindIR model.
sh scripts/run_infer_image_310.sh EXPERIMENT_DIR [DEVICE_ID]
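For example, with a hypothetical training output folder ./experiments/air_balloons and Ascend 310 device 0:

```shell
# Run Ascend 310 inference against the exported MindIR model
sh scripts/run_infer_image_310.sh ./experiments/air_balloons 0
```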
- EXPERIMENT_DIR: the path to the training output directory.
- DEVICE_ID: the ID of the Ascend device (optional).

Please check the official homepage.