The authors propose a deep autoencoder-based approach that learns to identify signal features from low-light images (without hand-crafted features) and adaptively brightens images without over-amplifying the lighter parts (i.e., without saturating image pixels) in scenes with a high dynamic range. The trained network can then be applied to images from naturally low-light environments and/or images degraded by hardware.
Paper: Kin Gwn Lore, Adedotun Akintayo, Soumik Sarkar: LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement. 2015.
The overall structure of LLNet is shown below.
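A minimal sketch of that stacked-autoencoder structure, written with MindSpore layers: the layer widths here are illustrative assumptions (the real definition lives in src/llnet.py); only the 289-pixel (17x17 patch) input/output size is taken from the rest of this README.

```python
# Illustrative sketch only: layer widths are assumptions, not the values used in src/llnet.py.
import mindspore.nn as nn

class LLNetSketch(nn.Cell):
    """Stacked autoencoder: three encoding layers mirrored by three decoding layers."""
    def __init__(self, in_dim=289, hidden_dims=(867, 578, 289)):
        super().__init__()
        dims = (in_dim,) + tuple(hidden_dims)
        layers = []
        # encoding layers: 289 -> 867 -> 578 -> 289 (illustrative widths)
        for i in range(len(dims) - 1):
            layers += [nn.Dense(dims[i], dims[i + 1]), nn.Sigmoid()]
        # mirrored decoding layers back to the 289-pixel reconstruction
        for i in reversed(range(len(dims) - 1)):
            layers += [nn.Dense(dims[i + 1], dims[i]), nn.Sigmoid()]
        self.net = nn.SequentialCell(layers)

    def construct(self, x):
        return self.net(x)
```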
Dataset used: dbimagenes
.
└─llnet
  ├─ README.md
  ├─ README_CN.md
  ├─ ascend310_infer
  │  ├─ build.sh                             # build script
  │  ├─ CMakeLists.txt                       # CMakeLists
  │  ├─ inc
  │  │  └─ utils.h                           # utils header
  │  └─ src
  │     ├─ main.cc                           # main function of ascend310_infer
  │     └─ utils.cc                          # utility functions of ascend310_infer
  ├─ scripts
  │  ├─ run_standalone_train_for_ascend.sh   # launch standalone training on the Ascend 910 platform (1p)
  │  ├─ run_distribute_train_for_ascend.sh   # launch distributed training on the Ascend 910 platform (8p)
  │  ├─ run_standalone_train_for_gpu.sh      # launch standalone training on the GPU platform (1p)
  │  ├─ run_eval_for_ascend.sh               # launch evaluation on the Ascend 910 platform
  │  ├─ run_infer_310.sh                     # launch inference on the Ascend 310 platform
  │  └─ run_eval_for_gpu.sh                  # launch evaluation on the GPU platform
  ├─ src
  │  ├─ model_utils
  │  │  ├─ config.py                         # parameter configuration
  │  │  ├─ device_adapter.py                 # device adapter
  │  │  ├─ local_adapter.py                  # local adapter
  │  │  └─ moxing_adapter.py                 # moxing adapter
  │  ├─ dataset.py                           # data reading
  │  ├─ lr_generator.py                      # learning rate generator
  │  └─ llnet.py                             # network definition
  ├─ default_config.yaml                     # parameter configuration
  ├─ eval.py                                 # evaluate the network
  ├─ export.py                               # export checkpoint for ascend310_infer
  ├─ postprocess.py                          # post-processing for ascend310_infer
  ├─ preprocess.py                           # pre-processing for ascend310_infer
  ├─ requirements.txt                        # the pyaml package required by this network
  ├─ test.py                                 # test script for the LLNet network
  ├─ train.py                                # train the network
  └─ write_mindrecords.py                    # write the mindrecords for train, eval, and test
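For reference, a hedged sketch of how write_mindrecords.py could produce the MindRecord files using the standard mindspore.mindrecord.FileWriter API; the schema fields ("noisy", "clean") and the file name are hypothetical, the real ones are defined in write_mindrecords.py.

```python
# Hypothetical schema and file name; the real ones live in write_mindrecords.py.
import numpy as np
from mindspore.mindrecord import FileWriter

writer = FileWriter(file_name="train.mindrecord", shard_num=1)
schema = {"noisy": {"type": "float32", "shape": [289]},   # darkened/noisy 17x17 patch
          "clean": {"type": "float32", "shape": [289]}}   # ground-truth patch
writer.add_schema(schema, "llnet_patches")

# write a handful of random patch pairs just to show the API flow
samples = [{"noisy": np.random.rand(289).astype(np.float32),
            "clean": np.random.rand(289).astype(np.float32)}
           for _ in range(10)]
writer.write_raw_data(samples)
writer.commit()
```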
Parameters for both training and evaluation can be set in default_config.yaml.
'random_seed': 1,                         # fix the random seed
'rank': 0,                                # local rank for distributed training
'group_size': 1,                          # world size for distributed training
'work_nums': 8,                           # number of workers reading the data
'pretrain_epoch_size': 5,                 # number of epochs for pretraining
'finetrain_epoch_size': 300,              # number of epochs for fine-tuning
'keep_checkpoint_max': 20,                # maximum number of checkpoints to keep
'save_ckpt_path': './',                   # path for saving checkpoints
'train_batch_size': 500,                  # input batch size for training
'val_batch_size': 1250,                   # input batch size for evaluation
'lr_init': [0.01, 0.01, 0.001, 0.001],    # initial learning rates for pretraining the three layers and for fine-tuning
'weight_decay': 0.0,                      # weight decay
'momentum': 0.9,                          # momentum
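These values are read by src/model_utils/config.py; a minimal sketch of loading them directly with the pyaml package listed in requirements.txt (key names taken from the listing above):

```python
# Minimal sketch: load default_config.yaml directly.
# The repo's own loader is src/model_utils/config.py.
import yaml

with open("default_config.yaml", "r") as f:
    config = yaml.safe_load(f)

print(config["train_batch_size"])   # 500
print(config["lr_init"])            # [0.01, 0.01, 0.001, 0.001]
```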
# distributed training for Ascend (8p)
bash run_distribute_train_for_ascend.sh [RANK_TABLE_FILE] [DATASET_PATH]
# standalone training for Ascend
bash run_standalone_train_for_ascend.sh [DEVICE_ID] [DATASET_PATH]
# distributed training example(8p) for Ascend
bash run_distribute_train_for_ascend.sh /home/hccl_8p_01234567.json /dataset
# standalone training example for Ascend
bash run_standalone_train_for_ascend.sh 0 ../dataset
You can find the checkpoint files together with the results in the log.
# Evaluation
bash run_eval_for_ascend.sh [DEVICE_ID] [DATASET_PATH] [CHECKPOINT]
# Evaluation with checkpoint
bash run_eval_for_ascend.sh 5 ../dataset ./ckpt_5/llnet-rank5-286_408.ckpt
The evaluation results will be stored in the scripts path. There you can find results like the following in the log.
PSNR=21.593(dB) SSIM=0.617
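For reference, PSNR and SSIM can be computed roughly as sketched below; this is a hedged illustration with numpy and scikit-image, not necessarily the exact code in eval.py, and the 17x17 patch size only reflects the 289-pixel outputs reported later in this README.

```python
# Hedged illustration of the reported metrics; eval.py may compute them differently.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(clean, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((clean - restored) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.random.rand(17, 17).astype(np.float32)                 # ground-truth patch
restored = np.clip(clean + 0.05 * np.random.randn(17, 17), 0, 1)  # reconstructed patch
print(f"PSNR={psnr(clean, restored):.3f}(dB) "
      f"SSIM={structural_similarity(clean, restored, data_range=1.0):.3f}")
```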
Export MindIR locally
python export.py --device_target [PLATFORM] --device_id [DEVICE_ID] --checkpoint [CHECKPOINT_FILE] --file_format [FILE_FORMAT] --file_name [FILE_NAME]
The CHECKPOINT_FILE parameter is required.
- PLATFORM should be in ["Ascend", "GPU", "CPU"]
- DEVICE_ID should be in [0-7]
- FILE_FORMAT should be in ["AIR", "ONNX", "MINDIR"]; the default value is MINDIR
- FILE_NAME is the base name of the exported model; the default value is llnet
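The export step roughly corresponds to the standard MindSpore export API, sketched below; the LLNet import path, checkpoint name, and the 289-element input shape are assumptions based on the repository layout and this README, not verified against export.py.

```python
# Rough sketch of a MindIR export; names and shapes here are assumptions.
import numpy as np
from mindspore import Tensor, context, export, load_checkpoint, load_param_into_net
from src.llnet import LLNet   # hypothetical import based on the repository layout

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=0)

net = LLNet()
load_param_into_net(net, load_checkpoint("./ckpt_5/llnet-rank5-286_408.ckpt"))
dummy_input = Tensor(np.zeros((1, 289), np.float32))   # batch_size 1, 289 pixels per patch
export(net, dummy_input, file_name="llnet", file_format="MINDIR")
```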
Before performing inference, the MindIR file must be exported by the export.py script. We only provide an example of inference using the MINDIR model. Currently, batch_size can only be set to 1.
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [NEED_PREPROCESS] [DEVICE_ID]
- MINDIR_PATH should be the filename of the MINDIR model.
- DATASET_PATH should be the path of the dataset.
- NEED_PREPROCESS can be y or n; it should be y the first time the script is run.
- DEVICE_ID is optional; the default value is 0.

The inference results are saved in the current path; you can find results like the following in the acc.log file.
PSNR: 21.582 (dB)
SSIM: 0.604
| Parameters | Ascend 910 |
| --- | --- |
| Model Version | LLNet |
| Resource | Ascend 910 |
| Uploaded Date | 07/23/2022 (month/day/year) |
| MindSpore Version | 1.5.1 |
| Dataset | dbimagenes |
| Training Parameters | default_config.yaml |
| Optimizer | Adam |
| Loss Function | MSE |
| Loss | 0.0105 |
| Total time | 0 h 17 m 21 s 2ps |
| Checkpoint for Fine tuning | 21.5 M (.ckpt file) |
| Parameters | Ascend 910 |
| --- | --- |
| Model Version | LLNet |
| Resource | Ascend 910 |
| Uploaded Date | 07/23/2022 (month/day/year) |
| MindSpore Version | 1.5.1 |
| Dataset | dbimagenes |
| batch_size | 1250 |
| outputs | 289 pixels reconstructed |
| Accuracy | PSNR = 21.593 SSIM = 0.617 |
Please check the official MindSpore ModelZoo homepage.