Paper: Saurabh Singh, Shankar Krishnan. Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks. 2020.
ResNetV2-50-FRN uses the ResNetV2-50 architecture, with Filter Response Normalization (FRN) layers in place of batch normalization.
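As a rough illustration of what an FRN layer computes, here is a minimal NumPy sketch of FRN followed by its Thresholded Linear Unit (TLU), as defined in the paper. The function and parameter names and the NCHW layout are illustrative, not the repository's API; the actual implementation is in src/resnetv2_50_frn.py.

```python
import numpy as np

def frn_tlu(x, gamma, beta, tau, eps=1e-6):
    """Filter Response Normalization followed by a Thresholded Linear Unit.

    x: activations in NCHW layout, shape (N, C, H, W).
    gamma, beta, tau: learned per-channel parameters, shape (1, C, 1, 1).
    """
    # nu^2: mean of squared activations over the spatial dimensions,
    # computed per sample and per channel -- no batch statistics involved.
    nu2 = np.mean(np.square(x), axis=(2, 3), keepdims=True)
    # Normalize by the root mean square of the activations.
    x_hat = x / np.sqrt(nu2 + eps)
    # Per-channel affine transform, then the TLU: max(y, tau).
    y = gamma * x_hat + beta
    return np.maximum(y, tau)

# Example: one sample, two channels, 4x4 feature maps.
x = np.random.randn(1, 2, 4, 4).astype(np.float32)
gamma = np.ones((1, 2, 1, 1), dtype=np.float32)
beta = np.zeros((1, 2, 1, 1), dtype=np.float32)
tau = np.zeros((1, 2, 1, 1), dtype=np.float32)
print(frn_tlu(x, gamma, beta, tau).shape)  # (1, 2, 4, 4)
```

Because the normalization statistics are computed per sample, training behaves identically at any batch size, which is the batch independence the paper's title refers to.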
Dataset used: ImageNet
.
└─resnetv2_50_frn
  ├─README.md
  ├─scripts
  │ ├─run_standalone_train_for_ascend.sh  # launch standalone training with Ascend platform (1p)
  │ ├─run_distribute_train_for_ascend.sh  # launch distributed training with Ascend platform (8p)
  │ ├─run_standalone_train_for_gpu.sh     # launch standalone training with GPU platform (1p)
  │ ├─run_distribute_train_for_gpu.sh     # launch distributed training with GPU platform (8p)
  │ ├─run_eval_for_ascend.sh              # launch evaluation with Ascend platform
  │ └─run_eval_for_gpu.sh                 # launch evaluation with GPU platform
  ├─src
  │ ├─model_utils
  │ │ ├─config.py                         # parameter configuration
  │ │ ├─device_adapter.py                 # device adapter
  │ │ ├─local_adapter.py                  # local adapter
  │ │ └─moxing_adapter.py                 # moxing adapter
  │ ├─dataset.py                          # data preprocessing
  │ ├─lr_generator.py                     # learning rate generator
  │ └─resnetv2_50_frn.py                  # network definition
  ├─default_config.yaml                   # parameter configuration
  ├─export.py                             # convert checkpoint
  ├─eval.py                               # eval net
  └─train.py                              # train net
Parameters for both training and evaluating can be set in default_config.yaml.
'random_seed': 1, # fixed random seed
'rank': 0, # local rank for distributed training
'group_size': 1, # world size for distributed training
'work_nums': 8, # number of workers reading the data
'epoch_size': 240, # total number of epochs
'keep_checkpoint_max': 20, # maximum number of checkpoints to keep
'save_ckpt_path': './', # path for saving checkpoints
'train_batch_size': 32, # input batch size for training
'val_batch_size': 125, # input batch size for evaluation
'num_classes': 1000, # number of dataset classes
'lr_init': 0.025, # initial learning rate
'weight_decay': 0.0001, # weight decay
'momentum': 0.9, # momentum
'cutout': True, # whether to apply cutout to the training data (see the sketch below)
'coutout_leng': 56, # side length of the cutout patch when cutout is True
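For intuition about the last two options, here is a minimal NumPy sketch of the standard cutout augmentation they control. The repository's actual version lives in src/dataset.py, so details such as patch placement and border handling are assumptions here.

```python
import numpy as np

def cutout(image, length, rng=None):
    """Zero out one random square patch of side `length` (cutout augmentation).

    image: HWC array; length: patch side, e.g. coutout_leng = 56.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    # Pick the patch center uniformly; clip the patch to the image borders,
    # so patches near an edge are partially cut off rather than shifted.
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = max(cy - length // 2, 0), min(cy + length // 2, h)
    x1, x2 = max(cx - length // 2, 0), min(cx + length // 2, w)
    out = image.copy()
    out[y1:y2, x1:x2, :] = 0
    return out

# Example: apply a 56-pixel cutout to a 224x224 RGB image.
img = np.random.rand(224, 224, 3).astype(np.float32)
aug = cutout(img, length=56)
```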
Ascend:
# distributed training example (8p)
bash run_distribute_train_for_ascend.sh DATA_DIR
# standalone training
bash run_standalone_train_for_ascend.sh DEVICE_ID DATA_DIR
# distributed training example (8p) for Ascend
bash scripts/run_distribute_train_for_ascend.sh /dataset
# standalone training example for Ascend
bash scripts/run_standalone_train_for_ascend.sh 0 /dataset
You can find the checkpoint files together with the results in the log.
# Evaluation
bash run_eval_for_ascend.sh DEVICE_ID DATA_DIR PATH_CHECKPOINT
# Evaluation with checkpoint
bash scripts/run_eval_for_ascend.sh 0 /dataset ./ckpt_0/resnetv2-50-frn-rank0-240_5005.ckpt
Evaluation results will be stored in the scripts path, where you can find results like the following in the log.
acc=77.4% (TOP1, size: 224*224)
acc=78.3% (TOP1, size: 299*299)
| Parameters | Ascend 910 |
| --- | --- |
| Model Version | ResNetV2-50-FRN |
| Resource | Ascend 910 |
| Uploaded Date | 10/20/2021 (month/day/year) |
| MindSpore Version | 1.3.0 |
| Dataset | ImageNet |
| Training Parameters | default_config.yaml |
| Optimizer | SGD |
| Loss Function | SoftmaxCrossEntropyWithLogits |
| Loss | 0.9140 |
| Total time | 50.5 h (8p) |
| Checkpoint for Fine tuning | 311.9 M (.ckpt file) |
| Parameters | Ascend 910 |
| --- | --- |
| Model Version | ResNetV2-50-FRN |
| Resource | Ascend 910 |
| Uploaded Date | 10/20/2021 (month/day/year) |
| MindSpore Version | 1.3.0 |
| Dataset | ImageNet |
| batch_size | 125 |
| outputs | probability |
| Accuracy | acc=77.4% (TOP1, size: 224*224) |
| Accuracy | acc=78.3% (TOP1, size: 299*299) |
Please check the official homepage.