Learning Transferable Architectures for Scalable Image Recognition
Neural architecture search (NAS) demonstrates the flexibility of automated model configuration. By searching over a pool of candidate operations that includes convolution, max pooling, and average pooling layers, a normal cell and a reduction cell are selected as the building blocks of NASNet. Figure 1 shows the NASNet architecture for ImageNet, which is built by stacking reduction cells and normal cells.
In conclusion, NASNet achieves better model performance with fewer parameters and lower computation cost on image classification,
compared with previous state-of-the-art methods on the ImageNet-1K dataset.[1]
Figure 1. Architecture of NASNet [1]
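The stacking pattern described above can be sketched in a few lines of plain Python. This is only a schematic illustration of how normal and reduction cells alternate, not the actual MindCV implementation; the cell counts below are placeholders.

```python
# Schematic sketch (not the MindCV implementation): NASNet stacks groups of
# repeated normal cells, with a reduction cell between groups to halve the
# spatial resolution. The cell counts here are illustrative placeholders.
def nasnet_cell_sequence(normal_cells_per_block=4, num_blocks=3):
    """Return the cell types in stacking order for a NASNet-style backbone."""
    cells = []
    for block in range(num_blocks):
        cells.extend(["normal"] * normal_cells_per_block)
        if block < num_blocks - 1:
            cells.append("reduction")  # downsample between groups of normal cells
    return cells


if __name__ == "__main__":
    print(nasnet_cell_sequence())
```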
Our reproduced model performance on ImageNet-1K is reported as follows.
| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
| --- | --- | --- | --- | --- | --- | --- |
| nasnet_a_4x1056 | D910x8-G | 73.65 | 91.25 | 5.33 | yaml | weights |
Please refer to the installation instructions in MindCV.
Please download the ImageNet-1K dataset for model training and validation.
It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
# distributed training on multiple GPU/Ascend devices
mpirun -n 8 python train.py --config configs/nasnet/nasnet_a_4x1056_ascend.yaml --data_dir /path/to/imagenet
```
If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`.
Similarly, you can train the model on multiple GPU devices with the above `mpirun` command.
For a detailed description of all hyper-parameters, please refer to `config.py`.
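For a quick look at the recipe values, the YAML file can also be loaded directly. The snippet below is a small convenience sketch and assumes PyYAML is installed and the command is run from the repository root; `config.py` remains the authoritative list of supported keys.

```python
# Quick inspection of the training recipe (assumes PyYAML is installed and the
# script is run from the MindCV repository root).
import yaml

with open("configs/nasnet/nasnet_a_4x1056_ascend.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in sorted(cfg.items()):
    print(f"{key}: {value}")
```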
Note: As the global batch size (batch_size x num_devices) is an important hyper-parameter, it is recommended to keep the global batch size unchanged for reproduction or adjust the learning rate linearly to a new global batch size.
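As an example, the linear scaling rule mentioned in the note can be computed as follows. The base values below are illustrative placeholders, not the ones from `nasnet_a_4x1056_ascend.yaml`.

```python
# Linear learning-rate scaling sketch. The base values are hypothetical
# placeholders; take the real ones from the training recipe.
base_lr = 0.8             # learning rate tuned for the original global batch size
base_global_batch = 256   # original batch_size x num_devices
new_global_batch = 128    # e.g. batch_size 16 on 8 devices

scaled_lr = base_lr * new_global_batch / base_global_batch
print(f"Use lr={scaled_lr} for a global batch size of {new_global_batch}")
```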
If you want to train or finetune the model on a smaller dataset without distributed training, please run:
```shell
# standalone training on a CPU/GPU/Ascend device
python train.py --config configs/nasnet/nasnet_a_4x1056_ascend.yaml --data_dir /path/to/dataset --distribute False
```
To validate the accuracy of the trained model, you can use `validate.py` and pass the checkpoint path with `--ckpt_path`.
```shell
python validate.py -c configs/nasnet/nasnet_a_4x1056_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
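Beyond computing validation metrics, the checkpoint can also be loaded for quick inference. The snippet below is a hedged sketch: it assumes the model is registered in MindCV as `nasnet_a_4x1056` (matching the table above) and uses a random 224x224 input in place of a properly preprocessed image from the validation recipe.

```python
# Minimal inference sketch with the trained checkpoint (assumptions: the model
# is registered as "nasnet_a_4x1056" and takes a 224x224 input; replace the
# random tensor with a properly normalized image).
import numpy as np
import mindspore as ms
from mindcv.models import create_model

network = create_model("nasnet_a_4x1056", num_classes=1000)
ms.load_param_into_net(network, ms.load_checkpoint("/path/to/ckpt"))
network.set_train(False)

image = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
logits = network(image)
print("predicted class id:", int(logits.argmax(axis=1).asnumpy()[0]))
```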
Please refer to the deployment tutorial in MindCV.
[1] Zoph B, Vasudevan V, Shlens J, et al. Learning transferable architectures for scalable image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 8697-8710.