MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm, and then subsequently improved through novel architecture advances.

Paper: Howard, Andrew, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang et al. "Searching for MobileNetV3." In Proceedings of the IEEE International Conference on Computer Vision, pp. 1314-1324. 2019.
The overall network architecture of MobileNetV3 is shown below:

Dataset used: ImageNet
├── MobileNetV3
├── README.md                    # description of MobileNetV3
├── scripts
│   ├──run_train.sh              # shell script for training
│ ├──run_eval.sh # shell script for evaluation
│ ├──run_infer_310.sh # shell script for inference
│ ├──run_onnx.sh # shell script for onnx inference
├── src
│ ├──config.py # parameter configuration
│ ├──dataset.py # creating dataset
│ ├──lr_generator.py # learning rate config
│ ├──mobilenetV3.py # MobileNetV3 architecture
├── train.py # training script
├── eval.py # evaluation script
├── infer_onnx.py # onnx inference script
├── export.py # export mindir script
├── preprocess.py # inference data preprocess script
├── postprocess.py # inference result calculation script
├── mindspore_hub_conf.py # mindspore hub interface
You can start training using python or shell scripts. The usage of the shell scripts is as follows:
# training example
python:
GPU: python train.py --dataset_path ~/imagenet/train/ --device_target GPU
CPU: python train.py --dataset_path ~/cifar10/train/ --device_target CPU
shell:
GPU: bash run_train.sh GPU 8 0,1,2,3,4,5,6,7 ~/imagenet/train/
CPU: bash run_train.sh CPU ~/cifar10/train/
Training results will be stored in the example path. Checkpoints will be stored at ./checkpoint by default, and the training log will be redirected to ./train/train.log, like the following:
epoch: [ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
epoch time: 140522.500, per step time: 224.836, avg loss: 5.258
epoch: [ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
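As a quick sanity check on the log above, the "per step time" field is simply the epoch time divided by the number of steps (625 steps per epoch here). A minimal sketch (the helper name is ours, not part of the repo):

```python
def per_step_time(epoch_time_ms: float, steps: int) -> float:
    """Average time per training step in milliseconds, as printed in the log."""
    return round(epoch_time_ms / steps, 3)

# Reproduce the "per step time" values from the two epochs above.
print(per_step_time(140522.500, 625))  # 224.836
print(per_step_time(138331.250, 625))  # 221.33
```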
You can start evaluation using python or shell scripts. The usage of the shell scripts is as follows:
# infer example
python:
GPU: python eval.py --dataset_path ~/imagenet/val/ --checkpoint_path mobilenet_199.ckpt --device_target GPU
CPU: python eval.py --dataset_path ~/cifar10/val/ --checkpoint_path mobilenet_199.ckpt --device_target CPU
shell:
GPU: bash run_infer.sh GPU ~/imagenet/val/ ~/train/mobilenet-200_625.ckpt
CPU: bash run_infer.sh CPU ~/cifar10/val/ ~/train/mobilenet-200_625.ckpt
The checkpoint can be produced during the training process.

Inference results will be stored in the example path; you can find results like the following in val.log.
result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ckpt
python export.py --checkpoint_path [CKPT_PATH] --device_target [DEVICE] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
The checkpoint_path parameter is required. DEVICE should be in ['Ascend', 'GPU', 'CPU']. FILE_FORMAT should be "MINDIR".
Before performing inference, the MINDIR file must be exported by the export.py script. We only provide an example of inference using the MINDIR model. Currently, batch_size for the imagenet2012 dataset can only be set to 1.
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
MINDIR_PATH specifies the path of the MINDIR model. DATA_PATH specifies the path of the ImageNet dataset. DEVICE_ID is optional; the default value is 0.

Inference results are saved in the current path; you can find results like the following in the acc.log file.
Eval: top1_correct=37051, tot=50000, acc=74.10%
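The accuracy in acc.log is just top-1 correct predictions over the total sample count, formatted to two decimals. A self-contained sketch of that calculation (a hypothetical helper, not the actual postprocess.py code):

```python
def format_acc(top1_correct: int, total: int) -> str:
    """Render an acc.log-style summary line from raw counts."""
    acc = top1_correct / total * 100
    return f"Eval: top1_correct={top1_correct}, tot={total}, acc={acc:.2f}%"

print(format_acc(37051, 50000))  # Eval: top1_correct=37051, tot=50000, acc=74.10%
```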
Before inferring, you need to export the ONNX model:

python export.py --checkpoint_path [ckpt_path] --device_target "GPU" --file_name mobilenetv3.onnx --file_format "ONNX"
python3 infer_onnx.py --onnx_path 'mobilenetv3.onnx' --dataset_path './imagenet/val'
bash ./scripts/run_onnx.sh [ONNX_PATH] [DATASET_PATH] [PLATFORM]
Inference results are output to infer_onnx.log.
Note 1: the above scripts need to be run in the mobilenetv3 directory.
Note 2: the validation dataset needs to be organized as one sub-folder per class. For example,
imagenet
-- val
-- n01985128
-- n02110063
-- n03041632
-- ...
Note 3: PLATFORM only supports CPU and GPU; the default value is GPU.
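Since the ONNX inference script expects the validation set as one sub-folder per class (Note 2), a quick pre-flight check can catch a flat directory before a long run. This is a hypothetical helper, not part of this repo:

```python
import os
import tempfile

def check_val_layout(val_dir: str) -> bool:
    """Return True if every entry under val_dir is a class sub-folder."""
    entries = os.listdir(val_dir)
    return bool(entries) and all(
        os.path.isdir(os.path.join(val_dir, e)) for e in entries
    )

# Build a tiny mock layout matching the tree in Note 2.
root = tempfile.mkdtemp()
for wnid in ("n01985128", "n02110063", "n03041632"):
    os.makedirs(os.path.join(root, wnid))
print(check_val_layout(root))  # True
```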
The inference results are printed on the command line, as follows:
ACC_TOP1 = 0.74436
ACC_TOP5 = 0.91762
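ACC_TOP1 and ACC_TOP5 are the standard top-k accuracies: a sample counts as correct if its true label is among the k highest-scoring classes. A self-contained NumPy sketch of the metric (independent of this repo's code):

```python
import numpy as np

def topk_accuracy(logits: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true label is among the k largest logits."""
    topk = np.argsort(logits, axis=1)[:, -k:]        # indices of the k largest
    hits = (topk == labels[:, None]).any(axis=1)
    return float(hits.mean())

logits = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 1, 0])
print(topk_accuracy(logits, labels, 1))  # 1 of 3 samples correct
print(topk_accuracy(logits, labels, 2))  # 2 of 3 samples correct
```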
| Parameters | MobileNetV3 |
| --- | --- |
| Model Version | large |
| Resource | NV SMX2 V100-32G |
| Uploaded Date | 07/05/2021 |
| MindSpore Version | 1.3.0 |
| Dataset | ImageNet |
| Training Parameters | src/config.py |
| Optimizer | Momentum |
| Loss Function | SoftmaxCrossEntropy |
| Outputs | probability |
| Loss | 1.913 |
| Accuracy | ACC1[77.57%] ACC5[92.51%] |
| Total Time | 1433 min |
| Params (M) | 5.48 M |
| Checkpoint for Fine Tuning | 44 M |
| Scripts | Link |
In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
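The point of fixing seeds is that shuffling and augmentation become reproducible across runs. The scripts use MindSpore's own seeding; the sketch below only illustrates the general idea with Python's stdlib random module:

```python
import random

def make_shuffled_indices(seed: int, n: int) -> list:
    """Deterministic shuffle: the same seed yields the same order every run."""
    random.seed(seed)
    idx = list(range(n))
    random.shuffle(idx)
    return idx

# Re-running with the same seed reproduces the exact dataset order.
print(make_shuffled_indices(1, 5) == make_shuffled_indices(1, 5))  # True
```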
Please check the official homepage.