# EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications
In the pursuit of ever-increasing accuracy, large and complex neural networks are usually developed. Such models demand high computational resources and therefore cannot be deployed on edge devices. It is of great interest to build resource-efficient general-purpose networks due to their usefulness in several application areas. In this work, we strive to effectively combine the strengths of both CNN and Transformer models and propose a new efficient hybrid architecture, EdgeNeXt. Specifically, in EdgeNeXt we introduce a split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups and utilizes depth-wise convolution along with self-attention across the channel dimension to implicitly increase the receptive field and encode multi-scale features.
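The channel-split and cross-channel attention described above can be illustrated with a minimal NumPy sketch. This is not the repository's MindSpore implementation: the function names, the identity Q/K/V projections, and the omission of the depth-wise convolutions and learned weights are all simplifications for illustration.

```python
import numpy as np

def transpose_attention(x):
    """Self-attention across the channel dimension (a C x C attention map).

    x: (N, C) array of N spatial tokens with C channels.
    Identity Q/K/V projections are assumed for brevity.
    """
    q = k = v = x
    # L2-normalize along tokens so channel-to-channel similarity stays bounded
    qn = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-6)
    kn = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-6)
    attn = qn.T @ kn                                    # (C, C)
    attn = np.exp(attn - attn.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)            # softmax over channels
    return v @ attn.T                                   # (N, C)

def sdta_sketch(x, groups=2):
    """Split channels into groups and attend within each group."""
    outs = [transpose_attention(g) for g in np.split(x, groups, axis=1)]
    return np.concatenate(outs, axis=1)
```

Because the attention map is C x C rather than N x N, the cost grows linearly with the number of tokens, which is what makes this form of attention attractive for high-resolution inputs on edge devices.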
| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Train T. | Infer T. | Download | Config | Log |
|---|---|---|---|---|---|---|---|---|---|
| edgenext_small | D910x8-G | 79.146 | 94.394 | 5.59 | 518s/epoch | 238.6ms/step | model | cfg | log |
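The Top-1 and Top-5 columns report standard top-k classification accuracy on the ImageNet-1K validation split. For reference, a hypothetical helper (not part of MindCV) that computes it from raw logits:

```python
import numpy as np

def topk_accuracy(logits, labels, k):
    """Fraction of samples whose true label is among the k highest logits.

    logits: (batch, num_classes) scores; labels: (batch,) integer class ids.
    """
    topk = np.argsort(logits, axis=1)[:, -k:]          # indices of k best classes
    hits = np.any(topk == labels[:, None], axis=1)
    return hits.mean()
```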
Please refer to the installation instructions in MindCV.
Please download the ImageNet-1K dataset for model training and validation.
Hyper-parameters. The hyper-parameter configurations for producing the reported results are stored in the yaml files in the `mindcv/configs/edgenext` folder. For example, to train with one of these configurations, you can run:
```shell
# train edgenext_small on 8 Ascend devices
mpirun -n 8 python train.py -c configs/edgenext/edgenext_small_ascend.yaml --data_dir /path/to/imagenet_dir
```
Note that the number of GPUs/Ascend devices and the batch size both influence the training results. To reproduce the reported results as closely as possible, it is recommended to use the same number of devices and the same batch size.
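One reason device count and batch size matter is that together they determine the global batch size, and the learning rate is commonly scaled with it. As a rule of thumb (not necessarily the exact adjustment MindCV applies), the linear scaling rule can be sketched with a hypothetical helper:

```python
def scale_lr(base_lr, base_batch, num_devices, per_device_batch):
    """Linear scaling rule: learning rate grows with the global batch size."""
    global_batch = num_devices * per_device_batch
    return base_lr * global_batch / base_batch
```

For example, a recipe tuned for a global batch of 256 would roughly double its learning rate when run on 8 devices with a per-device batch of 64 (global batch 512).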
Detailed adjustable parameters and their default values can be seen in `config.py`.
To validate the trained model, you can use `validate.py`. Here is an example for edgenext_small to verify the accuracy of the pretrained weights:

```shell
python validate.py --model=edgenext_small --data_dir=imagenet_dir --val_split=val --ckpt_path
```
Please refer to the deployment tutorial in MindCV.