# Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
PVT is a general backbone network for dense prediction that does not rely on convolution operations. It introduces a pyramid structure into the Transformer to generate multi-scale feature maps for dense prediction tasks. PVT uses a progressive shrinking strategy to control the size of the feature maps through its patch embedding layers, and proposes a spatial-reduction attention (SRA) layer to replace the traditional multi-head attention layer in the encoder, which greatly reduces the compute/memory overhead.
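To make the SRA idea concrete, here is a minimal sketch of such a layer, assuming MindSpore 2.x; the class and parameter names are illustrative, not the exact MindCV implementation.

```python
# Illustrative sketch of spatial-reduction attention (SRA), assuming MindSpore 2.x.
import mindspore.nn as nn
import mindspore.ops as ops


class SpatialReductionAttention(nn.Cell):
    def __init__(self, dim, num_heads=8, sr_ratio=2):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Dense(dim, dim)
        self.kv = nn.Dense(dim, dim * 2)
        self.proj = nn.Dense(dim, dim)
        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # strided conv shrinks the key/value token grid by sr_ratio^2
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm((dim,))

    def construct(self, x, h, w):
        b, n, c = x.shape
        q = self.q(x).reshape(b, n, self.num_heads, self.head_dim).transpose(0, 2, 1, 3)
        if self.sr_ratio > 1:
            # spatially reduce the tokens before computing keys/values
            x_ = x.transpose(0, 2, 1).reshape(b, c, h, w)
            x_ = self.sr(x_).reshape(b, c, -1).transpose(0, 2, 1)
            x = self.norm(x_)
        kv = self.kv(x).reshape(b, -1, 2, self.num_heads, self.head_dim).transpose(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]
        attn = ops.softmax(ops.matmul(q, k.transpose(0, 1, 3, 2)) * self.scale, axis=-1)
        out = ops.matmul(attn, v).transpose(0, 2, 1, 3).reshape(b, n, c)
        return self.proj(out)
```

For example, on a 56x56 token grid with `sr_ratio=2`, the key/value sequence length drops from 3136 to 784, roughly a 4x reduction in attention cost.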
| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Train T. | Infer T. | Download | Config | Log |
|---|---|---|---|---|---|---|---|---|---|
| PVT_tiny | D910x8-G | 74.92 | | | 433s/epoch | 16ms/step | model | cfg | log |
| PVT_small | D910x8-G | 79.66 | | | 538s/epoch | 30ms/step | model | cfg | log |
| PVT_medium | D910x8-G | 81.82 | | | 766s/epoch | 47ms/step | model | cfg | log |
| PVT_large | D910x8-G | 81.75 | | | 1074s/epoch | 67ms/step | model | cfg | log |
Please refer to the installation instructions in MindCV.
Please download the ImageNet-1K dataset for model training and validation.
Hyper-parameters. The hyper-parameter configurations for producing the reported results are stored in the yaml files in the `mindcv/configs/pvt` folder. For example, to train with one of these configurations, you can run:
# train pvt_tiny on 8 GPUs
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
mpirun -n 8 python train.py -c configs/pvt/pvt_tiny_ascend.yaml --data_dir /path/to/imagenet
Note that the number of GPUs/Ascend devices and the batch size will influence the training results. To reproduce the reported results as closely as possible, it is recommended to use the same number of GPUs/Ascend devices with the same batch size. A sketch of the data-parallel setup that this launch relies on is shown below.
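For reference, this is a minimal sketch of the data-parallel initialization that the `mpirun -n 8` launch relies on, assuming MindSpore's communication API; `train.py` performs the equivalent internally, so this is illustrative only.

```python
# Illustrative data-parallel setup for an mpirun-launched MindSpore job.
import mindspore as ms
from mindspore.communication import init, get_group_size
from mindspore.context import ParallelMode

ms.set_context(mode=ms.GRAPH_MODE)
init()  # initialize the collective backend (HCCL on Ascend, NCCL on GPU)
device_num = get_group_size()
ms.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                             gradients_mean=True,
                             device_num=device_num)
# each process then consumes a distinct shard of the dataset, so the
# effective global batch size is the per-device batch size times device_num
```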
Finetuning. Here is an example of finetuning a pretrained pvt_tiny on the CIFAR-10 dataset using the Momentum optimizer.
python train.py --model=pvt_tiny --pretrained --opt=momentum --lr=0.001 --dataset=cifar10 --num_classes=10 --dataset_download
Detailed adjustable parameters and their default values can be seen in `config.py`.
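As a rough programmatic equivalent of the finetuning command above, the following sketch assumes the `mindcv.create_model` API; the exact keyword names may differ across versions.

```python
# Hedged sketch: build a pretrained pvt_tiny with a 10-class head
# and a Momentum optimizer, mirroring the CLI flags above.
import mindspore.nn as nn
import mindcv

# load ImageNet-pretrained weights and replace the classifier for 10 classes
model = mindcv.create_model('pvt_tiny', pretrained=True, num_classes=10)
optimizer = nn.Momentum(model.trainable_params(), learning_rate=0.001, momentum=0.9)
```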
To validate the model, you can use `validate.py`. Here is an example for pvt_tiny to verify the accuracy of the pretrained weights.
python validate.py --model=pvt_tiny --dataset=imagenet --val_split=val --pretrained
Similarly, to verify the accuracy of your own training, run `validate.py` with the checkpoint path (a sketch of loading such a checkpoint programmatically follows below):
python validate.py --model=pvt_tiny --dataset=imagenet --val_split=val --ckpt_path='./ckpt/pvt_tiny-best.ckpt'
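If you want to load the trained checkpoint manually instead of going through `validate.py`, a minimal sketch looks like this, assuming `mindcv.create_model` plus standard MindSpore checkpoint utilities.

```python
# Hedged sketch: restore a trained checkpoint into a pvt_tiny network.
import mindspore as ms
import mindcv

net = mindcv.create_model('pvt_tiny', num_classes=1000)
param_dict = ms.load_checkpoint('./ckpt/pvt_tiny-best.ckpt')
ms.load_param_into_net(net, param_dict)
net.set_train(False)  # switch to inference mode before evaluation
```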
Please refer to the deployment tutorial in MindCV.