# Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
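As a rough illustration of the core mechanism, here is a minimal PyTorch sketch of window partitioning and the cyclic shift (illustrative only, not the official implementation; the real model additionally masks attention so that shifted windows do not mix wrapped-around regions):

```python
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping windows of
    shape (num_windows * B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

# Regular windows: self-attention runs independently inside each 7x7 window,
# so cost grows linearly with the number of windows (i.e. with image size).
x = torch.randn(2, 56, 56, 96)          # e.g. a stage-1 Swin-T feature map
windows = window_partition(x, 7)        # (2 * 8 * 8, 7, 7, 96)

# Shifted windows (next layer): cyclically shift by half a window before
# partitioning, so the new windows straddle the previous window boundaries
# and information flows across them.
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))
shifted_windows = window_partition(shifted, 7)
```

Restricting attention to fixed-size windows is what yields the linear complexity claimed above; alternating the shift between consecutive layers restores cross-window communication.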
| Backbone | Pretrain    | Lr schd | Multi-scale crop | FP16 | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download     |
| :------: | :---------: | :-----: | :--------------: | :--: | :------: | :------------: | :----: | :-----: | :----: | :----------: |
| Swin-T   | ImageNet-1K | 1x      | no               | no   | 7.6      |                | 42.7   | 39.3    | config | model \| log |
| Swin-T   | ImageNet-1K | 3x      | yes              | no   | 10.2     |                | 46.0   | 41.6    | config | model \| log |
| Swin-T   | ImageNet-1K | 3x      | yes              | yes  | 7.8      |                | 46.0   | 41.7    | config | model \| log |
| Swin-S   | ImageNet-1K | 3x      | yes              | yes  | 11.9     |                | 48.2   | 43.2    | config | model \| log |
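The FP16 rows correspond to the `*_fp16_*` configs. In mmdetection 2.x, mixed precision is typically switched on with a top-level `fp16` field; the following is a hedged sketch, and the exact loss-scale settings in the shipped configs may differ:

```python
# Minimal fp16 variant built on top of the fp32 3x config (illustrative).
_base_ = './mask_rcnn_swin-t-p4-w7_fpn_ms-crop-3x_coco.py'

# Enable automatic mixed-precision training with a dynamic loss scale.
fp16 = dict(loss_scale=dict(init_scale=512))
```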
Please follow the example of `retinanet_swin-t-p4-w7_fpn_1x_coco.py` when you want to combine Swin Transformer with a one-stage detector. Because there is a layer norm at each output of Swin Transformer, you must set `start_level` to 0 in the FPN, so we have to set the `out_indices` of the backbone to `[1, 2, 3]`; a condensed sketch follows.
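A condensed view of what such a config looks like (based on the referenced `retinanet_swin-t-p4-w7_fpn_1x_coco.py`; the base files, channel widths, and pretrained-weight URL follow mmdetection's conventions and may differ between versions):

```python
_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]

# ImageNet-1K pretrained Swin-T checkpoint (URL as published upstream).
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth'

model = dict(
    backbone=dict(
        _delete_=True,  # replace the ResNet-50 backbone from the base config
        type='SwinTransformer',
        embed_dims=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        # Only the stages the FPN consumes; unused outputs would leave
        # parameters without gradients.
        out_indices=(1, 2, 3),
        convert_weights=True,
        init_cfg=dict(type='Pretrained', checkpoint=pretrained)),
    neck=dict(
        in_channels=[192, 384, 768],  # channel widths of Swin-T stages 1-3
        start_level=0,  # required because every Swin output is layer-normed
        num_outs=5))
```

Training then works as for any other config, e.g. `python tools/train.py configs/swin/retinanet_swin-t-p4-w7_fpn_1x_coco.py`.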
@article{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal={arXiv preprint arXiv:2103.14030},
  year={2021}
}