# HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
Recent progress in vision Transformers exhibits great success in various tasks driven by the new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind the vision Transformers, namely input-adaptive, long-range and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present the Recursive Gated Convolution (gnConv) that performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable, which is compatible with various variants of convolution and extends the two-order interactions in self-attention to arbitrary orders without introducing significant extra computation. gnConv can serve as a plug-and-play module to improve various vision Transformers and convolution-based models. Based on the operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection and ADE20K semantic segmentation show that HorNet outperforms Swin Transformers and ConvNeXt by a significant margin with similar overall architecture and training configurations. HorNet also shows favorable scalability to more training data and a larger model size. Apart from its effectiveness in visual encoders, we also show that gnConv can be applied to task-specific decoders and consistently improve dense prediction performance with less computation. Our results demonstrate that gnConv can be a new basic module for visual modeling that effectively combines the merits of both vision Transformers and CNNs. Code is available at https://github.com/raoyongming/HorNet.
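The recursion behind gnConv can be sketched as follows. This is an illustrative NumPy toy, not the official implementation: weights are random rather than learned, the spatial mixer is a plain depthwise 3x3 convolution (the GF variants replace it with a global filter in the Fourier domain), and the paper's scaling and normalization details are omitted. All function names here are our own.

```python
import numpy as np

def depthwise_conv3x3(x, w):
    """Per-channel 3x3 convolution with zero padding. x: (C, H, W), w: (C, 3, 3)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * xp[:, i:i + H, j:j + W]
    return out

def gnconv(x, order=3, seed=0):
    """Toy recursive gated convolution on a (C, H, W) feature map with random weights."""
    rng = np.random.default_rng(seed)
    C, H, W = x.shape
    # Channel widths double at each recursion step: [C/4, C/2, C] for order 3,
    # so the cost of higher-order terms stays bounded.
    dims = [C // 2 ** (order - 1 - k) for k in range(order)]
    total = sum(dims)
    # Input 1x1 projection produces p0 plus the full stack of gating features.
    w_in = rng.standard_normal((dims[0] + total, C)) / np.sqrt(C)
    y = np.einsum('oc,chw->ohw', w_in, x)
    p = y[:dims[0]]
    # One depthwise conv mixes all gating channels spatially at once.
    q = depthwise_conv3x3(y[dims[0]:], rng.standard_normal((total, 3, 3)) / 9.0)
    gates = np.split(q, np.cumsum(dims)[:-1], axis=0)
    for k in range(order):
        p = gates[k] * p  # elementwise gating: raises the interaction order by one
        if k < order - 1:
            # 1x1 projection to the next (doubled) channel width
            w_k = rng.standard_normal((dims[k + 1], dims[k])) / np.sqrt(dims[k])
            p = np.einsum('oc,chw->ohw', w_k, p)
    # Output 1x1 projection back to C channels
    w_out = rng.standard_normal((C, dims[-1])) / np.sqrt(dims[-1])
    return np.einsum('oc,chw->ohw', w_out, p)
```

Each loop iteration multiplies the running feature `p` by a fresh spatially mixed gate, so after `order` steps the output contains products of up to `order` spatial features, the high-order interaction the paper contrasts with the fixed two-order interaction of self-attention.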
| Model | Pretrain | Resolution | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HorNet-T\* | From scratch | 224x224 | 22.41 | 3.98 | 82.84 | 96.24 | config | model |
| HorNet-T-GF\* | From scratch | 224x224 | 22.99 | 3.90 | 82.98 | 96.38 | config | model |
| HorNet-S\* | From scratch | 224x224 | 49.53 | 8.83 | 83.79 | 96.75 | config | model |
| HorNet-S-GF\* | From scratch | 224x224 | 50.40 | 8.71 | 83.98 | 96.77 | config | model |
| HorNet-B\* | From scratch | 224x224 | 87.26 | 15.59 | 84.24 | 96.94 | config | model |
| HorNet-B-GF\* | From scratch | 224x224 | 88.42 | 15.42 | 84.32 | 96.95 | config | model |
\*Models with \* are converted from the official repo. The config files of these models are provided for validation only; we do not guarantee their training accuracy, and we welcome you to contribute your reproduction results.
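For orientation, a classification config in this repo roughly follows the MMClassification pattern sketched below. This is a hypothetical excerpt, not a copy of any file here: keys such as `arch='tiny'`, `drop_path_rate`, and `in_channels=512` are assumptions for illustration, so check the actual hornet-tiny_8xb128_in1k.py for the real values.

```python
# Hypothetical excerpt in the style of an MMClassification model config.
# Field values below are illustrative assumptions, not the repo's exact settings.
model = dict(
    type='ImageClassifier',
    backbone=dict(type='HorNet', arch='tiny', drop_path_rate=0.2),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=512,  # final stage width of the tiny variant (assumed)
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
    ),
)
```

The filename convention encodes the training recipe: `8xb128` means 8 GPUs with a batch size of 128 per GPU, and `in1k` means ImageNet-1k.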
The pre-trained models on ImageNet-21k are used to fine-tune on the downstream tasks.
| Model | Pretrain | Resolution | Params (M) | Flops (G) | Download |
| --- | --- | --- | --- | --- | --- |
| HorNet-L\* | ImageNet-21k | 224x224 | 194.54 | 34.83 | model |
| HorNet-L-GF\* | ImageNet-21k | 224x224 | 196.29 | 34.58 | model |
| HorNet-L-GF384\* | ImageNet-21k | 384x384 | 201.23 | 101.63 | model |
\*Models with \* are converted from the official repo.
@article{rao2022hornet,
title={HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions},
author={Rao, Yongming and Zhao, Wenliang and Tang, Yansong and Zhou, Jie and Lim, Ser-Nam and Lu, Jiwen},
journal={arXiv preprint arXiv:2207.14284},
year={2022}
}