CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation
Existing works on open-vocabulary semantic segmentation have utilized large-scale vision-language models, such as CLIP, to leverage their exceptional open-vocabulary recognition capabilities. However, transferring these capabilities, learned from image-level supervision, to the pixel-level task of segmentation, while also handling arbitrary unseen categories at inference, makes this task challenging. To address these issues, we aim to attentively relate objects within an image to the given categories by leveraging relational information among class categories and visual semantics through aggregation, while also adapting the CLIP representations to the pixel-level task. However, we observe that directly optimizing the CLIP embeddings can harm their open-vocabulary capabilities. We therefore propose an alternative approach: optimizing the image-text similarity map, i.e. the cost map, using a novel cost aggregation-based method. Our framework, namely CAT-Seg, achieves state-of-the-art performance across all benchmarks. We provide extensive ablation studies to validate our choices. Project page.
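The cost map mentioned above is the per-pixel image-text similarity between dense CLIP image embeddings and class text embeddings. A minimal sketch of how such a cost volume is computed, with random features standing in for CLIP outputs (shapes, names, and the cosine-similarity formulation are illustrative assumptions, not the paper's code):

```python
import numpy as np

def cost_map(image_feats, text_feats):
    """Cosine-similarity cost volume between dense image features and class text embeddings.

    image_feats: (H, W, D) per-patch embeddings from the image encoder
    text_feats:  (C, D) one embedding per class name from the text encoder
    returns:     (H, W, C) similarity scores in [-1, 1]
    """
    img = image_feats / np.linalg.norm(image_feats, axis=-1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    # Dot product of unit vectors = cosine similarity, per pixel and class.
    return np.einsum("hwd,cd->hwc", img, txt)

rng = np.random.default_rng(0)
# E.g. a 24x24 patch grid, 512-dim embeddings, 171 COCO-Stuff classes.
cost = cost_map(rng.normal(size=(24, 24, 512)), rng.normal(size=(171, 512)))
print(cost.shape)  # (24, 24, 171)
```

CAT-Seg then aggregates this (H, W, C) volume rather than fine-tuning the raw CLIP embeddings directly, which is the design choice the abstract motivates.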
Training CAT-Seg requires a pretrained CLIP model. We have implemented ViT-B and ViT-L based CLIP models. To further use the ViT-bigG or ViT-H variants, you need additional dependencies: please install open_clip first. The pretrained CLIP model state dicts are loaded from Huggingface-OpenCLIP. If you run into a ConnectionError when downloading CLIP weights, you can manually download them from the given repo and set custom_clip_weights=/path/to/your/folder on the backbone in the config file. The related tools are listed in requirements/optional.txt:
pip install ftfy==6.0.1
pip install huggingface-hub
pip install regex
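For the manual-download fallback described above, the override lives on the backbone dict of the config. A hedged sketch in mmsegmentation's config style (the surrounding field names and the path are placeholders; only custom_clip_weights is the option named above):

```python
# Config-file sketch: point the CLIP backbone at locally downloaded weights
# instead of fetching from Huggingface-OpenCLIP. The path is a placeholder.
model = dict(
    backbone=dict(
        custom_clip_weights='/path/to/your/folder',  # folder with the downloaded state dicts
    ),
)
```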
In addition to the necessary data preparation, you also need class texts for the CLIP text encoder. Please download the class-text json files (cls_texts) first and arrange the folders as follows:
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│ ├── VOCdevkit
│ │ ├── VOC2012
│ │ ├── VOC2010
│ │ ├── VOCaug
│ ├── ade
│ ├── coco_stuff164k
│ ├── coco.json
│ ├── pc59.json
│ ├── pc459.json
│ ├── ade150.json
│ ├── ade847.json
│ ├── voc20b.json
│ ├── voc20.json
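Each class-text file supplies the names that the CLIP text encoder turns into per-class embeddings. Assuming a simple list-of-strings layout and the common "a photo of a {}" prompt template (both are assumptions for illustration; check the downloaded files for the exact schema), loading and prompting looks like:

```python
import json
from pathlib import Path

# Hypothetical minimal example: write a toy class-text file in the assumed
# list-of-strings layout, then load it the way the real files would be loaded.
path = Path("toy_cls.json")
path.write_text(json.dumps(["person", "bicycle", "car"]))

class_texts = json.loads(path.read_text())
# A typical CLIP prompt template; CAT-Seg's actual templates may differ.
prompts = [f"a photo of a {name}" for name in class_texts]
print(len(prompts))  # 3
```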
# setup PYTHONPATH
export PYTHONPATH=`pwd`:$PYTHONPATH
# run evaluation
mim test mmsegmentation ${CONFIG} --checkpoint ${CHECKPOINT} --launcher pytorch --gpus=8
| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | Device | mIoU | mIoU(ms+flip) | config | download |
| ------- | ------------- | --------- | ------- | -------- | -------------- | ------- | ---- | ------------- | ------ | -------- |
| CAT-Seg | R-101 & ViT-B | 384x384 | 80000 | - | - | RTX3090 | 27.2 | - | config | model |
Note:

- When testing in slide mode, the inference time is longer since the test size is much bigger than the training size of (384, 384).
- The backbone is ResNet rather than ResNetV1c.
- The result is reported on the val2017 set of COCO-Stuff164k for reference, which is the training dataset of CAT-Seg. The testing was done without TTA.

@inproceedings{cho2023catseg,
  title={CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation},
  author={Seokju Cho and Heeseong Shin and Sunghwan Hong and Seungjun An and Seungjun Lee and Anurag Arnab and Paul Hongsuck Seo and Seungryong Kim},
  booktitle={CVPR},
  year={2023}
}