Zero-Shot Object Detection (owlvit-large-patch14)
OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is a zero-shot, text-conditioned object detection model: an image can be queried with one or more free-text queries.
OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to obtain visual features and a causal language model to obtain text features. To adapt CLIP for detection, OWL-ViT removes the final token-pooling layer of the vision model and attaches a lightweight classification head and box head to each Transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification-layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch, then fine-tune it end-to-end together with the classification and box heads on standard detection datasets using a bipartite matching loss.
The model uses a CLIP backbone with a ViT-L/14 Transformer architecture as the image encoder and a masked self-attention Transformer as the text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and then fine-tuned together with the box and class prediction heads on an object detection objective.
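As a quick illustration of zero-shot, text-conditioned querying, the checkpoint can be exercised with the Hugging Face transformers library. This is a minimal sketch, assuming a recent transformers version (where OwlViTProcessor.post_process_object_detection is available) plus torch, Pillow, and requests; the COCO image URL, the text queries, and the score threshold are illustrative choices, not part of this model card:

import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-large-patch14")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-large-patch14")

# Illustrative test image and free-text queries
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]

inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale normalized box predictions to absolute (x0, y0, x1, y1) pixel coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.1
)

for box, score, label in zip(results[0]["boxes"], results[0]["scores"], results[0]["labels"]):
    box = [round(v, 2) for v in box.tolist()]
    print(f"Detected {texts[0][label]} with confidence {round(score.item(), 3)} at {box}")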
Model source: https://hf-mirror.com/google/owlvit-large-patch14
Model Application Development and Deployment
Model Serving
This model is packaged as a service with the ServiceBoot microservice engine. See the CubeAI Model Development Guide (《CubeAI模型开发指南》).
Running Directly from Source
$ sh pip-install-reqs.sh
$ serviceboot start
or
$ python3 run_model_server.py
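Once the server is running, the model can presumably be queried over HTTP. The Python sketch below is hypothetical: the port, endpoint path, and JSON field names are illustrative assumptions rather than the documented ServiceBoot interface, so consult the CubeAI Model Development Guide for the actual API:

import base64
import requests

# Hypothetical request: the port, path, and JSON schema are assumptions,
# not the documented ServiceBoot API.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://127.0.0.1:8080/predict",
    json={"image": image_b64, "queries": ["a photo of a cat", "a photo of a dog"]},
    timeout=60,
)
print(resp.json())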
Local Containerized Deployment
For one-click local containerized deployment and running, see the CubeAI Model Standalone Deployment Guide (《CubeAI模型独立部署指南》) or CubeAI Docker Builder.
Cloud-Native Network Deployment
This model service can be published to the CubeAI platform (CubeAI智立方平台) with one click for sharing and deployment. See the CubeAI Model Publishing Guide (《CubeAI模型发布指南》).