GroupViT
This model was released on 2022-02-22 and added to Hugging Face Transformers on 2022-06-28.
Overview
The GroupViT model was proposed in GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. Inspired by CLIP, GroupViT is a vision-language model that can perform zero-shot semantic segmentation over any given vocabulary of categories.
The abstract from the paper is the following:
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
This model was contributed by xvjiarui. The original code can be found here.
Usage tips
- You may specify `output_segmentation=True` in the forward of `GroupViTModel` to get the segmentation logits of the input texts, as sketched below.
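A minimal sketch of zero-shot segmentation inference under that flag, assuming the publicly released `nvidia/groupvit-gcc-yfcc` checkpoint and a sample COCO image (both are illustrative choices; any GroupViT checkpoint and image work the same way):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, GroupViTModel

# Assumed checkpoint; swap in any GroupViT checkpoint
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

# Sample image (assumption; use your own image)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Each text prompt acts as one vocabulary category to segment
inputs = processor(
    text=["a photo of a cat", "a photo of a remote control"],
    images=image,
    padding=True,
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs, output_segmentation=True)

# segmentation_logits has shape (batch_size, num_texts, height, width);
# taking argmax over the text dimension gives a per-pixel category map
segmentation_logits = outputs.segmentation_logits
predicted_map = segmentation_logits.argmax(dim=1)
```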
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT.
- The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot segmentation inference).
- One can also check out the Hugging Face Spaces demo to play with GroupViT.
GroupViTConfig
[[autodoc]] GroupViTConfig
GroupViTTextConfig
[[autodoc]] GroupViTTextConfig
GroupViTVisionConfig
[[autodoc]] GroupViTVisionConfig
GroupViTModel
[[autodoc]] GroupViTModel
    - forward
    - get_text_features
    - get_image_features
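As a brief sketch of the CLIP-style feature extractors listed above, the following pulls pooled text and image embeddings from the shared projection space (the checkpoint and image URL are the same assumptions as in the usage-tip example):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, GroupViTModel

processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text_inputs = processor(text=["a photo of a cat"], padding=True, return_tensors="pt")
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # Both return tensors of shape (batch_size, projection_dim),
    # so they can be compared directly (e.g. via cosine similarity)
    text_embeds = model.get_text_features(**text_inputs)
    image_embeds = model.get_image_features(**image_inputs)
```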
GroupViTTextModel
[[autodoc]] GroupViTTextModel
    - forward
GroupViTVisionModel
[[autodoc]] GroupViTVisionModel
    - forward