SigLIP2
This model was released on 2025-02-20 and added to Hugging Face Transformers on 2025-02-21.
Overview
SigLIP2 is a family of multilingual vision-language encoders that builds on the SigLIP training recipe. It adds decoder-based pretraining, self-distillation, and masked prediction to improve performance on dense prediction tasks (segmentation, depth estimation, etc.). The model is available in two variants:
- NaFlex supports different resolutions and maintains the native image aspect ratio
- FixRes supports fixed resolutions and is backwards compatible with SigLIP
You can find all the original SigLIP2 checkpoints under the SigLIP2 collection.
The examples below demonstrate zero-shot classification with Pipeline or the AutoModel class.
Pipeline:

```py
import torch
from transformers import pipeline

image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]

pipeline = pipeline(task="zero-shot-image-classification", model="google/siglip2-base-patch16-224", device=0, dtype=torch.bfloat16)
pipeline(image, candidate_labels=candidate_labels)
```

AutoModel (FixRes):

```py
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("google/siglip2-base-patch16-224", dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# follows the pipeline prompt template to get the same results
texts = [f'This is a photo of {label}.' for label in candidate_labels]

# IMPORTANT: pass `padding="max_length"` and `max_length=64` since the model was trained with these settings
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```

AutoModel (NaFlex):

```py
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("google/siglip2-base-patch16-naflex", dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
texts = [f'This is a photo of {label}.' for label in candidate_labels]

# the default `max_num_patches` is 256; pass a higher value, e.g. `max_num_patches=512`, to process the image at a higher resolution
inputs = processor(text=texts, images=image, padding="max_length", max_num_patches=256, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for the available quantization backends.
The example below uses bitsandbytes to quantize only the weights to int4.
```py
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModel, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModel.from_pretrained("google/siglip2-large-patch16-512", quantization_config=bnb_config, device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-large-patch16-512")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# follows the pipeline prompt template to get the same results
texts = [f'This is a photo of {label}.' for label in candidate_labels]

# IMPORTANT: pass `padding="max_length"` and `max_length=64` since the model was trained with these settings
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```

Notes
- Training is supported for DDP and FSDP on single-node multi-accelerator setups. However, it does not use torch.distributed utilities, which may limit the scalability of the batch size.
- When using the standalone GemmaTokenizerFast, make sure to pass padding="max_length" and max_length=64, as that's how the model was trained (the first sketch after this list shows this).
- The model was trained with lowercased text, so make sure your text labels are lowercased the same way.
- To get the same results as the Pipeline, pass the prompt template "This is a photo of {label}." to the processor.
- The NaFlex variant processes different types of images at the appropriate resolution (for example, using a larger resolution for document images), while also minimizing the impact of aspect ratio distortion for inference tasks like OCR. NaFlex resizes the input image so its height and width are multiples of the patch size, keeping the aspect ratio distortion as low as possible and producing a sequence length of at most the target length (max_num_patches). After resizing, the image is split into a sequence of patches and a mask with padding information is added. The second sketch after this list shows how max_num_patches affects the processor output.
- Toggle the attn_implementation parameter to either "sdpa" or "flash_attention_2" to use a more memory-efficient attention implementation.

  ```py
  # pip install -U flash-attn --no-build-isolation
  import torch
  from transformers import SiglipModel

  model = SiglipModel.from_pretrained(
      "google/siglip2-so400m-patch14-384",
      attn_implementation="flash_attention_2",
      dtype=torch.float16,
      device_map="auto",
  )
  ```
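The text-preprocessing notes above can be combined into a short sketch. This is a minimal illustration, assuming the FixRes checkpoint google/siglip2-base-patch16-224 ships its Gemma tokenizer files so they can be loaded with GemmaTokenizerFast directly:

```py
from transformers import GemmaTokenizerFast

tokenizer = GemmaTokenizerFast.from_pretrained("google/siglip2-base-patch16-224")

candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# lowercase the prompts and pad to the fixed length of 64 tokens the model was trained with
texts = [f"This is a photo of {label}.".lower() for label in candidate_labels]
inputs = tokenizer(texts, padding="max_length", max_length=64, return_tensors="pt")

print(inputs.input_ids.shape)  # expected: torch.Size([3, 64])
```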
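To see how max_num_patches shapes the NaFlex processor output, the sketch below (an illustration, not part of the original docs; it simply prints whatever tensors the processor returns) compares two patch budgets:

```py
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# a larger patch budget lets NaFlex keep more of the image's native resolution
for budget in (256, 1024):
    features = processor(images=image, max_num_patches=budget, return_tensors="pt")
    print(budget, {name: tuple(tensor.shape) for name, tensor in features.items()})
```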
Siglip2Config
[[autodoc]] Siglip2Config
Siglip2TextConfig
[[autodoc]] Siglip2TextConfig
Siglip2VisionConfig
[[autodoc]] Siglip2VisionConfig
Siglip2ImageProcessor
[[autodoc]] Siglip2ImageProcessor - preprocess
Siglip2ImageProcessorFast
[[autodoc]] Siglip2ImageProcessorFast - preprocess
Siglip2Processor
[[autodoc]] Siglip2Processor
Siglip2Model
[[autodoc]] Siglip2Model - forward - get_text_features - get_image_features
Siglip2TextModel
[[autodoc]] Siglip2TextModel - forward
Siglip2VisionModel
[[autodoc]] Siglip2VisionModel - forward
Siglip2ForImageClassification
[[autodoc]] Siglip2ForImageClassification - forward