# AltCLIP
This model was released on 2022-11-12 and added to Hugging Face Transformers on 2023-01-04.
AltCLIP replaces the CLIP text encoder with a multilingual XLM-R encoder and aligns image and text representations with teacher learning and contrastive learning.
You can find all the original AltCLIP checkpoints under the AltClip collection.
The example below demonstrates how to calculate similarity scores between an image and one or more captions with `AltCLIPModel`.
```py
import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP", dtype=torch.bfloat16)
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)  # softmax over the captions gives label probabilities

labels = ["a photo of a cat", "a photo of a dog"]
for label, prob in zip(labels, probs[0]):
    print(f"{label}: {prob.item():.4f}")
```
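Because the text encoder is based on XLM-R, the captions do not have to be in English. The following minimal sketch (an illustration, not from the original docs) assumes the `BAAI/AltCLIP` checkpoint, whose text encoder was aligned on English and Chinese text, and scores captions in both languages against the same image.

```py
# Minimal sketch: mixing English and Chinese captions with the same model and processor.
# Assumes the BAAI/AltCLIP checkpoint, which covers English and Chinese.
import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP", dtype=torch.bfloat16)
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# "一张猫的照片" is "a photo of a cat"
captions = ["a photo of a cat", "一张猫的照片", "a photo of a dog"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)
for caption, prob in zip(captions, probs[0]):
    print(f"{caption}: {prob.item():.4f}")
```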
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses torchao to quantize only the weights to int4.
```py
# !pip install torchao
import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor, TorchAoConfig

model = AltCLIPModel.from_pretrained(
    "BAAI/AltCLIP",
    quantization_config=TorchAoConfig("int4_weight_only", group_size=128),
    dtype=torch.bfloat16,
)
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)  # softmax over the captions gives label probabilities
labels = ["a photo of a cat", "a photo of a dog"]
for label, prob in zip(labels, probs[0]):
    print(f"{label}: {prob.item():.4f}")
```

- AltCLIP uses bidirectional attention instead of causal attention, and it uses the `[CLS]` token in XLM-R to represent a text embedding.
- Use `CLIPImageProcessor` to resize (or rescale) and normalize images for the model. `AltCLIPProcessor` combines `CLIPImageProcessor` and `XLMRobertaTokenizer` into a single instance to encode text and prepare images (see the sketch after this list).
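The sketch below (an illustration under the assumptions in the notes above, not from the original docs) passes text and images to `AltCLIPProcessor` separately and uses `get_text_features`/`get_image_features` to obtain embeddings in the shared space; the text embedding is derived from the `[CLS]` token mentioned above.

```py
# Minimal sketch: separate text and image embeddings compared with cosine similarity.
import torch
import requests
from PIL import Image
from transformers import AltCLIPModel, AltCLIPProcessor

model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# the tokenizer half of the processor handles text, the image-processor half handles images
text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)     # shape: [2, projection_dim]
    image_embeds = model.get_image_features(**image_inputs)  # shape: [1, projection_dim]

# cosine similarity between the image and each caption
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
print(image_embeds @ text_embeds.T)
```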
## AltCLIPConfig

[[autodoc]] AltCLIPConfig

## AltCLIPTextConfig

[[autodoc]] AltCLIPTextConfig

## AltCLIPVisionConfig

[[autodoc]] AltCLIPVisionConfig

## AltCLIPModel

[[autodoc]] AltCLIPModel

## AltCLIPTextModel

[[autodoc]] AltCLIPTextModel

## AltCLIPVisionModel

[[autodoc]] AltCLIPVisionModel

## AltCLIPProcessor

[[autodoc]] AltCLIPProcessor