# ColPali

This model was released on 2024-06-27 and added to Hugging Face Transformers on 2024-12-17.
ColPali is a model designed to retrieve documents by analyzing their visual features. Unlike traditional systems that rely heavily on text extraction and OCR, ColPali treats each page as an image. It uses PaliGemma-3B to capture not only the text but also the layout, tables, charts, and other visual elements, producing detailed multi-vector embeddings that are used for retrieval by computing pairwise late-interaction similarity scores. This offers a more comprehensive understanding of documents and enables more efficient and accurate retrieval.
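Concretely, "late interaction" means each query token embedding is compared against every page token/patch embedding, and the per-query-token maxima are summed (ColBERT-style MaxSim). Below is a minimal sketch of that scoring on random tensors; `late_interaction_score` is a hypothetical helper for illustration only, and in practice `ColPaliProcessor.score_retrieval` (shown below) computes this for you:

```python
import torch

def late_interaction_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """MaxSim: for each query token, take the best-matching page token, then sum."""
    # query_emb: (num_query_tokens, dim), doc_emb: (num_doc_tokens, dim)
    sim = query_emb @ doc_emb.T          # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()   # scalar relevance score

# Toy example with random multi-vector embeddings (ColPali projects to dim=128)
query_emb = torch.randn(16, 128)    # 16 query tokens
doc_emb = torch.randn(1024, 128)    # image patch tokens for one page
print(late_interaction_score(query_emb, doc_emb))
```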
This model was contributed by @tonywu71 (ILLUIN Technology) and @yonigozlan (HuggingFace).
You can find all the original ColPali checkpoints under Vidore’s Hf-native ColVision Models collection.
The example below shows end-to-end retrieval: embed the page screenshots and the queries, then score every query against every page.

```python
import requests
import torch
from PIL import Image

from transformers import ColPaliForRetrieval, ColPaliProcessor

# Load the model and the processor
model_name = "vidore/colpali-v1.3-hf"

model = ColPaliForRetrieval.from_pretrained(
    model_name,
    dtype=torch.bfloat16,
    device_map="auto",  # "cpu", "cuda", "xpu", or "mps" for Apple Silicon
)
processor = ColPaliProcessor.from_pretrained(model_name)

# The document page screenshots from your corpus
url1 = "https://upload.wikimedia.org/wikipedia/commons/8/89/US-original-Declaration-1776.jpg"
url2 = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Romeoandjuliet1597.jpg/500px-Romeoandjuliet1597.jpg"

images = [
    Image.open(requests.get(url1, stream=True).raw),
    Image.open(requests.get(url2, stream=True).raw),
]

# The queries you want to retrieve documents for
queries = [
    "When was the United States Declaration of Independence proclaimed?",
    "Who printed the edition of Romeo and Juliet?",
]

# Process the inputs
inputs_images = processor(images=images, return_tensors="pt").to(model.device)
inputs_text = processor(text=queries, return_tensors="pt").to(model.device)

# Forward pass: each output is a multi-vector embedding (one vector per token/patch)
with torch.no_grad():
    image_embeddings = model(**inputs_images).embeddings
    query_embeddings = model(**inputs_text).embeddings

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)

print("Retrieval scores (query x image):")
print(scores)
```

If you have issues loading the images with PIL, you can use the following code to create dummy images instead:
```python
images = [
    Image.new("RGB", (128, 128), color="white"),
    Image.new("RGB", (64, 32), color="black"),
]
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses bitsandbytes to quantize the weights to int4.
```python
import requests
import torch
from PIL import Image

from transformers import BitsAndBytesConfig, ColPaliForRetrieval, ColPaliProcessor

model_name = "vidore/colpali-v1.3-hf"

# 4-bit quantization configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = ColPaliForRetrieval.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

processor = ColPaliProcessor.from_pretrained(model_name)

url1 = "https://upload.wikimedia.org/wikipedia/commons/8/89/US-original-Declaration-1776.jpg"
url2 = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Romeoandjuliet1597.jpg/500px-Romeoandjuliet1597.jpg"

images = [
    Image.open(requests.get(url1, stream=True).raw),
    Image.open(requests.get(url2, stream=True).raw),
]

queries = [
    "When was the United States Declaration of Independence proclaimed?",
    "Who printed the edition of Romeo and Juliet?",
]

# Process the inputs
inputs_images = processor(images=images, return_tensors="pt").to(model.device)
inputs_text = processor(text=queries, return_tensors="pt").to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**inputs_images).embeddings
    query_embeddings = model(**inputs_text).embeddings

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)

print("Retrieval scores (query x image):")
print(scores)
```

`score_retrieval` returns a 2D tensor where the first dimension is the number of queries and the second dimension is the number of images. A higher score indicates more similarity between the query and image.
## ColPaliConfig

[[autodoc]] ColPaliConfig

## ColPaliProcessor

[[autodoc]] ColPaliProcessor

## ColPaliForRetrieval

[[autodoc]] ColPaliForRetrieval
    - forward