
BLIP

This model was released on 2022-01-28 and added to Hugging Face Transformers on 2022-12-21.

PyTorch

BLIP (Bootstrapped Language-Image Pretraining) is a vision-language pretraining (VLP) framework designed for both understanding and generation tasks, whereas most existing pretrained models excel at only one or the other. To make better use of noisy web data, BLIP bootstraps its training captions: a captioner generates synthetic captions and a filter removes the noisy ones, improving the quality of the training data.

You can find all the original BLIP checkpoints under the BLIP collection.

Click on the BLIP models in the right sidebar for more examples of how to apply BLIP to different vision language tasks.

The examples below demonstrate how to perform visual question answering with the Pipeline or the AutoModel class.

```py
import torch
from transformers import pipeline

# load a VQA pipeline in half precision on the first GPU
pipeline = pipeline(
    task="visual-question-answering",
    model="Salesforce/blip-vqa-base",
    dtype=torch.float16,
    device=0
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
pipeline(question="What is the weather in this image?", image=url)
```

```py
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = AutoModelForVisualQuestionAnswering.from_pretrained(
    "Salesforce/blip-vqa-base",
    dtype=torch.float16,
    device_map="auto"
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
question = "What is the weather in this image?"

# preprocess the image-question pair and match the model's device and dtype
inputs = processor(images=image, text=question, return_tensors="pt").to(model.device, torch.float16)

output = model.generate(**inputs)
processor.batch_decode(output, skip_special_tokens=True)[0]
```

Refer to this notebook to learn how to fine-tune BLIP for image captioning on a custom dataset.
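
Before fine-tuning, it can help to sanity-check caption generation with a pretrained checkpoint. The sketch below is a minimal example, assuming the Salesforce/blip-image-captioning-base checkpoint and reusing the image from the VQA examples above.

```py
import requests
from PIL import Image
from transformers import AutoProcessor, BlipForConditionalGeneration

# assumed captioning checkpoint for illustration
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# unconditional captioning: generate a caption from the image alone
inputs = processor(images=image, return_tensors="pt")
output = model.generate(**inputs)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

Passing text to the processor as well (for example, a prefix like "a photography of") turns this into conditional captioning, where generation continues from the given prompt.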

[[autodoc]] BlipConfig

[[autodoc]] BlipTextConfig

[[autodoc]] BlipVisionConfig

[[autodoc]] BlipProcessor

[[autodoc]] BlipImageProcessor
    - preprocess

[[autodoc]] BlipImageProcessorFast
    - preprocess

BlipModel is going to be deprecated in future versions. Use BlipForConditionalGeneration, BlipForImageTextRetrieval, or BlipForQuestionAnswering depending on your use case.
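
As a rough sketch of the retrieval path, the example below scores how well a candidate caption matches an image with BlipForImageTextRetrieval; the Salesforce/blip-itm-base-coco checkpoint and the caption text are assumptions for illustration.

```py
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, BlipForImageTextRetrieval

# assumed image-text matching checkpoint for illustration
processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="a cat sitting in the snow", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# itm_score holds [no-match, match] logits; softmax converts them to probabilities
probs = torch.softmax(outputs.itm_score, dim=1)
print(f"match probability: {probs[0, 1]:.3f}")
```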

[[autodoc]] BlipModel
    - forward
    - get_text_features
    - get_image_features

[[autodoc]] BlipTextModel
    - forward

[[autodoc]] BlipTextLMHeadModel
    - forward

[[autodoc]] BlipVisionModel
    - forward

[[autodoc]] BlipForConditionalGeneration
    - forward

[[autodoc]] BlipForImageTextRetrieval
    - forward

[[autodoc]] BlipForQuestionAnswering
    - forward