# Mistral 3
This model was released on 2025-01-30 and added to Hugging Face Transformers on 2025-03-18.
Mistral 3 is a latency-optimized model with far fewer layers than comparable models, which reduces the time per forward pass. It adds vision understanding and supports long context lengths of up to 128K tokens without compromising performance.
You can find the original Mistral 3 checkpoints under the Mistral AI organization.
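You can check these numbers yourself by loading the checkpoint's configuration. The snippet below is a minimal sketch: `text_config` and the attribute names follow the usual Mistral config layout and are assumptions about this checkpoint rather than guarantees.

```py
from transformers import AutoConfig

# Inspect the composite config; the text model's settings are assumed to
# live under `text_config`, following the usual Mistral naming.
config = AutoConfig.from_pretrained("mistralai/Mistral-Small-3.1-24B-Instruct-2503")
print(config.text_config.max_position_embeddings)  # context window (~128K tokens)
print(config.text_config.num_hidden_layers)        # the reduced layer count
```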
The examples below demonstrate how to generate text from an image with `Pipeline` and with the `AutoModel` class.
```py
import torch
from transformers import pipeline

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

pipeline = pipeline(
    task="image-text-to-text",
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    dtype=torch.bfloat16,
    device=0,
)
outputs = pipeline(text=messages, max_new_tokens=50, return_full_text=False)
outputs[0]["generated_text"]
'The image depicts a vibrant and lush garden scene featuring a variety of wildflowers and plants. The central focus is on a large, pinkish-purple flower, likely a Greater Celandine (Chelidonium majus), with a'
```

```py
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from accelerate import Accelerator

torch_device = Accelerator().device
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(
    model_checkpoint, device_map=torch_device, dtype=torch.bfloat16
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

generate_ids = model.generate(**inputs, max_new_tokens=20)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
decoded_output
'The image depicts a vibrant and lush garden scene featuring a variety of wildflowers and plants. The central focus is on a large, pinkish-purple flower, likely a Greater Celandine (Chelidonium majus), with a'
```

- Mistral 3 supports text-only generation.
```py
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from accelerate import Accelerator

torch_device = Accelerator().device
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(
    model_checkpoint, device_map=torch_device, dtype=torch.bfloat16
)

SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": user_prompt},
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=text, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
decoded_output = processor.batch_decode(generate_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]

print(decoded_output)
"1. À plus tard!
2. Salut, à plus!
3. À toute!
4. À la prochaine!
5. Je me casse, à plus!

 /\_/\
( o.o )
 > ^ <
```
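Generation in these examples returns only after `generate()` finishes. To print tokens as they are produced instead, you can attach a streamer. This is a minimal sketch using the standard `TextStreamer` utility; it assumes the processor exposes its underlying text tokenizer as `processor.tokenizer`.

```py
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated instead of waiting
# for the full sequence. skip_prompt=True suppresses the echoed prompt.
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=50, do_sample=False, streamer=streamer)
```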
- Mistral 3 accepts batched image and text inputs.
```py
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from accelerate import Accelerator

torch_device = Accelerator().device
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(
    model_checkpoint, device_map=torch_device, dtype=torch.bfloat16
)

messages = [
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
                {"type": "text", "text": "Write a haiku for this image"},
            ],
        },
    ],
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
                {"type": "text", "text": "Describe this image"},
            ],
        },
    ],
]

inputs = processor.apply_chat_template(
    messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

output = model.generate(**inputs, max_new_tokens=25)
decoded_outputs = processor.batch_decode(output, skip_special_tokens=True)
decoded_outputs
["Write a haiku for this imageCalm waters reflect\nWhispers of the forest's breath\nPeace on wooden path",
 "Describe this imageThe image depicts a vibrant street scene in what appears to be a Chinatown district. The focal point is a traditional Chinese"]
```

- Mistral 3 also supports batched image and text inputs with a different number of images for each text. The example below quantizes the model with bitsandbytes.
```py
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig
from accelerate import Accelerator

torch_device = Accelerator().device
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_checkpoint, device_map=torch_device, quantization_config=quantization_config
)

messages = [
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
                {"type": "text", "text": "Write a haiku for this image"},
            ],
        },
    ],
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                {"type": "image", "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg"},
                {"type": "text", "text": "These images depict two different landmarks. Can you identify them?"},
            ],
        },
    ],
]

inputs = processor.apply_chat_template(
    messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

output = model.generate(**inputs, max_new_tokens=25)
decoded_outputs = processor.batch_decode(output, skip_special_tokens=True)
decoded_outputs
["Write a haiku for this imageSure, here is a haiku inspired by the image:\n\nCalm lake's wooden path\nSilent forest stands guard\n",
 "These images depict two different landmarks. Can you identify them? Certainly! The images depict two iconic landmarks:\n\n1. The first image shows the Statue of Liberty in New York City."]
```
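For more control over the 4-bit scheme, `BitsAndBytesConfig` exposes further options. The values below are a minimal sketch with commonly used settings, not a tuned recommendation for Mistral 3.

```py
import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit setup: NF4 weights, bfloat16 compute, and nested
# quantization of the quantization constants to save a bit more memory.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```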
## Mistral3Config

[[autodoc]] Mistral3Config
## MistralCommonBackend

[[autodoc]] MistralCommonBackend
## Mistral3Model

[[autodoc]] Mistral3Model
## Mistral3ForConditionalGeneration

[[autodoc]] Mistral3ForConditionalGeneration
    - forward