Qwen2.5-Omni
This model was released on 2025-03-26 and added to Hugging Face Transformers on 2025-04-14.
Overview
The Qwen2.5-Omni model is a unified multimodal model proposed in the Qwen2.5-Omni Technical Report from the Qwen team, Alibaba Group.
The abstract from the technical report is the following:
We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model. Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organized the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose Thinker-Talker architecture. In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench. Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperform most existing streaming and non-streaming alternatives in robustness and naturalness.
- Use `Qwen2_5OmniForConditionalGeneration` to generate both audio and text output. To generate only one output type, use `Qwen2_5OmniThinkerForConditionalGeneration` for text-only and `Qwen2_5OmniTalkerForConditionalGeneration` for audio-only outputs.
- Audio generation with `Qwen2_5OmniForConditionalGeneration` currently supports only a batch size of 1.
- In case of out-of-memory errors when working with video input, decrease `processor.max_pixels` (see the sketch after this list). By default the maximum is set to a very large value, so high-resolution visuals are not resized unless their resolution exceeds `processor.max_pixels`.
- The processor has its own `apply_chat_template` method to convert chat messages to model inputs.
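For the out-of-memory tip above, here is a minimal sketch of lowering the pixel budget when loading the processor; the cap of `256 * 28 * 28` is an illustrative value, not a documented recommendation:

```python
from transformers import Qwen2_5OmniProcessor

# Illustrative cap: a lower `max_pixels` makes long or high-resolution videos
# produce fewer visual tokens and therefore use less memory.
processor = Qwen2_5OmniProcessor.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    max_pixels=256 * 28 * 28,  # assumed/illustrative value; tune for your hardware
)
```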
Usage example
Qwen2.5-Omni checkpoints can be found on the Hugging Face Hub.
Single Media inference
The model can accept text, images, audio, and videos as input. Here is example code for inference.
```python
import soundfile as sf

from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", dtype="auto", device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversations = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "/path/to/video.mp4"},
            {"type": "text", "text": "What can you hear and see in this video?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversations,
    load_audio_from_video=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    fps=1,
    # kwargs to be passed to `Qwen2_5OmniProcessor`
    padding=True,
    use_audio_in_video=True,
).to(model.device)

# Generation params for audio and text can differ and have to be prefixed with `thinker_` or `talker_`
text_ids, audio = model.generate(**inputs, use_audio_in_video=True, thinker_do_sample=False, talker_do_sample=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

sf.write(
    "output.wav",
    audio.reshape(-1).detach().cpu().numpy(),
    samplerate=24000,
)
print(text)
```
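The `thinker_`/`talker_` prefixes route generation arguments to the text and audio sub-models respectively. As a sketch, other standard `generate` arguments can presumably be forwarded the same way; `thinker_max_new_tokens=256` below is an illustrative, assumed example rather than a documented recommendation:

```python
# Sketch: prefixed kwargs are assumed to be forwarded to the respective sub-model's `generate`.
text_ids, audio = model.generate(
    **inputs,
    use_audio_in_video=True,
    thinker_max_new_tokens=256,  # illustrative: cap the length of the text (Thinker) output
    thinker_do_sample=False,     # deterministic text decoding, as in the example above
    talker_do_sample=True,       # sampled audio-token decoding, as in the example above
)
```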
Text-only generation

To generate only text output and save compute by not loading the audio generation model, use the `Qwen2_5OmniThinkerForConditionalGeneration` model.
```python
from transformers import Qwen2_5OmniThinkerForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto",
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversations = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "/path/to/video.mp4"},
            {"type": "text", "text": "What can you hear and see in this video?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversations,
    load_audio_from_video=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    fps=1,
    # kwargs to be passed to `Qwen2_5OmniProcessor`
    padding=True,
    use_audio_in_video=True,
).to(model.device)

# The Thinker returns only text token ids, so there is no audio to save here.
text_ids = model.generate(**inputs, use_audio_in_video=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(text)
```
Batch Mixed Media Inference

The model can batch inputs composed of mixed samples of various types, such as text, images, audio, and videos. Because audio generation only supports a batch size of 1, batched inference returns text only: either use the `Qwen2_5OmniThinkerForConditionalGeneration` model or disable audio generation with `return_audio=False`. Here is an example.
```python
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", dtype="auto", device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

# Conversation with video only
conversation1 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "path": "/path/to/video.mp4"},
        ],
    },
]

# Conversation with audio only
conversation2 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "/path/to/audio.wav"},
        ],
    },
]

# Conversation with pure text
conversation3 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "Who are you?"}],
    },
]

# Conversation with mixed media
conversation4 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "/path/to/image.jpg"},
            {"type": "video", "path": "/path/to/video.mp4"},
            {"type": "audio", "path": "/path/to/audio.wav"},
            {"type": "text", "text": "What elements can you see and hear in these media files?"},
        ],
    },
]

conversations = [conversation1, conversation2, conversation3, conversation4]

inputs = processor.apply_chat_template(
    conversations,
    load_audio_from_video=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    fps=1,
    # kwargs to be passed to `Qwen2_5OmniProcessor`
    padding=True,
    use_audio_in_video=True,
).to(model.thinker.device)

# Audio generation only supports a batch size of 1, so return text only for batched inputs.
text_ids = model.generate(**inputs, use_audio_in_video=True, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(text)
```

Usage Tips
Image Resolution trade-off
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs.
```python
from transformers import AutoProcessor

min_pixels = 128 * 28 * 28
max_pixels = 768 * 28 * 28
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B", min_pixels=min_pixels, max_pixels=max_pixels)
```

Prompt for audio output
If users need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."; otherwise, the audio output may not work as expected.
{ "role": "system", "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",}Use audio output or not
The model supports both text and audio outputs. If users do not need audio outputs, they can set `enable_audio_output=False` in the `from_pretrained` function. This option saves about 2GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto",
    enable_audio_output=False,
)
```

For a more flexible experience, we recommend setting `enable_audio_output=True` when initializing the model with `from_pretrained`, and then deciding whether to return audio when `generate` is called. When `return_audio` is set to `False`, the model returns only text outputs, which makes text responses faster.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto",
    enable_audio_output=True,
)
...
text_ids = model.generate(**inputs, return_audio=False)
```

Change voice type of output audio
Qwen2.5-Omni supports changing the voice of the output audio. Users can use the `spk` parameter of the `generate` function to specify the voice type. The "Qwen/Qwen2.5-Omni-7B" checkpoint supports two voice types: Chelsie (a female voice) and Ethan (a male voice). If `spk` is not specified, Chelsie is used by default.
text_ids, audio = model.generate(**inputs, spk="Chelsie")text_ids, audio = model.generate(**inputs, spk="Ethan")Flash-Attention 2 to speed up generation
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```

Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the flash attention repository. FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using FlashAttention-2, add attn_implementation="flash_attention_2" when loading the model:
```python
import torch

from transformers import Qwen2_5OmniForConditionalGeneration

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    device_map="auto",
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
```

Qwen2_5OmniConfig
[[autodoc]] Qwen2_5OmniConfig
Qwen2_5OmniProcessor
[[autodoc]] Qwen2_5OmniProcessor
Qwen2_5OmniForConditionalGeneration
[[autodoc]] Qwen2_5OmniForConditionalGeneration
    - forward
Qwen2_5OmniPreTrainedModelForConditionalGeneration
[[autodoc]] Qwen2_5OmniPreTrainedModelForConditionalGeneration
Qwen2_5OmniThinkerConfig
[[autodoc]] Qwen2_5OmniThinkerConfig
Qwen2_5OmniThinkerForConditionalGeneration
[[autodoc]] Qwen2_5OmniThinkerForConditionalGeneration
Qwen2_5OmniThinkerTextModel
[[autodoc]] Qwen2_5OmniThinkerTextModel
Qwen2_5OmniTalkerConfig
[[autodoc]] Qwen2_5OmniTalkerConfig
Qwen2_5OmniTalkerForConditionalGeneration
[[autodoc]] Qwen2_5OmniTalkerForConditionalGeneration
Qwen2_5OmniTalkerModel
[[autodoc]] Qwen2_5OmniTalkerModel
Qwen2_5OmniToken2WavConfig
[[autodoc]] Qwen2_5OmniToken2WavConfig
Qwen2_5OmniToken2WavModel
[[autodoc]] Qwen2_5OmniToken2WavModel
Qwen2_5OmniToken2WavDiTModel
[[autodoc]] Qwen2_5OmniToken2WavDiTModel
Qwen2_5OmniToken2WavBigVGANModel
[[autodoc]] Qwen2_5OmniToken2WavBigVGANModel