
Dia

This model was released on 2025-04-21 and added to Hugging Face Transformers on 2025-06-26.

PyTorch FlashAttention SDPA

Dia is an open-source text-to-speech (TTS) model (1.6B parameters) developed by Nari Labs. It generates highly realistic dialogue directly from a transcript, including non-verbal cues such as laughter and coughing. Emotion and tone can also be controlled through audio conditioning (voice cloning).
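
The transcripts in the examples below mark speaker turns with [S1] and [S2]. Non-verbal cues are written inline as parenthesized tags; the exact set of supported tags is defined by the upstream model card, so the tag below is an assumption for illustration only:

text = ["[S1] I can't believe it actually worked. (laughs) [S2] I told you it would."]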

Model Architecture: Dia is an encoder-decoder transformer based on the original transformer architecture, with modern additions such as rotary positional embeddings (RoPE). The text side (encoder) uses a byte tokenizer, while the audio side (decoder) relies on a pretrained codec model, DAC, which encodes speech into discrete codebook tokens and decodes them back into audio.
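
Because the encoder consumes raw bytes, the tokenized length of a prompt tracks its UTF-8 byte count rather than a word-piece vocabulary. A minimal sketch for inspecting this, assuming the processor returns the encoder ids under "input_ids" (the printed shape is illustrative):

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("nari-labs/Dia-1.6B-0626")
# byte-level tokenization: sequence length scales with the byte length of the text
enc = processor(text=["[S1] Hi!"], padding=True, return_tensors="pt")
print(enc["input_ids"].shape)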

Generation with Text

from transformers import AutoProcessor, DiaForConditionalGeneration
from accelerate import Accelerator
torch_device = Accelerator().device
model_checkpoint = "nari-labs/Dia-1.6B-0626"
text = ["[S1] Dia is an open weights text to dialogue model."]
processor = AutoProcessor.from_pretrained(model_checkpoint)
inputs = processor(text=text, padding=True, return_tensors="pt").to(torch_device)
model = DiaForConditionalGeneration.from_pretrained(model_checkpoint).to(torch_device)
outputs = model.generate(**inputs, max_new_tokens=256) # corresponds to roughly 2s of audio
# save audio to a file
outputs = processor.batch_decode(outputs)
processor.save_audio(outputs, "example.wav")
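
As the backend badges above suggest, the model can be loaded with an explicit attention implementation and a lower-precision dtype to speed up generation. A hedged sketch, assuming flash-attn is installed (otherwise "sdpa" works on recent PyTorch without extra dependencies):

import torch
from transformers import DiaForConditionalGeneration

model = DiaForConditionalGeneration.from_pretrained(
    "nari-labs/Dia-1.6B-0626",
    torch_dtype=torch.float16,                 # half precision is optional, not required
    attn_implementation="flash_attention_2",   # or "sdpa"
).to(torch_device)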

Generation with Text and Audio (Voice Cloning)

from datasets import load_dataset, Audio
from transformers import AutoProcessor, DiaForConditionalGeneration
from accelerate import Accelerator
torch_device = Accelerator().device
model_checkpoint = "nari-labs/Dia-1.6B-0626"
ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=44100))
audio = ds[-1]["audio"]["array"]
# text is a transcript of the audio + additional text you want as new audio
text = ["[S1] I know. It's going to save me a lot of money, I hope. [S2] I sure hope so for you."]
processor = AutoProcessor.from_pretrained(model_checkpoint)
inputs = processor(text=text, audio=audio, padding=True, return_tensors="pt").to(torch_device)
prompt_len = processor.get_audio_prompt_len(inputs["decoder_attention_mask"])
model = DiaForConditionalGeneration.from_pretrained(model_checkpoint).to(torch_device)
outputs = model.generate(**inputs, max_new_tokens=256) # corresponds to roughly 2s of audio
# retrieve actually generated audio and save to a file
outputs = processor.batch_decode(outputs, audio_prompt_len=prompt_len)
processor.save_audio(outputs, "example_with_audio.wav")
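
Instead of the dummy dataset, you can condition on your own reference clip. A minimal sketch using soundfile (the file name is hypothetical, and the clip is assumed to already be sampled at 44.1 kHz, the rate DAC expects):

import soundfile as sf

# load a mono reference clip; resample beforehand if it is not 44.1 kHz
audio, sampling_rate = sf.read("my_voice_prompt.wav")
assert sampling_rate == 44100
# pass `audio` to the processor together with its transcript, exactly as above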

Training

from datasets import load_dataset, Audio
from transformers import AutoProcessor, DiaForConditionalGeneration
from accelerate import Accelerator
torch_device = Accelerator().device
model_checkpoint = "nari-labs/Dia-1.6B-0626"
ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=44100))
audio = ds[-1]["audio"]["array"]
# text is a transcript of the audio
text = ["[S1] I know. It's going to save me a lot of money, I hope."]
processor = AutoProcessor.from_pretrained(model_checkpoint)
inputs = processor(
    text=text,
    audio=audio,
    generation=False,
    output_labels=True,
    padding=True,
    return_tensors="pt",
).to(torch_device)
model = DiaForConditionalGeneration.from_pretrained(model_checkpoint).to(torch_device)
out = model(**inputs)
out.loss.backward()
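
The snippet above computes a single loss and backward pass; wrapping it in a standard PyTorch optimizer step turns it into a minimal fine-tuning loop. A sketch under that assumption (hyperparameters are illustrative):

import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
out = model(**inputs)   # `inputs` contains labels because output_labels=True
out.loss.backward()
optimizer.step()
optimizer.zero_grad()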

This model was contributed by Jaeyong Sung, Arthur Zucker, and Anton Vlasjuk. The original code can be found here.

[[autodoc]] DiaConfig

[[autodoc]] DiaDecoderConfig

[[autodoc]] DiaEncoderConfig

[[autodoc]] DiaTokenizer - __call__

[[autodoc]] DiaFeatureExtractor - __call__

[[autodoc]] DiaProcessor - __call__ - batch_decode - decode

[[autodoc]] DiaModel - forward

[[autodoc]] DiaForConditionalGeneration - forward - generate