
HuBERT

This model was released on 2021-06-14 and added to Hugging Face Transformers on 2021-06-16.

PyTorch FlashAttention SDPA

HuBERT is a self-supervised speech representation model. It uses an offline clustering step to generate aligned target labels for a BERT-like prediction loss, and applies that loss only over the masked regions, forcing the model to learn a combined acoustic and language model over continuous inputs. This design addresses three challenges of self-supervised speech learning: multiple sound units in each utterance, no lexicon of input sound units during pre-training, and variable-length sound units with no explicit segmentation.
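
To make the objective concrete, the following is a minimal sketch of that masked, BERT-like prediction loss. The function name, tensor shapes, and the use of plain cross-entropy are illustrative assumptions, not the actual pretraining code.

```py
import torch
import torch.nn.functional as F

# Illustrative sketch of a BERT-like prediction loss restricted to masked frames.
def masked_prediction_loss(logits, cluster_targets, mask):
    # logits: (batch, time, num_clusters) frame-level cluster predictions
    # cluster_targets: (batch, time) pseudo-labels from an offline clustering step
    # mask: (batch, time) boolean tensor, True where the input frames were masked
    return F.cross_entropy(logits[mask], cluster_targets[mask])
```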

You can find all the original HuBERT checkpoints under the HuBERT collection.

Click on the HuBERT models in the right sidebar for more examples of how to apply HuBERT to different audio tasks.

The examples below demonstrate how to automatically transcribe speech into text with `Pipeline` or the `AutoModel` class.

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="automatic-speech-recognition",
    model="facebook/hubert-large-ls960-ft",
    dtype=torch.float16,
    device=0
)
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
```
The same transcription with the `AutoModel` class:

```py
import torch
from transformers import AutoProcessor, AutoModelForCTC
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation").sort("id")
sampling_rate = dataset.features["audio"].sampling_rate

# use the fine-tuned checkpoint, which ships a tokenizer and CTC head
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = AutoModelForCTC.from_pretrained("facebook/hubert-large-ls960-ft", dtype=torch.float16, device_map="auto", attn_implementation="sdpa")

inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
# match the model's device and half-precision dtype
inputs = inputs.to(model.device, dtype=torch.float16)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize the weights to 8-bits.

```py
import torch
from transformers import AutoProcessor, AutoModelForCTC, BitsAndBytesConfig
from datasets import load_dataset

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation").sort("id")
sampling_rate = dataset.features["audio"].sampling_rate

# use the fine-tuned checkpoint, which ships a tokenizer and CTC head
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = AutoModelForCTC.from_pretrained("facebook/hubert-large-ls960-ft", quantization_config=bnb_config, dtype=torch.float16, device_map="auto", attn_implementation="sdpa")

inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
# match the model's device and half-precision dtype
inputs = inputs.to(model.device, dtype=torch.float16)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```
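
To verify the savings, check the loaded model's memory footprint with the generic `get_memory_footprint` helper available on Transformers models:

```py
# rough size of the quantized model in GB
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```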
- HuBERT models expect raw audio input as a 1D float array sampled at 16kHz. If your audio uses a different sampling rate, resample it first, as in the sketch below.
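
The datasets library can handle the resampling on the fly by casting the audio column; a minimal sketch:

```py
from datasets import load_dataset, Audio

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
# decode and resample the audio column to the 16kHz rate HuBERT expects
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```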

[[autodoc]] HubertConfig - all

[[autodoc]] HubertModel - forward

[[autodoc]] HubertForCTC - forward

[[autodoc]] HubertForSequenceClassification - forward