CLAP

This model was released on 2022-11-12 and added to Hugging Face Transformers on 2023-02-16.

CLAP (Contrastive Language-Audio Pretraining) is a multimodal model that combines audio data with natural language descriptions through contrastive learning.

It incorporates feature fusion to handle variable-length audio inputs and keyword-to-caption augmentation to improve performance. CLAP doesn’t require task-specific training data and learns meaningful audio representations directly from natural language supervision.
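Because the audio and text encoders share an embedding space, CLAP can score how well a set of candidate captions matches a clip without any task-specific fine-tuning. The snippet below is a minimal sketch of zero-shot audio classification with the laion/clap-htsat-unfused checkpoint; the one-second sine-wave input and the candidate labels are placeholders for illustration.

import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# Placeholder audio: one second of a 440 Hz tone at CLAP's 48 kHz sampling rate.
sampling_rate = 48_000
waveform = np.sin(2 * np.pi * 440 * np.arange(sampling_rate) / sampling_rate).astype(np.float32)

candidate_labels = ["the sound of a cat", "the sound of a dog", "music playing"]
inputs = processor(
    text=candidate_labels,
    audios=waveform,
    sampling_rate=sampling_rate,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_audio has shape (num_audio, num_text); softmax gives label probabilities.
probs = outputs.logits_per_audio.softmax(dim=-1)
for label, prob in zip(candidate_labels, probs[0]):
    print(f"{label}: {prob:.3f}")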

You can find all the original CLAP checkpoints under the CLAP collection.

Click on the CLAP models in the right sidebar for more examples of how to apply CLAP to different audio retrieval and classification tasks.

The example below demonstrates how to extract text embeddings with the AutoModel class.

import torch
from transformers import AutoTokenizer, AutoModel

model = AutoModel.from_pretrained("laion/clap-htsat-unfused", dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused")

# Tokenize the candidate captions and move them to the model's device.
texts = ["the sound of a cat", "the sound of a dog", "music playing"]
inputs = tokenizer(texts, padding=True, return_tensors="pt").to(model.device)

# Project the text through CLAP's text encoder into the shared embedding space.
with torch.no_grad():
    text_features = model.get_text_features(**inputs)

print(f"Text embeddings shape: {text_features.shape}")
print(f"Text embeddings: {text_features}")
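Audio embeddings can be extracted the same way through get_audio_features(), with ClapProcessor preparing the log-mel input features. The snippet below is a minimal sketch that uses a random one-second waveform as a stand-in for a real recording.

import numpy as np
import torch
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("laion/clap-htsat-unfused")
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")

# Stand-in for a real recording: one second of random noise at 48 kHz.
waveform = np.random.randn(48_000).astype(np.float32)
inputs = processor(audios=waveform, sampling_rate=48_000, return_tensors="pt").to(model.device)

with torch.no_grad():
    audio_features = model.get_audio_features(**inputs)

print(f"Audio embeddings shape: {audio_features.shape}")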

[[autodoc]] ClapConfig

[[autodoc]] ClapTextConfig

[[autodoc]] ClapAudioConfig

[[autodoc]] ClapFeatureExtractor

[[autodoc]] ClapProcessor

[[autodoc]] ClapModel
    - forward
    - get_text_features
    - get_audio_features

[[autodoc]] ClapTextModel
    - forward

[[autodoc]] ClapTextModelWithProjection
    - forward

[[autodoc]] ClapAudioModel
    - forward

[[autodoc]] ClapAudioModelWithProjection
    - forward