
XLM-RoBERTa

This model was released on 2019-11-05 and added to Hugging Face Transformers on 2020-11-16.

PyTorch SDPA

XLM-RoBERTa is a large multilingual masked language model trained on 2.5TB of filtered CommonCrawl data across 100 languages. It shows that scaling the model provides strong performance gains on high-resource and low-resource languages. The model uses the RoBERTa pretraining objectives on the XLM model.

You can find all the original XLM-RoBERTa checkpoints under the Facebook AI community organization.

The example below demonstrates how to predict the <mask> token with Pipeline, AutoModel, and from the command line.

import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/xlm-roberta-base",
    dtype=torch.float16,
    device=0
)
# Example in French
pipeline("Bonjour, je suis un modèle <mask>.")
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/xlm-roberta-base"
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

# Prepare input
inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print(f"The predicted token is: {predicted_token}")
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model FacebookAI/xlm-roberta-base --device 0

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for the other available quantization backends.

The example below uses bitsandbytes to quantize the weights to 4-bit.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",  # or "fp4" for float 4-bit quantization
    bnb_4bit_use_double_quant=True,  # use double quantization for better performance
)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",
    quantization_config=quantization_config
)

inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print(f"The predicted token is: {predicted_token}")
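To check how much memory the 4-bit weights actually use, you can query the model's footprint with get_memory_footprint(), a standard method on Transformers models. A quick sketch continuing from the example above:

# Approximate memory used by the model parameters and buffers, in GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")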
  • Unlike some XLM models, XLM-RoBERTa doesn’t require lang tensors to understand what language is being used. It determines the language from the input IDs alone; the tokenizer sketch below shows that no language field is passed to the model.
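For example, the encoding produced by the tokenizer contains no language identifier; the fields are the same for every language. A minimal sketch:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")

# Same fields (input_ids, attention_mask) regardless of the input language
print(tokenizer("Hello, world!").keys())
print(tokenizer("Bonjour le monde !").keys())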

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

  • Multiple choice

  • 🚀 Deploy

This implementation is the same as RoBERTa. Refer to the RoBERTa documentation for usage examples as well as information about the inputs and outputs.
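Because the classes below mirror their RoBERTa counterparts, they are used the same way. A minimal sketch for sequence classification (num_labels=2 is an illustrative assumption; the classification head is randomly initialized and needs fine-tuning):

from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

# Load the pretrained encoder and add a randomly initialized classification head
model = XLMRobertaForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    num_labels=2,
)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")

inputs = tokenizer("Ce film est excellent.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])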

[[autodoc]] XLMRobertaConfig

[[autodoc]] XLMRobertaTokenizer - get_special_tokens_mask - save_vocabulary

[[autodoc]] XLMRobertaTokenizerFast

[[autodoc]] XLMRobertaModel - forward

[[autodoc]] XLMRobertaForCausalLM - forward

[[autodoc]] XLMRobertaForMaskedLM - forward

[[autodoc]] XLMRobertaForSequenceClassification - forward

[[autodoc]] XLMRobertaForMultipleChoice - forward

[[autodoc]] XLMRobertaForTokenClassification - forward

[[autodoc]] XLMRobertaForQuestionAnswering - forward