XLM-RoBERTa
This model was released on 2019-11-05 and added to Hugging Face Transformers on 2020-11-16.
XLM-RoBERTa is a large multilingual masked language model trained on 2.5TB of filtered CommonCrawl data across 100 languages. It shows that scaling the model provides strong performance gains on high-resource and low-resource languages. The model uses the RoBERTa pretraining objectives on the XLM model.
You can find all the original XLM-RoBERTa checkpoints under the Facebook AI community organization.
The example below demonstrates how to predict the <mask> token with Pipeline, AutoModel, and from the command line.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/xlm-roberta-base",
    dtype=torch.float16,
    device=0
)
# Example in French
pipeline("Bonjour, je suis un modèle <mask>.")
```

```py
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/xlm-roberta-base"
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

# Prepare input
inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```

```bash
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model FacebookAI/xlm-roberta-base --device 0
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the quantization guide overview for more available quantization backends.
The example below uses bitsandbytes to quantize the weights to 4 bits.
```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",  # or "fp4" for float 4-bit quantization
    bnb_4bit_use_double_quant=True,  # use double quantization for better performance
)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
    quantization_config=quantization_config
)

inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
print(f"The predicted token is: {tokenizer.decode(predicted_token_id)}")
```

- Unlike some XLM models, XLM-RoBERTa doesn't require `lang` tensors to understand which language is being used. It automatically determines the language from the input IDs, so you can pass text in any supported language directly (see the sketch below).
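As a quick, hedged illustration of this point, the minimal sketch below (not part of the original documentation) passes sentences in three languages through the same FacebookAI/xlm-roberta-base checkpoint with no language flag or `lang` tensor.

```py
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Minimal sketch: one checkpoint handles several languages with no `lang` argument.
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("FacebookAI/xlm-roberta-base")

sentences = [
    "The capital of France is <mask>.",            # English
    "La capital de Francia es <mask>.",            # Spanish
    "Die Hauptstadt von Frankreich ist <mask>.",   # German
]

for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the <mask> position and take the highest-scoring token.
    mask_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
    predicted_id = logits[0, mask_index].argmax(dim=-1)
    print(sentence, "->", tokenizer.decode(predicted_id))
```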
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- A blog post on how to finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS
- `XLMRobertaForSequenceClassification` is supported by this example script and notebook.
- Text classification chapter of the 🤗 Hugging Face Task Guides.
- Text classification task guide
- `XLMRobertaForTokenClassification` is supported by this example script and notebook.
- Token classification chapter of the 🤗 Hugging Face Course.
- Token classification task guide
- `XLMRobertaForCausalLM` is supported by this example script and notebook.
- Causal language modeling chapter of the 🤗 Hugging Face Task Guides.
- Causal language modeling task guide
- `XLMRobertaForMaskedLM` is supported by this example script and notebook.
- Masked language modeling chapter of the 🤗 Hugging Face Course.
- Masked language modeling task guide
- `XLMRobertaForQuestionAnswering` is supported by this example script and notebook.
- Question answering chapter of the 🤗 Hugging Face Course.
- Question answering task guide
Multiple choice
- `XLMRobertaForMultipleChoice` is supported by this example script and notebook.
- Multiple choice task guide
🚀 Deploy
- A blog post on how to Deploy Serverless XLM RoBERTa on AWS Lambda.
This implementation is the same as RoBERTa. Refer to the RoBERTa documentation for usage examples and details on its inputs and outputs.
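As a quick illustration, here is a minimal sketch (not from the original documentation; it assumes the FacebookAI/xlm-roberta-base checkpoint used throughout this page) of encoding a sentence with the dedicated XLMRobertaModel class and reading the final hidden states.

```py
import torch
from transformers import XLMRobertaModel, XLMRobertaTokenizer

# Minimal sketch: encode a sentence and inspect the encoder output.
tokenizer = XLMRobertaTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = XLMRobertaModel.from_pretrained("FacebookAI/xlm-roberta-base")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```

The same pattern applies to the task-specific classes documented below, which differ only in the output head placed on top of the encoder.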
XLMRobertaConfig

[[autodoc]] XLMRobertaConfig

XLMRobertaTokenizer

[[autodoc]] XLMRobertaTokenizer
    - get_special_tokens_mask
    - save_vocabulary

XLMRobertaTokenizerFast

[[autodoc]] XLMRobertaTokenizerFast

XLMRobertaModel

[[autodoc]] XLMRobertaModel
    - forward

XLMRobertaForCausalLM

[[autodoc]] XLMRobertaForCausalLM
    - forward

XLMRobertaForMaskedLM

[[autodoc]] XLMRobertaForMaskedLM
    - forward

XLMRobertaForSequenceClassification

[[autodoc]] XLMRobertaForSequenceClassification
    - forward

XLMRobertaForMultipleChoice

[[autodoc]] XLMRobertaForMultipleChoice
    - forward

XLMRobertaForTokenClassification

[[autodoc]] XLMRobertaForTokenClassification
    - forward

XLMRobertaForQuestionAnswering

[[autodoc]] XLMRobertaForQuestionAnswering
    - forward