DeBERTa

This model was released on 2020-06-05 and added to Hugging Face Transformers on 2020-11-16.

PyTorch

DeBERTa improves the pretraining efficiency of BERT and RoBERTa with two key ideas: disentangled attention and an enhanced mask decoder. Instead of adding a word's content and position embeddings into a single vector like BERT, DeBERTa represents each word with two vectors, one encoding its content and one encoding its relative position, and computes attention from both. This gives the model a clearer sense of what is being said and where in the sentence it is happening.
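
For intuition, the attention score in the paper is the sum of a content-to-content term, a content-to-position term, and a position-to-content term. The sketch below is an illustrative single-head version of that scoring rule, not the actual Transformers implementation; the function name and tensors (`q_c`, `k_c`, `q_r`, `k_r`, `rel_idx`) are assumptions made for the example.

```py
# Illustrative sketch of DeBERTa's disentangled attention scores (single head).
# Not the Transformers implementation; names and shapes are assumptions for this sketch.
import math
import torch

def disentangled_scores(q_c, k_c, q_r, k_r, rel_idx):
    # q_c, k_c: [seq, dim]  content projections of the hidden states
    # q_r, k_r: [2k, dim]   projections of the shared relative-position embeddings
    # rel_idx:  [seq, seq]  long tensor of bucketed relative distances delta(i, j) in [0, 2k)
    c2c = q_c @ k_c.T                              # content-to-content
    c2p = torch.gather(q_c @ k_r.T, 1, rel_idx)    # content-to-position
    p2c = torch.gather(k_c @ q_r.T, 1, rel_idx).T  # position-to-content
    # The paper scales by sqrt(3d) because three terms are summed.
    return (c2c + c2p + p2c) / math.sqrt(3 * q_c.size(-1))
```

Because the position terms use relative distances, the same scoring rule applies no matter where a span sits in the sequence.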

The enhanced mask decoder improves on the plain output softmax layer used in BERT by injecting the absolute positions of the words right before the masked tokens are predicted, so the decoder sees both relative and absolute position information.

Even though it was pretrained on less data than RoBERTa, DeBERTa outperforms it on several benchmarks.

You can find all the original DeBERTa checkpoints under the Microsoft organization.

The examples below demonstrate how to classify a sentence pair with Pipeline, AutoModel, and from the command line.

```py
import torch
from transformers import pipeline

classifier = pipeline(
    task="text-classification",
    model="microsoft/deberta-base-mnli",
    device=0,
)
classifier({
    "text": "A soccer game with multiple people playing.",
    "text_pair": "Some people are playing a sport."
})
```
```py
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "microsoft/deberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, device_map="auto")

# Encode the premise and hypothesis as a single sentence pair
inputs = tokenizer(
    "A soccer game with multiple people playing.",
    "Some people are playing a sport.",
    return_tensors="pt"
).to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax().item()
labels = ["contradiction", "neutral", "entailment"]
print(f"The predicted relation is: {labels[predicted_class]}")
```
```bash
echo -e '{"text": "A soccer game with multiple people playing.", "text_pair": "Some people are playing a sport."}' | transformers run --task text-classification --model microsoft/deberta-base-mnli --device 0
```
  • DeBERTa uses relative position embeddings, so it does not require right-padding like BERT.
  • For best results, use DeBERTa on sentence-level or sentence-pair classification tasks like MNLI, RTE, or SST-2.
  • If you’re using DeBERTa for token-level tasks like masked language modeling, make sure to load a checkpoint specifically pretrained or fine-tuned for token-level tasks (see the sketch after this list).
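
As a sketch of that token-level path, the example below runs masked language modeling with `AutoModelForMaskedLM`. The checkpoint name is a placeholder, not a real model id; substitute one whose MLM head was actually pretrained or fine-tuned.

```py
# Minimal sketch of masked language modeling with DeBERTa.
# "deberta-checkpoint-with-mlm-head" is a placeholder, not a real model id;
# use a checkpoint whose MLM head was actually trained, as noted above.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = "deberta-checkpoint-with-mlm-head"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

text = f"Plants create energy through {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring token at the [MASK] position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```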

[[autodoc]] DebertaConfig

[[autodoc]] DebertaTokenizer
    - get_special_tokens_mask
    - save_vocabulary

[[autodoc]] DebertaTokenizerFast

[[autodoc]] DebertaModel
    - forward

[[autodoc]] DebertaPreTrainedModel

[[autodoc]] DebertaForMaskedLM
    - forward

[[autodoc]] DebertaForSequenceClassification
    - forward

[[autodoc]] DebertaForTokenClassification
    - forward

[[autodoc]] DebertaForQuestionAnswering
    - forward