
DeBERTa-v2

This model was released on 2020-06-05 and added to Hugging Face Transformers on 2021-02-19.

PyTorch

DeBERTa-v2 improves on the original DeBERTa architecture by using a SentencePiece-based tokenizer and a larger 128K vocabulary. It also adds a convolutional layer alongside the first transformer layer to better learn the local dependencies of input tokens. Finally, the position projection and content projection matrices are shared in the attention layer to reduce the number of parameters.
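
These architecture settings show up directly in a checkpoint's configuration. Below is a minimal sketch (not part of the original documentation) that inspects them; the conv_kernel_size and share_att_key attribute names come from the DeBERTa-v2 implementation and may not be set on every config, so they are read defensively with getattr.

from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/deberta-v2-xlarge-mnli")
print(config.vocab_size)                          # ~128K SentencePiece vocabulary
print(getattr(config, "conv_kernel_size", None))  # kernel size of the extra convolutional layer, if set
print(getattr(config, "share_att_key", None))     # whether position/content projections are shared, if set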

You can find all the original DeBERTa-v2 checkpoints under the Microsoft organization.
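
If you prefer to enumerate them programmatically, a minimal sketch with the huggingface_hub client (an assumption; it is not shown on this page) is:

from huggingface_hub import list_models

# List Microsoft-published checkpoints whose name matches "deberta-v2"
for checkpoint in list_models(author="microsoft", search="deberta-v2"):
    print(checkpoint.id)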

Click on the DeBERTa-v2 models in the right sidebar for more examples of how to apply DeBERTa-v2 to different language tasks.

The examples below demonstrate how to classify text with the Pipeline, the AutoModel class, or the transformers CLI.

# Pipeline
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-classification",
    model="microsoft/deberta-v2-xlarge-mnli",
    device=0,
    dtype=torch.float16
)
result = pipeline("DeBERTa-v2 is great at understanding context!")
print(result)

# AutoModel
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli",
    dtype=torch.float16,
    device_map="auto"
)
inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
# transformers CLI
echo -e "DeBERTa-v2 is great at understanding context!" | transformers run --task text-classification --model microsoft/deberta-v2-xlarge-mnli --device 0

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to 4-bits.

from transformers import AutoModelForSequenceClassification, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/deberta-v2-xlarge-mnli"
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    dtype="float16"
)
inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
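
To check that 4-bit loading actually reduces memory, one option is to print the loaded model's footprint; the sketch below assumes the model object from the snippet above is still in scope and uses the get_memory_footprint helper available on Transformers models.

# Rough size of the 4-bit quantized model in GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")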

[[autodoc]] DebertaV2Config

[[autodoc]] DebertaV2Tokenizer
    - get_special_tokens_mask
    - save_vocabulary

[[autodoc]] DebertaV2TokenizerFast

[[autodoc]] DebertaV2Model
    - forward

[[autodoc]] DebertaV2PreTrainedModel
    - forward

[[autodoc]] DebertaV2ForMaskedLM
    - forward

[[autodoc]] DebertaV2ForSequenceClassification
    - forward

[[autodoc]] DebertaV2ForTokenClassification
    - forward

[[autodoc]] DebertaV2ForQuestionAnswering
    - forward

[[autodoc]] DebertaV2ForMultipleChoice
    - forward