# ModernBERT Decoder
This model was released on 2024-12-18 and added to Hugging Face Transformers on 2025-07-15.
ModernBERT Decoder has the same architecture as ModernBERT but is trained from scratch with a causal language modeling objective, following the Ettin paper. Training both objectives on a single architecture makes it possible to compare encoders and decoders directly. This model is the decoder variant of ModernBERT, designed for autoregressive text generation.
ModernBERT Decoder uses sliding window attention and rotary positional embeddings for efficiency and to handle longer sequences.
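You can inspect how a given checkpoint lays out its attention through its configuration. The sketch below is illustrative and assumes the decoder config exposes the same attention-layout fields as ModernBERT (`local_attention`, `global_attn_every_n_layers`); verify the attribute names against your installed version of Transformers.

```python
from transformers import AutoConfig

# Illustrative sketch: attribute names assume the decoder config mirrors
# ModernBERT's attention-layout fields; check them in your installed version.
config = AutoConfig.from_pretrained("jhu-clsp/ettin-decoder-17m")
print(config.local_attention)             # sliding-window size used by local layers
print(config.global_attn_every_n_layers)  # every Nth layer attends globally
print(config.max_position_embeddings)     # maximum sequence length supported by RoPE
```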
You can find all the original ModernBERT Decoder checkpoints under the jhu-clsp collection.
Click on the ModernBERT Decoder models in the right sidebar for more examples of how to apply ModernBERT Decoder to different text generation tasks.
The examples below demonstrate how to use ModernBERT Decoder for text generation with Pipeline, AutoModel (with and without quantization), and from the command line.
**Pipeline**

```python
import torch
from transformers import pipeline

generator = pipeline(
    task="text-generation",
    model="jhu-clsp/ettin-decoder-17m",
    dtype=torch.float16,
    device=0,
)
generator("The future of artificial intelligence is", max_length=50, num_return_sequences=1)

# For sequence classification
classifier = pipeline(
    task="text-classification",
    model="jhu-clsp/ettin-decoder-17m",
    dtype=torch.float16,
    device=0,
)
classifier("This movie is really great!")
```
**AutoModel**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-17m")
model = AutoModelForCausalLM.from_pretrained(
    "jhu-clsp/ettin-decoder-17m",
    dtype=torch.float16,
    device_map="auto",
)

prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=50,
        num_return_sequences=1,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Generated text: {generated_text}")
```
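For interactive use you can stream tokens to stdout as they are generated instead of waiting for the full sequence. This optional sketch (not part of the original example) reuses `model`, `tokenizer`, and `inputs` from above with the standard `TextStreamer` helper:

```python
from transformers import TextStreamer

# Prints tokens as they are produced; skip_prompt hides the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, max_length=50, do_sample=True, temperature=0.7, streamer=streamer)
```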
```python
# For sequence classification
from transformers import AutoModelForSequenceClassification

classifier_model = AutoModelForSequenceClassification.from_pretrained(
    "jhu-clsp/ettin-decoder-17m",
    dtype=torch.float16,
    device_map="auto",
    num_labels=2,
)

text = "This movie is really great!"
inputs = tokenizer(text, return_tensors="pt").to(classifier_model.device)

with torch.no_grad():
    outputs = classifier_model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(predictions, dim=-1)

print(f"Predicted class: {predicted_class.item()}")
print(f"Prediction probabilities: {predictions}")
```
**Quantization**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
)

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-1b")
model = AutoModelForCausalLM.from_pretrained(
    "jhu-clsp/ettin-decoder-1b",
    dtype=torch.float16,
    device_map="auto",
    quantization_config=quantization_config,
)

prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=50,
        num_return_sequences=1,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Generated text: {generated_text}")
```

**Command line**

```bash
echo "The future of artificial intelligence is" | transformers run --task text-generation --model jhu-clsp/ettin-decoder-17m --device 0
```
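Because ModernBERT Decoder is a decoder-only model, batched generation generally needs left padding so that each prompt sits flush against the generated tokens. A minimal sketch, assuming the tokenizer defines a pad token (ModernBERT's does):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Left padding keeps the end of each prompt adjacent to the generated tokens.
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-17m", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("jhu-clsp/ettin-decoder-17m", device_map="auto")

prompts = ["The future of artificial intelligence is", "Open source models are"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=30)
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```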
## ModernBertDecoderConfig

[[autodoc]] ModernBertDecoderConfig
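As with other Transformers configurations, you can build a randomly initialized model directly from a config. A minimal sketch using default hyperparameters (untrained weights):

```python
from transformers import ModernBertDecoderConfig, ModernBertDecoderForCausalLM

# Default configuration -> randomly initialized (untrained) model.
config = ModernBertDecoderConfig()
model = ModernBertDecoderForCausalLM(config)
print(config.hidden_size, config.num_hidden_layers)
```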
## ModernBertDecoderModel

[[autodoc]] ModernBertDecoderModel
    - forward
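The bare `ModernBertDecoderModel` returns hidden states without a language modeling head, which is useful when you want the decoder's representations directly. A minimal sketch:

```python
import torch
from transformers import AutoTokenizer, ModernBertDecoderModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-17m")
model = ModernBertDecoderModel.from_pretrained("jhu-clsp/ettin-decoder-17m")

inputs = tokenizer("ModernBERT can also act as a decoder.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden state per input token: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```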
## ModernBertDecoderForCausalLM

[[autodoc]] ModernBertDecoderForCausalLM
    - forward
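Beyond generation, `ModernBertDecoderForCausalLM` returns the standard next-token cross-entropy loss when you pass `labels`, which is the starting point for fine-tuning. A minimal sketch:

```python
import torch
from transformers import AutoTokenizer, ModernBertDecoderForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-17m")
model = ModernBertDecoderForCausalLM.from_pretrained("jhu-clsp/ettin-decoder-17m")

inputs = tokenizer("The future of artificial intelligence is bright.", return_tensors="pt")
# Labels are the input ids themselves; the model shifts them internally.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```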
## ModernBertDecoderForSequenceClassification

[[autodoc]] ModernBertDecoderForSequenceClassification
    - forward