
# GPT

This model was released on 2018-06-11 and added to Hugging Face Transformers on 2023-06-20.


GPT (Generative Pre-trained Transformer) focuses on learning effective text representations and transferring them to downstream tasks. The model trains a Transformer decoder to predict the next word, and is then fine-tuned on labeled data.
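To make the pretraining objective concrete, here is a minimal sketch (not the original training setup): in Transformers, passing the input ids as `labels` makes the model shift them internally and return the next-word cross-entropy loss.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
model = AutoModelForCausalLM.from_pretrained("openai-community/openai-gpt")

# Next-word prediction: labels are the input ids; the model shifts them
# internally and computes the causal language modeling loss.
inputs = tokenizer("the quick brown fox jumps over the lazy dog", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```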

GPT can generate high-quality text, making it well-suited for a variety of natural language understanding tasks such as textual entailment, question answering, semantic similarity, and document classification.
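As a sketch of how task transfer looks in practice, the snippet below attaches a classification head for document classification. The `num_labels=2` setting is an illustrative assumption, and the head is freshly initialized, so its logits are meaningless until the model is fine-tuned.

```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
# num_labels=2 is an illustrative assumption; the head is randomly initialized
model = AutoModelForSequenceClassification.from_pretrained(
    "openai-community/openai-gpt", num_labels=2
)

# Batch size 1 because the checkpoint defines no pad token, which the
# classification head would need to locate the last token in padded batches.
inputs = tokenizer("A soccer game with multiple males playing.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```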

You can find all the original GPT checkpoints under the OpenAI community organization.

The examples below demonstrate how to generate text with `Pipeline`, with `AutoModel`, and from the command line.

```py
from transformers import pipeline

generator = pipeline(task="text-generation", model="openai-community/openai-gpt", device=0)
output = generator("The future of AI is", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
model = AutoModelForCausalLM.from_pretrained("openai-community/openai-gpt")

inputs = tokenizer("The future of AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

```bash
echo -e "The future of AI is" | transformers run --task text-generation --model openai-community/openai-gpt --device 0
```
## Notes

- Inputs should be padded on the right because GPT uses absolute position embeddings; see the sketch below.
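A minimal sketch of right padding for a batch; reusing the unk token as the pad token is an assumption here, since the checkpoint does not define a pad token of its own.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
tokenizer.pad_token = tokenizer.unk_token  # assumption: no pad token ships with the checkpoint
tokenizer.padding_side = "right"  # absolute position embeddings expect right padding

batch = tokenizer(["The future of AI is", "Hello"], padding=True, return_tensors="pt")
print(batch["input_ids"])
```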

## OpenAIGPTConfig

[[autodoc]] OpenAIGPTConfig

## OpenAIGPTModel

[[autodoc]] OpenAIGPTModel
    - forward

## OpenAIGPTLMHeadModel

[[autodoc]] OpenAIGPTLMHeadModel
    - forward

## OpenAIGPTDoubleHeadsModel

[[autodoc]] OpenAIGPTDoubleHeadsModel
    - forward

## OpenAIGPTForSequenceClassification

[[autodoc]] OpenAIGPTForSequenceClassification
    - forward

## OpenAIGPTTokenizer

[[autodoc]] OpenAIGPTTokenizer

## OpenAIGPTTokenizerFast

[[autodoc]] OpenAIGPTTokenizerFast