RAG

This model was released on 2020-05-22 and added to Hugging Face Transformers on 2020-11-16.

PyTorch FlashAttention

Retrieval-Augmented Generation (RAG) combines a pretrained language model (parametric memory) with access to an external data source (non-parametric memory) by means of a pretrained neural retriever. RAG fetches relevant passages and conditions its generation on them during inference. This often makes the answers more factual and lets you update knowledge by changing the index instead of retraining the whole model.
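Concretely, RAG treats the retrieved passage as a latent variable and marginalizes it out over the top-k retrieved documents. As a sketch in the notation of the RAG paper, the sequence-level variant scores an answer y for a query x as:

$$
p_{\text{RAG-Seq}}(y \mid x) \approx \sum_{z \in \text{top-}k(p_\eta(\cdot \mid x))} p_\eta(z \mid x)\, p_\theta(y \mid x, z)
$$

where \( p_\eta(z \mid x) \) is the DPR retriever's score for passage \( z \) and \( p_\theta(y \mid x, z) \) is the generator's likelihood of the answer given the query concatenated with the passage; the token-level variant instead marginalizes per generated token.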

You can find all the original RAG checkpoints under the AI at Meta organization.

Click on the RAG models in the right sidebar for more examples of how to apply RAG to different language tasks.

The example below demonstrates how to generate text with RagSequenceForGeneration.

import torch
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", dataset="wiki_dpr", index_name="compressed"
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq",
    retriever=retriever,
    dtype="auto",
    attn_implementation="flash_attention_2",
)

input_dict = tokenizer("How many people live in Paris?", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
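Both RAG variants share the same retrieval step, and you can call the retriever directly to see which passages the generator was conditioned on. A minimal sketch, reusing the tokenizer, retriever, and model objects from the example above (and assuming they live on CPU):

# Sketch: fetch the passages for a query and inspect their ids.
question_inputs = tokenizer.question_encoder("How many people live in Paris?", return_tensors="pt")
with torch.no_grad():
    question_hidden = model.rag.question_encoder(question_inputs["input_ids"])[0]
docs = retriever(
    question_inputs["input_ids"].cpu().numpy(),
    question_hidden.cpu().float().numpy(),
    return_tensors="pt",
)
print(docs["doc_ids"])  # positions of the retrieved passages in the wiki_dpr index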

Quantization reduces memory by storing weights in lower precision. See the Quantization overview for supported backends. The example below uses bitsandbytes to quantize the weights to 4-bits.

import torch
from transformers import BitsAndBytesConfig, RagTokenizer, RagRetriever, RagSequenceForGeneration
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", dataset="wiki_dpr", index_name="compressed"
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq",
    retriever=retriever,
    quantization_config=bnb,  # quantizes the linear layers of the question encoder and generator
    device_map="auto",
)

input_dict = tokenizer("How many people live in Paris?", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"].to(model.device))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
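As a quick sanity check on the savings, get_memory_footprint reports the bytes consumed by the model's parameters and buffers (reusing the quantized model above):

# Rough check that 4-bit loading shrank the model.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")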

[[autodoc]] RagConfig

[[autodoc]] RagTokenizer

[[autodoc]] models.rag.modeling_rag.RetrievAugLMMarginOutput

[[autodoc]] models.rag.modeling_rag.RetrievAugLMOutput

[[autodoc]] RagRetriever

[[autodoc]] RagModel
    - forward

[[autodoc]] RagSequenceForGeneration
    - forward
    - generate

[[autodoc]] RagTokenForGeneration
    - forward
    - generate