Gemma2
This model was released on 2024-07-31 and added to Hugging Face Transformers on 2024-06-27.
Gemma2
Gemma 2 is a family of language models with pretrained and instruction-tuned variants, available in 2B, 9B, and 27B parameter sizes. The architecture is similar to the previous Gemma, except that it features interleaved local attention (4096 tokens) and global attention (8192 tokens) and uses grouped-query attention (GQA) for faster inference.
The 2B and 9B models are trained with knowledge distillation, and the instruction-tuned variant was post-trained with supervised fine-tuning and reinforcement learning.
You can find all the original Gemma 2 checkpoints under the Gemma 2 collection.
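These architecture choices are visible directly in the model configuration. The sketch below is only illustrative and assumes the google/gemma-2-9b checkpoint can be downloaded; see Gemma2Config below for the full set of fields.

from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/gemma-2-9b")
print(config.sliding_window)        # local attention window, 4096 tokens
print(config.num_attention_heads)   # number of query heads
print(config.num_key_value_heads)   # fewer key/value heads than query heads (GQA)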
The examples below demonstrate how to chat with the model with Pipeline or the AutoModel class, and from the command line.
Pipeline

import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="google/gemma-2-9b",
    dtype=torch.bfloat16,
    device_map="auto",
)
pipe("Explain quantum computing simply.", max_new_tokens=50)

AutoModel

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",
)

input_text = "Explain quantum computing simply."
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

transformers CLI

echo -e "Explain quantum computing simply." | transformers run --task text-generation --model google/gemma-2-2b --device 0
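The checkpoints above are the pretrained (base) models, so they simply continue the prompt. For the instruction-tuned variants, build the prompt with the tokenizer's chat template so it matches Gemma's expected conversation format. A minimal sketch, assuming the google/gemma-2-9b-it checkpoint:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it",
    dtype=torch.bfloat16,
    device_map="auto",
)

# Gemma's chat template wraps the conversation in the expected turn markers
messages = [{"role": "user", "content": "Explain quantum computing simply."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))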
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to int4.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",
    quantization_config=quantization_config,
)

input_text = "Explain quantum computing simply."
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Use the AttentionMaskVisualizer to better understand what tokens the model can and cannot attend to.
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("google/gemma-2-2b")
visualizer("You are an assistant. Make sure you print me")
Gemma2Config
[[autodoc]] Gemma2Config
Gemma2Model
[[autodoc]] Gemma2Model
    - forward
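The bare Gemma2Model outputs hidden states instead of language-modeling logits, which is useful for feature extraction. A minimal sketch, assuming the google/gemma-2-2b checkpoint:

import torch
from transformers import AutoTokenizer, Gemma2Model

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = Gemma2Model.from_pretrained("google/gemma-2-2b", dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Gemma 2 interleaves local and global attention.", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

# (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)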
Gemma2ForCausalLM
[[autodoc]] Gemma2ForCausalLM
    - forward
Gemma2ForSequenceClassification
[[autodoc]] Gemma2ForSequenceClassification
    - forward
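The released Gemma 2 checkpoints do not include a sequence classification head, so the head below is randomly initialized and intended as a starting point for fine-tuning. A minimal sketch, assuming the google/gemma-2-2b checkpoint and two hypothetical labels:

import torch
from transformers import AutoTokenizer, Gemma2ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
# num_labels=2 is an assumption for this example; the classification head is newly initialized
model = Gemma2ForSequenceClassification.from_pretrained(
    "google/gemma-2-2b",
    num_labels=2,
    dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("This movie was great!", return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits

# One score per label; fine-tune before relying on these values
print(logits)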
Gemma2ForTokenClassification
[[autodoc]] Gemma2ForTokenClassification
    - forward