Qwen2
This model was released on 2024-07-15 and added to Hugging Face Transformers on 2024-01-17.
Qwen2 is a family of large language models (pretrained, instruction-tuned, and mixture-of-experts) available in sizes from 0.5B to 72B parameters. The models are built on the Transformer architecture with enhancements such as grouped query attention (GQA), rotary positional embeddings (RoPE), a mix of sliding window and full attention, and dual chunk attention with YARN for long-context handling. Qwen2 models support multiple languages and context lengths of up to 131,072 tokens.
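The architectural knobs described above are exposed through Qwen2Config. The following is a minimal sketch (the hyperparameter values are illustrative, not those of any released checkpoint) showing how the GQA head counts, sliding-window settings, RoPE base, and maximum context length appear on the config:

```python
from transformers import Qwen2Config, Qwen2Model

# Illustrative values only -- released checkpoints ship their own config.json.
config = Qwen2Config(
    hidden_size=1024,
    num_hidden_layers=4,
    num_attention_heads=16,
    num_key_value_heads=4,           # fewer KV heads than attention heads -> grouped query attention
    max_position_embeddings=131072,  # maximum supported context length
    rope_theta=1000000.0,            # RoPE base frequency
    use_sliding_window=True,         # enables the mix of sliding window and full attention
    sliding_window=4096,             # window size for the sliding-window layers
)

# Building a model from the config creates a small, randomly initialized network.
model = Qwen2Model(config)
print(model.config.num_key_value_heads, model.config.sliding_window)
```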
You can find all the official Qwen2 checkpoints under the Qwen2 collection.
The examples below demonstrate how to generate text with the instruction-tuned models using Pipeline, AutoModel, or the transformers CLI.
Pipeline

```python
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="Qwen/Qwen2-1.5B-Instruct",
    dtype=torch.bfloat16,
    device_map=0
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the Qwen2 model family."},
]
outputs = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"][-1]["content"])
```

AutoModel

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    cache_implementation="static",
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

transformers CLI

```bash
# pip install -U flash-attn --no-build-isolation
transformers chat Qwen/Qwen2-7B-Instruct --dtype auto --attn_implementation flash_attention_2 --device 0
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses bitsandbytes to quantize the weights to 4-bits.
```python
# pip install -U flash-attn --no-build-isolation
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B",
    dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
    attn_implementation="flash_attention_2"
)

inputs = tokenizer("The Qwen2 model family is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

- Ensure your Transformers library version is up-to-date. Qwen2 requires Transformers>=4.37.0 for full support.
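A quick way to check this locally is a small version guard; this sketch uses the packaging library, which is already a dependency of Transformers:

```python
import transformers
from packaging import version

# Qwen2 requires Transformers v4.37.0 or newer for full support.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2; "
        "upgrade with `pip install -U transformers`."
    )
```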
Qwen2Config
[[autodoc]] Qwen2Config
Qwen2Tokenizer
[[autodoc]] Qwen2Tokenizer
    - save_vocabulary
Qwen2TokenizerFast
[[autodoc]] Qwen2TokenizerFast
Qwen2RMSNorm
[[autodoc]] Qwen2RMSNorm
    - forward
Qwen2Model
[[autodoc]] Qwen2Model
    - forward
Qwen2ForCausalLM
[[autodoc]] Qwen2ForCausalLM
    - forward
Qwen2ForSequenceClassification
[[autodoc]] Qwen2ForSequenceClassification
    - forward
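The released Qwen2 checkpoints do not ship a classification head, so loading one with Qwen2ForSequenceClassification initializes the head randomly and it is meant to be fine-tuned. A minimal sketch of the API (checkpoint and label count are illustrative):

```python
import torch
from transformers import AutoTokenizer, Qwen2ForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = Qwen2ForSequenceClassification.from_pretrained(
    "Qwen/Qwen2-0.5B",
    num_labels=2,  # e.g. positive / negative; the classification head is randomly initialized
)
# Sequence classification pools the last non-padding token, so the model needs a pad token id.
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer(["Qwen2 is great!", "This is terrible."], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # untrained head: outputs are not meaningful until fine-tuned
```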
Qwen2ForTokenClassification
[[autodoc]] Qwen2ForTokenClassification
    - forward
Qwen2ForQuestionAnswering
[[autodoc]] Qwen2ForQuestionAnswering
    - forward