ByT5
This model was released on 2021-05-28 and added to Hugging Face Transformers on 2021-06-01.
ByT5 is a tokenizer-free version of the T5 model designed to work directly on raw UTF-8 bytes. This means it can process any language, is more robust to noise like typos, and is simpler to use because it doesn't require a preprocessing pipeline.
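To make the byte-level interface concrete, here is a minimal sketch (the input string is illustrative) of the ids the ByT5 tokenizer produces; each UTF-8 byte maps to its byte value plus an offset of 3 for the reserved special tokens.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

# Each UTF-8 byte becomes one id, shifted by 3 because ids 0-2 are
# reserved for the pad, eos, and unk tokens.
ids = tokenizer("hello").input_ids
print(ids)  # [107, 104, 111, 111, 114, 1] -> bytes of "hello" + 3, then eos (id 1)
```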
You can find all the original ByT5 checkpoints under the Google organization.
The example below demonstrates how to generate text with Pipeline, with AutoModel, and from the command line.
```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text2text-generation",
    model="google/byt5-small",
    dtype=torch.float16,
    device=0
)
pipeline("translate English to French: The weather is nice today")
```

```py
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/byt5-small",
    dtype=torch.float16,
    device_map="auto"
)

input_ids = tokenizer("summarize: Photosynthesis is the process by which plants, algae, and some bacteria convert light energy into chemical energy.", return_tensors="pt").to(model.device)

output = model.generate(**input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

```bash
echo -e "translate English to French: Life is beautiful." | transformers run --task text2text-generation --model google/byt5-small --device 0
```

Quantization
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses torchao to only quantize the weights to int4.
```py
# pip install torchao
import torch
from transformers import TorchAoConfig, AutoModelForSeq2SeqLM, AutoTokenizer

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/byt5-xl",
    dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("google/byt5-xl")
input_ids = tokenizer("translate English to French: The weather is nice today.", return_tensors="pt").to(model.device)

output = model.generate(**input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
- It is recommended to use the tokenizer for batched inference and training.
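  A minimal sketch of batched tokenization (the example sentences and the `padding="longest"` choice are illustrative):

  ```py
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

  # Pad the shorter byte sequence so the batch shares one tensor shape.
  model_inputs = tokenizer(
      ["Life is like a box of chocolates.", "Today is Monday."],
      padding="longest",
      return_tensors="pt",
  )
  print(model_inputs.input_ids.shape)
  ```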
- The example below shows how to use the model without a tokenizer.
  ```py
  import torch
  from transformers import AutoModelForSeq2SeqLM

  model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-small")

  # ByT5 reserves ids 0-2 for the pad, eos, and unk tokens, so each raw
  # byte value is shifted up by 3 to get its input id.
  num_special_tokens = 3
  input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens
  labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens

  loss = model(input_ids, labels=labels).loss
  loss.item()
  ```
- ByT5 uses the top byte values (258, 257, etc.) for masking instead of sentinel tokens like `<extra_id_0>`.

  ```py
  # Example: character-level denoising with mask tokens.
  input_ids = tokenizer("The dog chases a ball in the park.").input_ids
  # Replace two spans of bytes with the top byte values 258 and 257,
  # which ByT5 uses as mask tokens.
  masked_input = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]])
  output = model.generate(masked_input, max_length=100)
  ```
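  To inspect the result, the generated byte ids can be decoded back to text; a sketch:

  ```py
  # Decode the generated byte ids back to a readable string.
  print(tokenizer.batch_decode(output, skip_special_tokens=True))
  ```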
ByT5Tokenizer
[[autodoc]] ByT5Tokenizer