
# RecurrentGemma

This model was released on 2024-04-11 and added to Hugging Face Transformers on 2024-04-10.

PyTorch

The RecurrentGemma model was proposed in *RecurrentGemma: Moving Past Transformers for Efficient Open Language Models* by the Griffin, RLHF, and Gemma Teams at Google.

The abstract from the paper is the following:

*We introduce RecurrentGemma, an open language model which uses Google's novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.*


This model was contributed by Arthur Zucker. The original code can be found here.
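Below is a minimal generation sketch, assuming the pretrained `google/recurrentgemma-2b` checkpoint on the Hub; any RecurrentGemma checkpoint should work the same way through the standard `AutoModelForCausalLM` API.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (checkpoint name is an assumption; swap in
# the checkpoint you actually want, e.g. the instruction-tuned variant).
tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Recurrent neural networks are", return_tensors="pt")

# Thanks to the fixed-size Griffin state, decoding does not grow a KV cache
# with sequence length the way a pure-attention transformer does.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```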

## RecurrentGemmaConfig

[[autodoc]] RecurrentGemmaConfig
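A short sketch of instantiating a randomly initialized model from a configuration; the claim that the default values resemble the 2B checkpoint is an assumption, so check the reference above for the exact defaults.

```python
from transformers import RecurrentGemmaConfig, RecurrentGemmaModel

# Build a configuration with default values (assumed to follow the
# 2B-style architecture; see the config reference for specifics).
config = RecurrentGemmaConfig()

# Initialize a model with random weights from that configuration.
model = RecurrentGemmaModel(config)
print(config)
```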

## RecurrentGemmaModel

[[autodoc]] RecurrentGemmaModel
    - forward
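A minimal sketch of a forward pass through the bare model, which returns hidden states rather than logits; the checkpoint name is an assumption as above.

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaModel

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaModel.from_pretrained("google/recurrentgemma-2b")

inputs = tokenizer("Hello, Griffin!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One hidden state per input token: (batch, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```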

## RecurrentGemmaForCausalLM

[[autodoc]] RecurrentGemmaForCausalLM
    - forward
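A minimal sketch of the causal-LM head, using the standard Transformers convention that passing `labels` computes the next-token cross-entropy loss; the checkpoint name is an assumption as above.

```python
import torch
from transformers import AutoTokenizer, RecurrentGemmaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/recurrentgemma-2b")
model = RecurrentGemmaForCausalLM.from_pretrained("google/recurrentgemma-2b")

text = "RecurrentGemma keeps a fixed-size state during decoding."
inputs = tokenizer(text, return_tensors="pt")

# Using the input ids as labels computes the shifted next-token loss.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss, outputs.logits.shape)
```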