Evolla
This model was released on 2025-01-05 and added to Hugging Face Transformers on 2025-07-26.
Overview
The Evolla model was proposed in Decoding the Molecular Language of Proteins with Evolla by Zhou et al.
Evolla is an advanced 80-billion-parameter protein-language generative model designed to decode the molecular language of proteins. It integrates information from protein sequences, structures, and user queries to generate precise and contextually nuanced insights into protein function. Trained on an unprecedented AI-generated dataset of 546 million protein question-answer pairs and 150 billion word tokens, Evolla significantly advances research in proteomics and functional genomics, providing expert-level insights and shedding light on the molecular logic encoded in proteins.
The abstract from the paper is the following:
Proteins, nature’s intricate molecular machines, are the products of billions of years of evolution and play fundamental roles in sustaining life. Yet, deciphering their molecular language - that is, understanding how protein sequences and structures encode and determine biological functions - remains a corner-stone challenge in modern biology. Here, we introduce Evolla, an 80 billion frontier protein-language generative model designed to decode the molecular language of proteins. By integrating information from protein sequences, structures, and user queries, Evolla generates precise and contextually nuanced insights into protein function. A key innovation of Evolla lies in its training on an unprecedented AI-generated dataset: 546 million protein question-answer pairs and 150 billion word tokens, designed to reflect the immense complexity and functional diversity of proteins. Post-pretraining, Evolla integrates Direct Preference Optimization (DPO) to refine the model based on preference signals and Retrieval-Augmented Generation (RAG) for external knowledge incorporation, improving response quality and relevance. To evaluate its performance, we propose a novel framework, Instructional Response Space (IRS), demonstrating that Evolla delivers expert-level insights, advancing research in proteomics and functional genomics while shedding light on the molecular logic encoded in proteins. The online demo is available at http://www.chat-protein.com/.
Examples:
```python
import torch

from transformers import EvollaForProteinText2Text, EvollaProcessor

processor = EvollaProcessor.from_pretrained("westlake-repl/Evolla-10B-DPO-hf")
model = EvollaForProteinText2Text.from_pretrained("westlake-repl/Evolla-10B-DPO-hf")

# Each aa_seq should have the same length as its foldseek string.
protein_inputs = [
    {
        "aa_seq": "MATGGRRG...",
        "foldseek": "###lqpfd...",  # hashtags mark low-confidence foldseek tokens
    },
    {
        "aa_seq": "MLPGLALL...",
        "foldseek": "dfwwkwad...",
    },
]
message_list = [
    [
        {
            "role": "system",
            "content": "You are an AI expert that can answer any questions about protein.",
        },
        {"role": "user", "content": "What is the function of this protein?"},
    ],
    [
        {
            "role": "system",
            "content": "You are an AI expert that can answer any questions about protein.",
        },
        {"role": "user", "content": "What is the function of this protein?"},
    ],
]
input_dict = processor(
    protein_inputs,
    message_list,
    return_tensors="pt",
    text_max_length=512,
    protein_max_length=1024,
)
with torch.no_grad():
    generated_ids = model.generate(**input_dict)
generated_texts = processor.batch_decode(
    generated_ids, skip_special_tokens=True
)
```

Tips:
- This model was contributed by Xibin Bayes Zhou.
- The original code can be found here.
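If only the model's answers are needed, the prompt tokens can be stripped before decoding. The follow-up below is a minimal sketch building on the example above; it assumes the processor output exposes an `input_ids` tensor and that `generate()` returns prompt and continuation together, as is usual for decoder-only backbones:

```python
# Sketch: decode only the newly generated tokens. Assumes "input_ids" is
# present in input_dict and that generate() prepends the prompt, which is
# the common behavior for decoder-only models.
prompt_length = input_dict["input_ids"].shape[1]
answers = processor.batch_decode(
    generated_ids[:, prompt_length:], skip_special_tokens=True
)
```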
EvollaConfig
[[autodoc]] EvollaConfig
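As with other Transformers models, a configuration can be used to instantiate a randomly initialized model. The snippet below is a sketch of that standard config-to-model pattern with default arguments; check the autodoc above for Evolla-specific fields:

```python
from transformers import EvollaConfig, EvollaModel

# Standard Transformers pattern: build a model with random weights from a
# default configuration (Evolla-specific fields are left at their defaults).
config = EvollaConfig()
model = EvollaModel(config)
```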
EvollaModel
[[autodoc]] EvollaModel - forward
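The base model can also be called directly for feature extraction rather than text generation. This is a hedged sketch: it assumes the processor output can be passed straight to `forward()` and that the outputs expose `last_hidden_state` in the usual Transformers convention; consult the signature documented above:

```python
import torch

from transformers import EvollaModel, EvollaProcessor

processor = EvollaProcessor.from_pretrained("westlake-repl/Evolla-10B-DPO-hf")
base_model = EvollaModel.from_pretrained("westlake-repl/Evolla-10B-DPO-hf")

# Reuses protein_inputs and message_list from the generation example above.
inputs = processor(protein_inputs, message_list, return_tensors="pt")
with torch.no_grad():
    # Assumption: forward() accepts the processor output directly.
    outputs = base_model(**inputs)
# Usual Transformers output field for final-layer hidden states.
print(outputs.last_hidden_state.shape)
```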
EvollaForProteinText2Text
[[autodoc]] EvollaForProteinText2Text - forward
EvollaProcessor
[[autodoc]] EvollaProcessor - __call__