Community

This page gathers resources around 🤗 Transformers developed by the community.

| Resource | Description | Author |
|---|---|---|
| Hugging Face Transformers Glossary Flashcards | A set of flashcards based on the Transformers Docs Glossary, put into a form that can be easily learned/revised using Anki, an open-source, cross-platform app designed for long-term knowledge retention. See this introductory video on how to use the flashcards. | Darigov Research |
Community notebooks:

| Notebook | Description | Author |
|---|---|---|
| Fine-tune a pre-trained Transformer to generate lyrics | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | Aleksey Korshuk |
| Train T5 on TPU | How to train T5 on SQuAD with Transformers and nlp | Suraj Patil |
| Fine-tune T5 for Classification and Multiple Choice | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | Suraj Patil |
| Fine-tune DialoGPT on New Datasets and Languages | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | Nathan Cooper |
| Long Sequence Modeling with Reformer | How to train on sequences as long as 500,000 tokens with Reformer | Patrick von Platen |
| Fine-tune BART for Summarization | How to fine-tune BART for summarization with fastai using blurr | Wayde Gilliam |
| Fine-tune a pre-trained Transformer on anyone's tweets | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | Boris Dayma |
| Optimize 🤗 Hugging Face models with Weights & Biases | A complete tutorial showcasing W&B integration with Hugging Face | Boris Dayma |
| Pretrain Longformer | How to build a "long" version of existing pretrained models | Iz Beltagy |
| Fine-tune Longformer for QA | How to fine-tune a Longformer model for the QA task | Suraj Patil |
| Evaluate Model with 🤗nlp | How to evaluate Longformer on TriviaQA with nlp | Patrick von Platen |
| Fine-tune T5 for Sentiment Span Extraction | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | Lorenzo Ampil |
| Fine-tune DistilBERT for Multiclass Classification | How to fine-tune DistilBERT for multiclass classification with PyTorch | Abhishek Kumar Mishra |
| Fine-tune BERT for Multi-label Classification | How to fine-tune BERT for multi-label classification using PyTorch | Abhishek Kumar Mishra |
| Fine-tune T5 for Summarization | How to fine-tune T5 for summarization in PyTorch and track experiments with WandB | Abhishek Kumar Mishra |
| Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing | How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing | Michael Benesty |
| Pretrain Reformer for Masked Language Modeling | How to train a Reformer model with bi-directional self-attention layers | Patrick von Platen |
| Expand and Fine-Tune Sci-BERT | How to increase the vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it | Tanmay Thakur |
| Fine-Tune BlenderBotSmall for Summarization using the Trainer API | How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API | Tanmay Thakur |
| Fine-tune Electra and interpret with Integrated Gradients | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | Eliza Szczechla |
| Fine-tune a non-English GPT-2 Model with the Trainer class | How to fine-tune a non-English GPT-2 model with the Trainer class | Philipp Schmid |
| Fine-tune a DistilBERT Model for Multi-Label Classification | How to fine-tune a DistilBERT model for the multi-label classification task | Dhaval Taunk |
| Fine-tune ALBERT for sentence-pair classification | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | Nadir El Manouzi |
| Fine-tune RoBERTa for sentiment analysis | How to fine-tune a RoBERTa model for sentiment analysis | Dhaval Taunk |
| Evaluating Question Generation Models | How accurate are the answers to questions generated by your seq2seq transformer model? | Pascal Zoleko |
| Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | How to warm-start an EncoderDecoderModel with a google-bert/bert-base-uncased checkpoint for summarization on CNN/Dailymail | Patrick von Platen |
| Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum | How to warm-start a shared EncoderDecoderModel with a FacebookAI/roberta-base checkpoint for summarization on BBC/XSum | Patrick von Platen |
| Fine-tune TAPAS on Sequential Question Answering (SQA) | How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset | Niels Rogge |
| Evaluate TAPAS on Table Fact Checking (TabFact) | How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | Niels Rogge |
| Fine-tune mBART for translation | How to fine-tune mBART using Seq2SeqTrainer for Hindi-to-English translation | Vasudev Gupta |
| Fine-tune LayoutLM on FUNSD (a form understanding dataset) | How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents | Niels Rogge |
| Fine-Tune DistilGPT2 and Generate Text | How to fine-tune DistilGPT2 and generate text | Aakash Tripathi |
| Fine-Tune LED on up to 8K tokens | How to fine-tune LED on PubMed for long-range summarization | Patrick von Platen |
| Evaluate LED on Arxiv | How to effectively evaluate LED on long-range summarization | Patrick von Platen |
| Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset) | How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification | Niels Rogge |
| Wav2Vec2 CTC decoding with GPT2 adjustment | How to decode a CTC sequence with language model adjustment | Eric Lam |
| Fine-tune BART for summarization in two languages with the Trainer class | How to fine-tune BART for summarization in two languages with the Trainer class | Eliza Szczechla |
| Evaluate Big Bird on Trivia QA | How to evaluate BigBird on long-document question answering on Trivia QA | Patrick von Platen |
| Create video captions using Wav2Vec2 | How to create YouTube captions from any video by transcribing the audio with Wav2Vec | Niklas Muennighoff |
| Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | Niels Rogge |
| Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | Niels Rogge |
| Evaluate LUKE on Open Entity, an entity typing dataset | How to evaluate LukeForEntityClassification on the Open Entity dataset | Ikuya Yamada |
| Evaluate LUKE on TACRED, a relation extraction dataset | How to evaluate LukeForEntityPairClassification on the TACRED dataset | Ikuya Yamada |
| Evaluate LUKE on CoNLL-2003, an important NER benchmark | How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset | Ikuya Yamada |
| Evaluate BigBird-Pegasus on the PubMed dataset | How to evaluate BigBirdPegasusForConditionalGeneration on the PubMed dataset | Vasudev Gupta |
| Speech Emotion Classification with Wav2Vec2 | How to leverage a pretrained Wav2Vec2 model for emotion classification on the MEGA dataset | Mehrdad Farahani |
| Detect objects in an image with DETR | How to use a trained DetrForObjectDetection model to detect objects in an image and visualize attention | Niels Rogge |
| Fine-tune DETR on a custom object detection dataset | How to fine-tune DetrForObjectDetection on a custom object detection dataset | Niels Rogge |
| Fine-tune T5 for Named Entity Recognition | How to fine-tune T5 on a named entity recognition task | Ogundepo Odunayo |
| Fine-Tuning an Open-Source LLM using QLoRA with MLflow and PEFT | How to use QLoRA and PEFT to fine-tune an LLM in a memory-efficient way, while using MLflow to manage experiment tracking | Yuki Watanabe |
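One recurring trick in the notebooks above is dynamic padding / bucketing (the technique behind the "Speed up Fine-Tuning in Transformers" notebook): instead of padding every sequence to a global maximum length, sort examples by length, batch neighbours together, and pad each batch only to its own longest sequence, so short batches waste far fewer padding tokens. A minimal, library-free sketch of the idea; all function names and the toy data are illustrative, not taken from any notebook:

```python
# Dynamic padding / bucketing sketch: sort token-id sequences by length,
# batch neighbours together, and pad each batch only to that batch's
# own maximum length rather than a global maximum.

def make_buckets(examples, batch_size):
    """Group sequences of similar length into batches."""
    ordered = sorted(examples, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

def pad_batch(batch, pad_id=0):
    """Pad every sequence in a batch to the batch's own maximum length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in batch]

# Toy token-id sequences of mixed lengths.
examples = [[1, 2], [3, 4, 5, 6], [7], [8, 9, 10]]
batches = [pad_batch(b) for b in make_buckets(examples, batch_size=2)]
# The short batch is padded to length 2 and the long batch to length 4,
# instead of padding all four sequences to the global maximum of 4.
```

In the actual notebooks this role is typically played by a length-sorted sampler plus a collate function that pads per batch, but the saving comes from exactly this length-grouping idea.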