
How to Fine-Tune Hugging Face Transformers in Python for NLP Classification

Learn how to fine-tune Hugging Face transformer models such as BART, BERT, RoBERTa, and GPT-2 on a text classification task to get the best predictions in Python, in this free educational lesson.

Learn how to fine-tune the transformer models RoBERTa, Electra, BERT, T5, and ALBERT in Python in this free beginner introduction to fine-tuning with Hugging Face, built around a simple NLP classification problem. In this free Python lesson, we use the transformers library's Auto Model classes to import each different model with nearly identical Python code. This pairs nicely with the Auto Tokenizer, which ensures our texts are tokenized in the appropriate format for each model without having to import a separate tokenizer or model function.


Using the Auto Model and Auto Tokenizer classes makes setting up experiments with transformers much faster and easier.
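A minimal sketch of the idea, assuming a 4-class task; the checkpoint names below are illustrative examples from the Hugging Face Hub, not necessarily the lesson's exact list:

```python
# Hugging Face Hub checkpoint names (illustrative; any sequence-classification
# checkpoint can be loaded the same way).
MODEL_CHECKPOINTS = [
    "bert-base-uncased",
    "roberta-base",
    "google/electra-base-discriminator",
]

NUM_CLASSES = 4  # our news-article dataset has 4 labels


def load_model_and_tokenizer(checkpoint: str):
    """Load a matching tokenizer/model pair from a single checkpoint name."""
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=NUM_CLASSES
    )
    return tokenizer, model


if __name__ == "__main__":
    # Downloading weights needs a network connection, so this only runs
    # when the script is executed directly.
    for name in MODEL_CHECKPOINTS:
        tokenizer, model = load_model_and_tokenizer(name)
        print(name, "loaded")
```

Because every model is loaded through the same pair of Auto classes, swapping architectures is just a matter of changing the checkpoint string.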


Here we fine-tune these various sequence-classification models on a news-article classification problem with 4 classes. To show the effect of fine-tuning, we will look at each model's accuracy and confusion matrix first without fine-tuning, and then again after fine-tuning is complete. In these free Python lessons, we train the models for only 3 epochs, and the effect of such a short training time on the final accuracy is remarkable.
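The before/after comparison only needs two metrics. As a sketch (not the lesson's exact code), accuracy and a confusion matrix can be computed directly from predicted and true label ids:

```python
import numpy as np


def accuracy_and_confusion(y_true, y_pred, num_classes=4):
    """Return overall accuracy and a num_classes x num_classes confusion matrix."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    accuracy = float((y_true == y_pred).mean())
    # confusion[i, j] = number of items with true class i predicted as class j
    confusion = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        confusion[t, p] += 1
    return accuracy, confusion


# Run once on the un-tuned model's predictions for the baseline, and once
# after 3 epochs of fine-tuning, e.g.:
# acc_before, cm_before = accuracy_and_confusion(labels, baseline_preds)
# acc_after, cm_after = accuracy_and_confusion(labels, finetuned_preds)
```

Comparing the two confusion matrices makes it easy to see which classes the un-tuned model confused and whether fine-tuning resolved them.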


A challenging part of fine-tuning a Hugging Face model is that the size of these models makes them hard to even fit in your GPU's memory for speedy training. DistilBERT and Electra are smaller models and easier to train on a normal GPU, but I had to use Google Colaboratory's most expensive GPU, the A100, just to have enough memory to hold the larger models for training. Even on the best GPU, training some of these transformers for only 3 epochs took many hours. In total, fine-tuning 6 models for this free educational Python project had a compute cost of $10 USD, generously donated by DataSimple.education.
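When a model barely fits in GPU memory, a few training settings can help. The values below are a sketch of common options, not the lesson's exact configuration:

```python
# Illustrative memory-saving settings; these keys correspond to
# transformers.TrainingArguments parameters.
MEMORY_SAVING_ARGS = {
    "per_device_train_batch_size": 4,  # smaller batches use less GPU memory
    "gradient_accumulation_steps": 8,  # keeps an effective batch size of 32
    "fp16": True,                      # mixed precision roughly halves activation memory
    "num_train_epochs": 3,             # the short training run used in this lesson
}

# These would be unpacked into TrainingArguments, e.g.:
# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="out", **MEMORY_SAVING_ARGS)
```

Shrinking the per-device batch while accumulating gradients trades speed for memory, which is often enough to fit a large model on a mid-range GPU instead of an A100.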


Join us and learn how to fine-tune the powerful transformers available in the Hugging Face library in this free educational Python class.

Summary

In this free Python NLP lesson, we discussed how to fine-tune a Hugging Face transformers NLP classification model. We tested each model first to set a baseline, so we could better judge the value of fine-tuning for only 3 epochs. We tested the transformer models RoBERTa, Electra, BERT, T5, and ALBERT, judging each by its prediction accuracy and a simple confusion matrix for visual inspection.


In general, we increased the accuracy from 0.25 to 0.95 after a short amount of fine-tuning.


Note that we implemented the process with the PyTorch modules, as the TensorFlow versions were still unstable at the time of writing.


