
Fine-Tuning Your Own Llama 3 Model

Key Takeaways:
  • Learn how to use Hugging Face Python packages to fine-tune LLMs.
  • Understand the workflow for customizing LLMs by fine-tuning.
  • Learn how to evaluate the success of your fine-tuned model.
Tuesday, August 27, 11 AM ET

Description

Meta's Llama 3 is one of the most powerful open-weight LLMs and forms the basis of many commercial generative AI applications. Fine-tuning is a technique for getting better performance from LLMs on specific use cases, and it is fast becoming an essential skill for organizations building AI applications.

In this session, Maxime, one of the world's leading thinkers in generative AI research, shows you how to fine-tune the Llama 3 LLM using Python and the Hugging Face platform. You'll take a stock Llama 3 model, process data for training, fine-tune the model, and evaluate its performance for an industry use case.
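To give a flavor of this workflow, here is a minimal sketch of supervised fine-tuning with the Hugging Face stack (transformers, datasets, peft, and trl). The model ID, dataset, and hyperparameters below are illustrative assumptions, not necessarily what the webinar uses.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

# Assumed base model; Meta-Llama-3-8B-Instruct is gated and requires
# accepting Meta's license on the Hugging Face Hub first.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative instruction dataset with a plain "text" column;
# swap in your own domain-specific data here.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# LoRA trains small adapter matrices instead of all model weights,
# which keeps GPU memory requirements modest.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="llama3-finetuned",
    dataset_text_field="text",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
trainer.save_model()  # writes the trained LoRA adapter to output_dir
```

The LoRA adapter keeps the example runnable on a single consumer GPU; full fine-tuning follows the same trainer setup with the peft_config removed, at a much higher memory cost.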

Presenter Bio

Maxime Labonne, Senior Staff Machine Learning Scientist at Liquid AI

Maxime Labonne is a Senior Staff Machine Learning Scientist at Liquid AI, serving as the head of post-training. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is recognized as a Google Developer Expert in AI/ML.

An active blogger, he has made significant contributions to the open-source community, including the LLM Course on GitHub, tools such as LLM AutoEval, and several state-of-the-art models like NeuralBeagle and Phixtral. He is the author of the best-selling book “Hands-On Graph Neural Networks Using Python,” published by Packt.
