Fine-Tuning Your Own Llama 2 Model

In this session, we take a step-by-step approach to fine-tuning a Llama 2 model on a custom dataset.
November 2023
View Dataset

The advent of large language models has taken the AI world by storm. Beyond proprietary foundation models like GPT-4, open-source models are playing a pivotal role in driving the AI revolution forward, democratizing access for anyone looking to use these models in production. One of the biggest challenges in getting high-quality output from open-source models lies in fine-tuning, where we improve a model's outputs by training it on a set of instructions.

In this session, we take a step-by-step approach to fine-tuning a Llama 2 model on a custom dataset. First, we build our own dataset, removing duplicates and analyzing token counts. Then, we fine-tune the Llama 2 model using state-of-the-art techniques from the Axolotl library. Finally, we run our fine-tuned model and evaluate its performance.
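
As a rough illustration of the dataset-preparation step, the sketch below deduplicates an instruction dataset and inspects its token counts with a Llama 2 tokenizer. The file name, column names, and tokenizer checkpoint are illustrative assumptions, not necessarily the exact ones used in the session.

```python
# Minimal sketch: deduplicate an instruction dataset and analyze token counts.
# File name and column names ("instruction", "output") are hypothetical.
import pandas as pd
from transformers import AutoTokenizer

df = pd.read_json("instructions.jsonl", lines=True)
df = df.drop_duplicates(subset=["instruction", "output"])  # drop exact duplicates

# The official Llama 2 tokenizer is gated on Hugging Face; any compatible
# tokenizer gives a reasonable estimate of example lengths.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
df["n_tokens"] = df.apply(
    lambda row: len(tokenizer(row["instruction"] + "\n" + row["output"])["input_ids"]),
    axis=1,
)
print(df["n_tokens"].describe())  # check that examples fit the context window
```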

Key Takeaways:

  • How to build an instruction dataset
  • How to fine-tune a Llama 2 model
  • How to use and evaluate the trained model (a short sketch follows this list)
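
To ground the last two takeaways, here is a minimal sketch of loading a fine-tuned checkpoint and generating a response with the Hugging Face transformers library. The output directory and the Alpaca-style prompt template are assumptions; the correct prompt format depends on how the model was fine-tuned, and in the session the training step itself is driven by an Axolotl configuration rather than hand-written code.

```python
# Minimal sketch: run a fine-tuned Llama 2 model for a quick qualitative check.
# "./llama-2-7b-finetuned" is a placeholder for wherever the weights are saved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-2-7b-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Assumed Alpaca-style template; match whatever template was used in training.
prompt = "### Instruction:\nExplain what fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, evaluation ranges from spot-checking generations like this one to scoring the model on a held-out set of instructions.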

Note: To participate in this code-along, you will need a valid Google Colab account. Get started here.

Additional Resources

Solution Notebook (dataset)

Solution Model

[SKILL TRACK] AI Fundamentals

[BLOG] Introduction to Meta AI’s LLaMA

[BLOG] Fine-Tuning LLaMA 2: A Step-by-Step Guide to Customizing the Large Language Model

[BLOG] Llama.cpp Tutorial: A Complete Guide to Efficient LLM Inference and Implementation

Topics
Related

tutorial

Fine-Tuning LLaMA 2: A Step-by-Step Guide to Customizing the Large Language Model

Learn how to fine-tune Llama-2 on Colab using new techniques to overcome memory and computing limitations to make open-source large language models more accessible.

Abid Ali Awan

12 min

tutorial

Fine-Tuning Llama 3 and Using It Locally: A Step-by-Step Guide

We'll fine-tune Llama 3 on a dataset of patient-doctor conversations, creating a model tailored for medical dialogue. After merging, converting, and quantizing the model, it will be ready for private local use via the Jan application.

Abid Ali Awan

19 min

tutorial

Fine Tuning Google Gemma: Enhancing LLMs with Customized Instructions

Learn how to run inference on GPUs/TPUs and fine-tune the latest Gemma 7b-it model on a role-play dataset.

Abid Ali Awan

12 min

tutorial

An Introductory Guide to Fine-Tuning LLMs

Fine-tuning Large Language Models (LLMs) has revolutionized Natural Language Processing (NLP), offering unprecedented capabilities in tasks like language translation, sentiment analysis, and text generation. This transformative approach leverages pre-trained models like GPT-2, enhancing their performance on specific domains through the fine-tuning process.

Josep Ferrer

12 min

tutorial

Fine-Tune and Run Inference on Google's Gemma Model Using TPUs for Enhanced Speed and Performance

Learn to infer and fine-tune LLMs with TPUs and implement model parallelism for distributed training on 8 TPU devices.

Abid Ali Awan

12 min

code-along

Retrieval Augmented Generation with LlamaIndex

In this session, you'll learn how to get started with Chroma and perform Q&A on some documents using Llama 2, the RAG technique, and LlamaIndex.

Dan Becker
