Working with Llama 3
Explore the latest techniques for running the Llama LLM locally, fine-tuning it, and integrating it within your stack.
4 hours · 14 videos · 43 exercises · 2,969 learners · Statement of Accomplishment
Course Description
Learn to Use the Llama Large Language Model (LLM)
What is the Llama LLM, and how can you use it to enhance your projects? This course will teach you about the architecture of Llama and its applications. It will also provide you with the techniques required to fine-tune and deploy the model for specific tasks, and to optimize its performance.

Understand the Llama Model and Its Applications
You'll start with an introduction to the foundational concepts of Llama, learning how to interact with Llama models and exploring their general use cases. You'll also gain hands-on experience setting up, running, and performing inference using the llama-cpp-python library.

Learn to Fine-Tune and Deploy Llama Models
You'll explore dataset preprocessing, model fine-tuning with Hugging Face, and advanced optimization techniques for efficient performance. To wrap up the course, you'll implement a RAG system using Llama and LangChain.

Throughout the course, you'll engage with practical examples, such as creating a customer service bot, to reinforce your understanding of these concepts.
This is an ideal introduction to Llama for developers and AI practitioners. It explores the foundations of this powerful open-source LLM and how to apply it in real-world scenarios.
Included in the following track:
Associate AI Engineer for Data Scientists

- 1
Understanding LLMs and Llama
Free

The field of large language models has exploded, and Llama is a standout. With Llama 3, the possibilities have soared. Explore how it was built, learn to use it with llama-cpp-python, and understand how to craft precise prompts to control the model's behavior.
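As a taste of what this chapter covers, here is a minimal sketch of local inference with llama-cpp-python; the GGUF file name and the prompt are placeholders, not assets provided by the course.

```python
from llama_cpp import Llama

# Load a locally downloaded, quantized Llama 3 model (path is a placeholder).
llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Run a single text completion from a plain prompt.
output = llm(
    "Summarize what a large language model is in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```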
- 2
Using Llama Locally
Language models are often useful as agents, and in this chapter you'll explore how to leverage llama-cpp-python for local text generation and for creating agents with personalities. You'll also learn how decoding parameters affect output quality. Finally, you'll build specialized inference classes for diverse text generation tasks.
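The sketch below combines the two ideas this chapter builds on: a personality set through the system message, and output quality shaped by decoding parameters. It uses llama-cpp-python's chat API; the model path, persona, and parameter values are illustrative only.

```python
from llama_cpp import Llama

llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

# The system message gives the agent its personality; temperature and top_p
# control how deterministic or varied the sampled output is.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a cheerful personal shopping assistant."},
        {"role": "user", "content": "Suggest a gift for a friend who loves hiking."},
    ],
    temperature=0.9,  # higher values give more varied, "creative" wording
    top_p=0.95,       # nucleus sampling cutoff
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```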
- Performing inference with Llama (50 xp)
- Creating a JSON inventory list (100 xp)
- Generating answers with a JSON schema (100 xp)
- Tuning inference parameters (50 xp)
- Making safe responses (100 xp)
- Making a creative chatbot (100 xp)
- Creating an LLM inference class (50 xp)
- Personal shopping agent (100 xp)
- Multi-agent conversations (100 xp)
- Improving the Agent class (100 xp)

- 3
Finetuning Llama for Customer Service using Hugging Face & Bitext Dataset
Language models are powerful, and you can unlock their full potential with the right techniques. Learn how fine-tuning smaller Llama models can significantly improve their performance on specific tasks. Then discover parameter-efficient fine-tuning techniques such as LoRA, and explore quantization to load and use even larger models.
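To make the LoRA idea concrete, here is a minimal setup sketch using Hugging Face transformers and peft; the model name and hyperparameter values are placeholders, not the exact configuration used in the exercises.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model: any small causal LM works for this sketch.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# LoRA injects small trainable adapter matrices into the attention projections,
# so only a fraction of the parameters are updated during fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few weights LoRA actually trains
```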
- Preprocessing data for fine-tuning (50 xp)
- Filtering datasets for evaluation (100 xp)
- Creating training samples (100 xp)
- Model fine-tuning with Hugging Face (50 xp)
- Setting up Llama training arguments (100 xp)
- Fine-tuning Llama for customer service QA (100 xp)
- Evaluate generated text using ROUGE (100 xp)
- Efficient fine-tuning with LoRA (50 xp)
- Using LoRA adapters (100 xp)
- LoRA fine-tuning Llama for customer service (100 xp)

- 4
Creating a Customer Service Chatbot with Llama and LangChain
LLMs work best when they solve a real-world problem, such as a customer service chatbot built with Llama and LangChain. Explore how to customize LangChain, integrate fine-tuned models, and craft prompt templates for this use case, using retrieval augmented generation (RAG) to enhance your chatbot's intelligence and accuracy. This chapter equips you with the technical skills to develop responsive, specialized chatbots.
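The sketch below shows the general shape of such a RAG pipeline with LangChain and a local Llama model. Import paths vary between LangChain releases, and the model path and documents are placeholders rather than the course's exact setup.

```python
from langchain_community.llms import LlamaCpp
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Local Llama model served through llama.cpp (path is a placeholder).
llm = LlamaCpp(model_path="./llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Embed a handful of support documents and index them for retrieval.
docs = [
    "Orders can be returned within 30 days of delivery.",
    "Standard shipping takes 3 to 5 business days.",
]
vectorstore = FAISS.from_texts(docs, HuggingFaceEmbeddings())

# The chain retrieves the most relevant documents and passes them to the LLM.
rag_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(rag_chain.invoke({"query": "What is the return policy?"})["result"])
```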
- Getting started with LangChain (50 xp)
- Creating a LangChain pipeline (100 xp)
- Adding custom template to LangChain pipeline (100 xp)
- Customizing LangChain for specific use-cases (50 xp)
- Using a customized Hugging Face model in LangChain (100 xp)
- Closed question-answering with LangChain (100 xp)
- Document retrieval with Llama (50 xp)
- Preparing documents for retrieval (100 xp)
- Creating retrieval function (100 xp)
- Building a Retrieval Augmented Generation system (50 xp)
- Creating a RAG pipeline (100 xp)
- Extract retrieved documents from a RAG chain (100 xp)
- Extract LLM response from a RAG chain (100 xp)
- Recap: Working with Llama 3 (50 xp)
Prerequisites
Introduction to LLMs in Python

Instructor
Imtihan Ahmed, Machine Learning Engineer