Reinforcement Learning from Human Feedback (RLHF)
Learn how to make GenAI models truly reflect human values while gaining hands-on experience with advanced LLMs.
Start Course for Free · 4 hours · 13 videos · 38 exercises
Course Description
Combine the efficiency of generative AI with the insight of human expertise in this course on Reinforcement Learning from Human Feedback. You'll learn how to make GenAI models truly reflect human values and preferences while getting hands-on experience with LLMs. You'll also navigate the complexities of reward models and learn how to build on LLMs to produce AI that not only learns but also adapts to real-world scenarios.
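The reward models mentioned above are typically trained on human preference pairs. As a minimal sketch (not course code), the standard Bradley-Terry formulation models the probability that the chosen response beats the rejected one as sigmoid of the reward difference, giving this loss:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss used to train RLHF reward models:
    -log P(chosen preferred over rejected), where that probability
    is sigmoid(reward_chosen - reward_rejected)."""
    diff = reward_chosen - reward_rejected
    # -log(sigmoid(diff)), written in a numerically stable form
    return math.log1p(math.exp(-diff))

# The loss shrinks when the reward model ranks the chosen answer higher,
# and grows when it ranks the rejected answer higher.
print(round(preference_loss(2.0, 0.0), 4))  # small loss: correct ranking
print(round(preference_loss(0.0, 2.0), 4))  # large loss: wrong ranking
```

Minimizing this loss over many comparison pairs pushes the reward model to score human-preferred responses higher.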
1. Foundational Concepts
Free. This chapter introduces the basics of Reinforcement Learning from Human Feedback (RLHF), a technique that uses human input to help AI models learn more effectively. Get started with RLHF by understanding how it differs from traditional reinforcement learning and why human feedback can enhance AI performance across domains.
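A central data-preparation step in this chapter is turning raw human comparisons into a preference dataset. As an illustrative sketch with hypothetical records (the field names and example text are assumptions, not the course's dataset), each comparison becomes a (prompt, chosen, rejected) triple:

```python
# Hypothetical raw feedback: each record holds a prompt, two candidate
# responses, and the index of the response the annotator preferred.
raw_feedback = [
    {"prompt": "Summarize: the cat sat on the mat.",
     "responses": ["A cat sat on a mat.", "Mat cat the sat on."],
     "preferred": 0},
]

def to_preference_dataset(records):
    """Convert raw human comparisons into (prompt, chosen, rejected)
    triples, the format reward-model training expects."""
    dataset = []
    for rec in records:
        chosen = rec["responses"][rec["preferred"]]
        rejected = rec["responses"][1 - rec["preferred"]]
        dataset.append({"prompt": rec["prompt"],
                        "chosen": chosen,
                        "rejected": rejected})
    return dataset

print(to_preference_dataset(raw_feedback)[0]["chosen"])
```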
- Introduction to RLHF (50 xp)
- Text generation with RLHF (100 xp)
- Classifying generated text for RLHF (100 xp)
- RL vs. RLHF (50 xp)
- Exploring pre-trained LLMs (50 xp)
- Tokenize a text dataset (100 xp)
- Fine-tuning for review classification (100 xp)
- Preparing data for RLHF (50 xp)
- Preparing the preference dataset (100 xp)
- Extracting prompts (50 xp)

2. Gathering Human Feedback
Discover how to set up systems for gathering human feedback in this chapter. Learn best practices for collecting high-quality data, from pairwise comparisons to uncertainty sampling, and explore strategies for enhancing your data collection.
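Uncertainty sampling, one of the techniques covered here, routes the examples the current model is least confident about to human annotators first. A minimal sketch, assuming hypothetical per-example confidence scores (the example ids and values are illustrative):

```python
def select_for_labeling(predictions, k=2):
    """Uncertainty sampling: return the k example ids the model is least
    sure about, so human feedback is spent where it helps most.
    `predictions` maps an example id to the model's confidence in its
    top label (hypothetical scores)."""
    ranked = sorted(predictions.items(), key=lambda item: item[1])
    return [example_id for example_id, _ in ranked[:k]]

confidences = {"ex1": 0.97, "ex2": 0.51, "ex3": 0.88, "ex4": 0.55}
print(select_for_labeling(confidences))  # → ['ex2', 'ex4']
```

In an active learning loop, the selected examples are labeled, the model is retrained, and selection repeats on fresh predictions.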
- Methods for high-quality feedback gathering (50 xp)
- Understanding comparison and rating in RLHF (100 xp)
- Comparing slogans for a gym campaign (100 xp)
- Measuring feedback quality and relevance (50 xp)
- Low confidence (100 xp)
- K-means for feedback clustering (100 xp)
- Active learning (50 xp)
- Implementing an active learning pipeline (100 xp)
- Active learning loop (100 xp)

3. Tuning Models with Human Feedback
In this chapter, you'll get to the core of Reinforcement Learning from Human Feedback training. This includes fine-tuning with Proximal Policy Optimization (PPO), techniques for training efficiently, and handling potential divergence between your metrics and your objectives.
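Two quantities this chapter revolves around can be sketched in a few lines (a toy illustration, not the course's training code; the `eps` and `beta` values are illustrative defaults): PPO's clipped surrogate objective, which limits how far a single update moves the policy, and the per-token KL penalty, which keeps the tuned model close to the reference model.

```python
def ppo_clipped_term(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """PPO's clipped surrogate objective for one action:
    min(ratio * A, clip(ratio, 1 - eps, 1 + eps) * A).
    Clipping prevents a single update from moving the policy too far."""
    clipped = max(1 - eps, min(1 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

def kl_penalty(logp_policy: float, logp_reference: float, beta: float = 0.1) -> float:
    """Per-token KL penalty subtracted from the reward in RLHF, keeping
    the tuned policy close to the reference (pre-RLHF) model."""
    return beta * (logp_policy - logp_reference)

# With a positive advantage, the benefit of raising the probability
# ratio is capped at 1 + eps; with a negative advantage, the loss is
# bounded from the other side at 1 - eps.
```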
4. Model Evaluation
Explore key techniques for assessing and improving model performance in this final chapter of Reinforcement Learning from Human Feedback (RLHF). From fine-tuning metrics to incorporating diverse feedback sources, you'll gain a comprehensive toolkit for refining your models effectively.
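One of the aggregation techniques covered here, majority voting across multiple feedback sources, is simple to sketch (the annotator names and labels below are hypothetical):

```python
from collections import Counter

def majority_vote(labels_by_source):
    """Aggregate labels from multiple feedback sources by majority vote,
    a basic way to combine diverse or noisy human feedback."""
    votes = Counter(labels_by_source.values())
    label, _ = votes.most_common(1)[0]
    return label

feedback = {"annotator_a": "helpful", "annotator_b": "helpful",
            "crowd_source": "unhelpful"}
print(majority_vote(feedback))  # → helpful
```

Disagreement between a source and the majority can also flag that source as potentially unreliable, which connects to the unreliable-source exercise below.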
- Model metrics and adjustments (50 xp)
- Mitigating negative KL divergence (100 xp)
- Checking the reward model (50 xp)
- Incorporating diverse feedback sources (50 xp)
- Majority voting on multiple data sources (100 xp)
- Unreliable data source identification (100 xp)
- Evaluating RLHF models (50 xp)
- Interpreting curves (50 xp)
- Evaluating RLHF with metrics (50 xp)
- Wrapping up your RLHF journey (50 xp)
Prerequisites

Deep Reinforcement Learning in Python

Contributors

Mina Parham, AI Engineer, Chubb
Join over 15 million learners and start Reinforcement Learning from Human Feedback (RLHF) today!