
Reinforcement Learning from Human Feedback (RLHF)

Learn how to make GenAI models truly reflect human values while gaining hands-on experience with advanced LLMs.

4 hours · 13 videos · 38 exercises



Course Description

Combine the efficiency of Generative AI with the understanding of human expertise in this course on Reinforcement Learning from Human Feedback. You’ll learn how to make GenAI models truly reflect human values and preferences while getting hands-on experience with LLMs. You’ll also navigate the complexities of reward models and learn how to build upon LLMs to produce AI that not only learns but also adapts to real-world scenarios.
  1. Foundational Concepts (Free)

    This chapter introduces the basics of Reinforcement Learning from Human Feedback (RLHF), a technique that uses human input to help AI models learn more effectively. Get started with RLHF by understanding how it differs from traditional reinforcement learning and why human feedback can enhance AI performance in various domains. A short data-preparation sketch follows the exercise list below.

    Introduction to RLHF (50 xp)
    Text generation with RLHF (100 xp)
    Classifying generated text for RLHF (100 xp)
    RL vs. RLHF (50 xp)
    Exploring pre-trained LLMs (50 xp)
    Tokenize a text dataset (100 xp)
    Fine-tuning for review classification (100 xp)
    Preparing data for RLHF (50 xp)
    Preparing the preference dataset (100 xp)
    Extracting prompts (50 xp)
  2. Gathering Human Feedback

    Discover how to set up systems for gathering human feedback in this chapter. Learn best practices for collecting high-quality data, from pairwise comparisons to uncertainty sampling, and explore strategies for enhancing your data collection.

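    To make pairwise comparisons and uncertainty sampling concrete, here is a hypothetical sketch; the helper functions and the mock probabilities are illustrative, not the course's own code.

```python
import torch

def record_preference(prompt, response_a, response_b, annotator_choice):
    """Store one pairwise comparison in (prompt, chosen, rejected) form."""
    if annotator_choice == "a":
        chosen, rejected = response_a, response_b
    else:
        chosen, rejected = response_b, response_a
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

def select_uncertain(prompts, preference_probs, k=2):
    """Uncertainty sampling: pick the prompts whose predicted preference
    probability is closest to 0.5, i.e. where the model is least sure and
    human feedback is most informative."""
    uncertainty = -(preference_probs - 0.5).abs()
    top = torch.topk(uncertainty, k).indices
    return [prompts[i] for i in top]

prompts = ["prompt 1", "prompt 2", "prompt 3", "prompt 4"]
preference_probs = torch.tensor([0.95, 0.52, 0.48, 0.80])  # mock model outputs
print(select_uncertain(prompts, preference_probs))  # e.g. ['prompt 2', 'prompt 3']
```

    Prioritizing the most uncertain prompts means each new human label resolves as much ambiguity as possible, which keeps annotation budgets small.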
  3. Tuning Models with Human Feedback

    In this chapter, you'll get into the core of Reinforcement Learning from Human Feedback training. This includes exploring fine-tuning with PPO, techniques for training efficiently, and handling potential divergences from your metrics' objectives.

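    A minimal sketch of a single PPO update, assuming the classic PPOTrainer interface from Hugging Face's trl library (older releases; the interface has changed in recent versions). The model name, the hard-coded response, and the scalar reward are illustrative; in practice the reward comes from a trained reward model.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# Policy with a value head, plus a frozen reference copy to penalize drift.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# One PPO step: a query, a sampled response, and its reward score.
query = tokenizer("Write a positive movie review:", return_tensors="pt").input_ids.squeeze(0)
response = tokenizer(" A heartfelt film that earns every minute.", return_tensors="pt").input_ids.squeeze(0)
reward = torch.tensor(0.9)  # stand-in for a reward model's output

stats = ppo_trainer.step([query], [response], [reward])
print(sorted(stats.keys()))  # loss and KL diagnostics reported by the trainer
```

    The frozen reference model is what keeps the tuned policy from drifting too far from its starting point: a KL penalty against it is folded into the optimization alongside the reward signal.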
  4. Model Evaluation

    Explore key techniques for assessing and improving model performance in this final chapter of Reinforcement Learning from Human Feedback (RLHF). From fine-tuning metrics to incorporating diverse feedback sources, you'll be equipped with a comprehensive toolkit to refine your models effectively.

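    A hypothetical evaluation sketch: it scores generated text against references with ROUGE (via the Hugging Face evaluate library) and computes a simple win rate from pairwise human judgments. The predictions, references, and judgments are made-up examples.

```python
import evaluate

predictions = ["A long film that pays off in the end."]
references = ["A lengthy movie that rewards patient viewers."]

# Reference-based metric: ROUGE-L overlap between generations and references.
rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references)
print(scores["rougeL"])

# Preference-based metric: fraction of pairwise comparisons in which the
# RLHF-tuned model's output was preferred over the baseline's.
judgments = ["tuned", "baseline", "tuned", "tuned"]  # mock annotator picks
win_rate = judgments.count("tuned") / len(judgments)
print(f"Win rate vs. baseline: {win_rate:.0%}")
```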

Collaborators

Francesca Donadoni

Prerequisites

Deep Reinforcement Learning in Python
Mina Parham

AI Engineer, Chubb


What do other learners have to say?

Join over 15 million learners and start Reinforcement Learning from Human Feedback (RLHF) today!
