Evaluating LLM Responses
In this session, we cover the evaluations that are useful for reducing hallucination and improving the retrieval quality of LLM applications.
Nov 2023
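As a taste of the kind of retrieval-quality evaluation the session covers, here is a minimal sketch of two common retrieval metrics, hit rate and mean reciprocal rank (MRR). The example data and function names are illustrative assumptions, not code from the session; it only assumes you have, for each query, the ranked document IDs your retriever returned and the ID of the known-relevant document.

```python
def hit_rate(results, relevant, k=5):
    """Fraction of queries whose relevant doc appears in the top-k results."""
    hits = sum(1 for docs, rel in zip(results, relevant) if rel in docs[:k])
    return hits / len(relevant)

def mean_reciprocal_rank(results, relevant):
    """Average of 1/rank of the relevant doc (0 if it was not retrieved)."""
    total = 0.0
    for docs, rel in zip(results, relevant):
        if rel in docs:
            total += 1.0 / (docs.index(rel) + 1)  # ranks are 1-based
    return total / len(relevant)

# Hypothetical retriever output: three queries, top-3 doc IDs each.
retrieved = [["d1", "d7", "d3"], ["d9", "d2", "d4"], ["d5", "d6", "d8"]]
gold = ["d3", "d2", "d0"]  # the known-relevant doc per query

print(hit_rate(retrieved, gold, k=3))         # 2 of 3 queries hit -> 0.666...
print(mean_reciprocal_rank(retrieved, gold))  # (1/3 + 1/2 + 0) / 3 -> 0.277...
```

Tracking metrics like these on a fixed query set lets you compare retriever configurations (chunk size, embedding model, reranking) before worrying about the generation step.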
Related
- Blog, 8 min, Yesha Shastri: Attention Mechanism in LLMs: An Intuitive Explanation. Learn how the attention mechanism works and how it revolutionized natural language processing (NLP).
- Tutorial, 11 min, Iván Palomares Carrascosa: Boost LLM Accuracy with Retrieval Augmented Generation (RAG) and Reranking. Discover the strengths of LLMs combined with effective information retrieval mechanisms, then implement a reranking approach and incorporate it into your own LLM pipeline.
- Tutorial, 15 min, Andrea Valenzuela: LLM Classification: How to Select the Best LLM for Your Application. Discover the family of available LLMs and the elements to consider when evaluating which LLM is best for your use case.
- Tutorial, 12 min, Bex Tuychiev: An Introduction to Debugging and Testing LLMs in LangSmith. Discover how LangSmith optimizes LLM testing and debugging for AI applications, enhancing quality assurance and streamlining development with real-world examples.
- Code-along, Dan Becker: Retrieval Augmented Generation with LlamaIndex. In this session you'll learn how to get started with Chroma and perform Q&A on documents using Llama 2, the RAG technique, and LlamaIndex.
- Code-along, Vincent Vankrunkelsven: Retrieval Augmented Generation with the OpenAI API & Pinecone. Build a movie recommender system using GPT and learn key techniques to minimize hallucinations and ensure factual answers.