07-10, 13:00–14:00 (Europe/Prague), Main Hall A
Retrieval-Augmented Generation (RAG) is an excellent approach to overcoming the limitations of Large Language Models (LLMs), such as hallucinations or the recency cutoff of their training data. However, relying solely on RAG is insufficient, particularly when dealing with domain-specific data or verifying that a response is adequate. Neglecting these scenarios can cost time, money, and customer satisfaction. That’s why, as you develop an application, it's crucial to evaluate your retrieval process, improve it with advanced techniques where necessary, consider edge cases such as out-of-domain queries, and implement fallback mechanisms. This ensures that your system is both resilient and flexible.
This poster will explain some of the problems you may encounter in real life and the steps to take to build reliable, resilient RAG applications that you can safely use in production with Haystack, an open source LLM framework.
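As a rough illustration of the kind of application the poster discusses, here is a minimal sketch, assuming Haystack 2.x's component API: a basic RAG pipeline plus a retrieval check that routes out-of-domain queries to a fallback answer. The sample documents, prompt template, and score threshold are illustrative assumptions, not the poster's actual code.

```python
# A minimal sketch (not the poster's exact code): a RAG pipeline with a simple
# out-of-domain fallback, assuming the Haystack 2.x component API. The sample
# documents, prompt template, and score threshold are illustrative assumptions.
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a couple of domain documents into an in-memory store.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="Haystack is an open source framework for building LLM applications."),
    Document(content="RAG grounds LLM answers in retrieved documents to reduce hallucinations."),
])

template = """Answer the question using only the context below.
If the context does not contain the answer, say you don't know.

Context:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}

Question: {{ query }}
Answer:"""

# Standard retriever -> prompt builder -> generator pipeline.
rag = Pipeline()
rag.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
rag.add_component("prompt_builder", PromptBuilder(template=template))
rag.add_component("llm", OpenAIGenerator())  # reads OPENAI_API_KEY from the environment
rag.connect("retriever.documents", "prompt_builder.documents")
rag.connect("prompt_builder.prompt", "llm.prompt")


def answer(query: str, score_threshold: float = 0.5) -> str:
    # Check retrieval first: if nothing scores above the (illustrative) threshold,
    # treat the query as out-of-domain and fall back instead of letting the LLM guess.
    check = InMemoryBM25Retriever(document_store=document_store).run(query=query)
    if not any((doc.score or 0.0) >= score_threshold for doc in check["documents"]):
        return "Sorry, I can only answer questions about the indexed documents."
    result = rag.run({"retriever": {"query": query}, "prompt_builder": {"query": query}})
    return result["llm"]["replies"][0]


print(answer("What does RAG help with?"))       # answered from the documents
print(answer("What's the weather in Prague?"))  # likely triggers the fallback; tune the threshold for your data
```

Checking retrieval quality before generation is only one possible fallback strategy; the same idea can also be implemented with a routing component or a post-generation check on the response.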
Intermediate
Bilge is a Developer Relations Engineer at deepset, working with Haystack, an open source LLM framework. With over two years of experience as a Software Engineer, she developed a strong interest in NLP and pursued a master's degree in Artificial Intelligence at KU Leuven with a focus on NLP. Now, she enjoys working with Haystack, writing blog posts and tutorials, and helping the community build LLM applications. ✨ 🥑