Jodie Burchell

Dr. Jodie Burchell is the Developer Advocate in Data Science at JetBrains, and was previously a Lead Data Scientist at Verve Group Europe. After completing a PhD in Psychology and a postdoc in biostatistics, she worked in a range of data science and machine learning roles spanning natural language processing, search improvement, recommendation systems, and programmatic advertising. She is passionate about making Python data science and machine learning accessible to others, and is a longtime content creator in data science across conference and user group presentations, books, webinars, and blogging.


Sessions

07-09
09:30, 11:15, 13:45, and 15:30
90min
Humble Data
Jodie Burchell

Are you a complete beginner to coding who would love to learn how to get started? Have you been curious about data science but felt overwhelmed by all the talk of AI? Many people working in data science were once in the same position and know how hard it is to take those first steps.

Club C
07-11
11:55
30min
Lies, damned lies and large language models
Jodie Burchell

Would you like to use large language models (LLMs) in your own projects, but are you troubled by their tendency to frequently “hallucinate”, or produce incorrect information? Have you ever wondered whether there is a way to easily measure an LLM’s hallucination rate and compare it against other models? And would you like to learn how to help LLMs produce more accurate information?

In this talk, we’ll look at some of the main reasons that hallucinations occur in LLMs, and then focus on how we can measure one specific type of hallucination: the tendency of models to regurgitate misinformation learned from their training data. We’ll explore how to measure this type of hallucination easily using the TruthfulQA dataset, together with Python tooling including Hugging Face’s datasets and transformers packages and the langchain package.
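
As a taste of the tooling mentioned above, here is a minimal sketch of loading TruthfulQA with Hugging Face's datasets package and probing a model with transformers. The model choice, prompt format, and printed fields are illustrative assumptions, not the talk's exact evaluation setup.

# A minimal sketch: probe a model with one TruthfulQA question.
# The model and prompt format are illustrative, not the talk's setup.
from datasets import load_dataset
from transformers import pipeline

# TruthfulQA's "generation" config pairs questions with reference answers.
truthfulqa = load_dataset("truthful_qa", "generation", split="validation")

# Any text-generation model works here; gpt2 is just small enough to run locally.
generator = pipeline("text-generation", model="gpt2")

sample = truthfulqa[0]
prompt = f"Q: {sample['question']}\nA:"
completion = generator(prompt, max_new_tokens=50)[0]["generated_text"]

print("Question:", sample["question"])
print("Model answer:", completion[len(prompt):].strip())
print("Reference answer:", sample["best_answer"])
print("Known falsehoods:", sample["incorrect_answers"][:2])

Comparing the model's answer against the dataset's reference and incorrect answers is the basic building block for estimating a hallucination rate over the full validation split.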

We’ll end by looking at recent initiatives to reduce hallucinations in LLMs using a technique called retrieval augmented generation (RAG): how and why RAG makes LLMs less likely to hallucinate, and how this can make these models more reliable and usable in a range of contexts.
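
As a rough, library-free illustration of the idea behind RAG, the sketch below retrieves the passage most relevant to a question and prepends it to the prompt; the word-overlap retriever and the tiny document store are invented stand-ins for the vector search and knowledge base a real system would use.

# A deliberately naive sketch of retrieval augmented generation (RAG):
# retrieve reference text relevant to the question and prepend it to the
# prompt, so the model answers from evidence rather than memorised training data.

documents = [
    "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
    "The Great Wall of China is not visible to the naked eye from space.",
    "Honey stored in sealed containers does not spoil.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (real systems use embeddings)."""
    query_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context, instructing the model to stick to it."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

print(build_rag_prompt("Can you see the Great Wall of China from space?", documents))

Because the answer is grounded in retrieved text rather than the model's parametric memory, the model has far less room to repeat misinformation it absorbed during training.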

PyData: LLMs
Forum Hall