- Speakers
Chelsea Troy
- Description
LLMs produce more accurate and useful answers when engineers practice judicious context management. But nothing about the tools teaches or encourages us to do that. As a result, it's easy to see things like:
- conversations starting with a mismatch in shared language
- loss of important details as conversations continue
- the goals of different LLM conversations getting mixed up
- complete failure, or only partial success, at communicating the desire to change course
As it turns out, the intuition behind bounded contexts can help us arrive at LLM usage practices that both improve our results and guide us toward the sorts of situations in which we'll find LLM tools most useful.
- About Chelsea Troy
MLOps at Mozilla; Python and Pedagogy at UChicago; Software Maintenance and AI at O'Reilly.