Large language models are not a paradigm shift
Presented at Berlin Buzzwords (Berlin, Germany)
Hundreds of millions of people have used generative large language models (LLMs) to entertain themselves and improve their productivity. Sophisticated enterprises have scrambled to quickly put LLM initiatives and applications in place. It’s hard not to wonder whether LLMs are finally the technology that will make AI “real” to a broad audience after years of steady progress in machine learning (ML) applications. This talk argues that the emergence of LLMs does not represent the paradigm shift it might appear to. As we’ll see, many of the putatively novel challenges that LLM applications present have strong analogies to challenges faced by conventional ML systems. We’ll discuss the differences between LLM applications and ML applications and see that many of them are important but not fundamental. We’ll examine the challenges inherent in getting the best out of LLMs, learn how AI systems can go wrong and how to build them responsibly, and consider why the aspects of LLMs that are truly new give us reason to believe at least some of the hype after all.