Mathematical Computer Science Seminar
Gyorgy Turan
UIC
On the brittleness of large language models: A journey around set membership
Abstract: Large language models (LLMs) show impressive performance on hard tasks, but also exhibit brittleness on simple ones. We describe an experiment on a basic "sub-reasoning" task: deciding whether an element belongs to a set. The results give a comprehensive picture of the various types of errors that can occur.
In the second part of the talk we give a brief overview of the mathematical challenges posed by the goal of understanding how a neural network works, including understanding what an LLM "knows".
Joint work with Gabor Berend, Lea Hergert, Mark Jelasity and Mario Szegedy.
Monday December 1, 2025 at 3:00 PM in 1227 SEO