The successes of generative AI and large language models rest both on powerful observable behavior and on the deep internal representations of the world that these systems construct for their own use. How do these internal representations work, and to what extent are they similar to or different from the representations of the world that we build as humans? In this talk, Jon Kleinberg explores these questions through the lens of generative AI, drawing on examples from game-playing, geographic navigation, and other complex tasks.
In this episode of our Polylogues web series, Simons Institute Founding Associate Director Alistair Sinclair interviews newly appointed Institute Director Venkatesan Guruswami. Their wide-ranging conversation touches on the Institute’s mission and strategy, prospects for the field in the years to come, and engagement with the global research community as well as the broader public.
Happy New Year from Berkeley, where the magnolias are already in bud and we have just welcomed the participants in our Spring 2026 research program on Federated and Collaborative Learning. In addition to the periodic workshops associated with the program, we have upcoming workshops on various topics at the nexus of theoretical computer science and machine learning, spanning the deployment of ML models in social systems and healthcare, deep learning theory, and the impact of techniques developed in learning theory on the theory of computing.
Large language models (LLMs) gain their encyclopedic knowledge and conversational tact by learning from an entire internet’s worth of human-generated text. But learning from language alone has shown diminishing returns. While LLMs have proved themselves to be masters of producing fluent language, their capabilities in other cognitive skills, like logic and reasoning, have lagged behind. At the Simons Institute workshop on LLMs, Cognitive Science, Linguistics, and Neuroscience last year, neuroscientists, linguists, and computer scientists came together to explore why this is the case — and how a different model, the human brain, could point the way forward.
Fast matrix multiplication is a central problem in algorithms research. The goal is to find the smallest real number ω such that n × n matrices can be multiplied in n^{ω + o(1)} time in the worst case; the current best bound is ω < 2.37134. In this Richard M. Karp Distinguished Lecture in the Complexity and Linear Algebra program, Virginia Vassilevska Williams examines progress on matrix multiplication algorithms over the decades and offers some intuition about where the research area may be headed.
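As a concrete illustration (a minimal sketch of our own, not code from the lecture), the snippet below implements Strassen's 1969 algorithm, the first improvement over the naive cubic bound: it replaces the eight half-size block multiplications of the straightforward recursion with seven, yielding a running time of O(n^{log₂ 7}) ≈ O(n^2.807). The function name, cutoff, and power-of-two restriction are choices of this sketch.

```python
# A minimal sketch of Strassen's recursion (illustration only).
# Assumes square matrices whose dimension is a power of two.
import numpy as np

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) via Strassen's recursion."""
    n = A.shape[0]
    if n <= 64:                 # cutoff: below this, the naive product is faster
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the eight a naive blocking would use
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 256))
    B = rng.standard_normal((256, 256))
    assert np.allclose(strassen(A, B), A @ B)
```

The modern bounds on ω come from far more intricate tensor techniques, such as the laser method applied to powers of the Coppersmith–Winograd tensor, rather than from refinements of this recursion.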
Some types of virtualization, such as virtual memory, are implemented by providing a layer of indirection between what the program sees and what the system implements. This layer of indirection is typically ignored in theoretical analysis but has a real (and, in some cases, increasing) impact on system performance. In this Richard M. Karp Distinguished Lecture in the Algorithmic Foundations for Emerging Computing Technologies program, Martín Farach-Colton covers a variety of cases where the cost of indirection becomes significant, including new architectures such as hardware accelerators and shared memory.
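To make the cost concrete, here is a toy Python sketch (our illustration, not code from the talk) of a page-table-style translation layer: every access through the virtual address space pays one extra lookup before reaching the underlying storage. All names and parameters here are hypothetical.

```python
# Toy model of indirection cost: a "virtual" read goes through a
# translation table before touching the backing array (illustration only).
class IndirectMemory:
    """Simulates a virtual address space backed by a page table."""

    def __init__(self, size, page_size=16):
        self.page_size = page_size
        self.physical = [0] * size
        # Identity mapping here; a real system remaps pages freely.
        self.page_table = list(range(size // page_size))
        self.translations = 0   # counts the hidden cost of indirection

    def read(self, vaddr):
        page, offset = divmod(vaddr, self.page_size)
        self.translations += 1            # one extra lookup per access
        frame = self.page_table[page]     # the layer of indirection
        return self.physical[frame * self.page_size + offset]

mem = IndirectMemory(256)
for addr in range(256):
    mem.read(addr)
print(f"{mem.translations} translations for 256 reads")  # 256 hidden lookups
```

Real hardware hides much of this cost with caches such as TLBs, but these extra lookups are exactly the kind of per-access overhead that, as the abstract notes, can become significant in practice.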
In the span of four decades, quantum computation has evolved from an intellectual curiosity to a potentially realizable technology. Nevertheless, the path toward a full-stack scalable technology is a work in progress. In this talk from our fourth annual Quantum Industry Day, newly minted Nobel laureate John Martinis shows how the road to scaling could be paved by adopting existing semiconductor technology to build much higher-quality qubits and employing system engineering approaches.
The Simons Institute has received $250K in support from the Google DeepMind x Google.org AI for Math Initiative, which was launched in late October. The Simons Institute will be a member of the newly created consortium, along with Imperial College London, the Institute for Advanced Study, Institut des Hautes Études Scientifiques (IHES), and the Tata Institute of Fundamental Research (TIFR).
Season’s greetings from Berkeley, where we have donned our light jackets and the campus squirrels are newly plump for winter. We just concluded a fantastic semester with two highly energetic programs, on Complexity and Linear Algebra and on Algorithmic Foundations for Emerging Computing Technologies. It was a delight to see Calvin Lab bustling, with collaborations filling its open spaces.