News

News archive

Greetings from Berkeley. With the holidays approaching, the final workshops of each of the fall semester programs are upon us, with the workshop on Logic and Algebra for Query Evaluation taking place this week, and a workshop on Optimization and Algorithm Design scheduled for the week after Thanksgiving.   

In his presentation in our Theoretically Speaking public lecture series, Leonardo de Moura (AWS) described the Lean proof assistant's contributions to mathematics: its extensive library, comprising over a million lines of formalized mathematics; its pivotal role in cutting-edge mathematical endeavors such as the Liquid Tensor Experiment; its impact on mathematical education; and its role in AI for mathematics.

In the opening talk from the Simons Institute’s recent workshop on Online and Matching-Based Market Design, Paul Milgrom (Stanford) introduced fast approximation algorithms for the knapsack problem that have no confirming negative externalities, and guarantee close to 100% efficiency for both allocation and investment.
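For readers unfamiliar with knapsack approximation, the following is a minimal sketch of the textbook greedy 1/2-approximation for the 0/1 knapsack problem — a standard baseline, not the specific near-optimal mechanisms from Milgrom's talk; the function and variable names are illustrative.

```python
def greedy_knapsack(items, capacity):
    """Classic 1/2-approximation for 0/1 knapsack.

    items: list of (weight, value) pairs; capacity: knapsack capacity.
    Sort by value density, pack greedily, then return the better of the
    greedy pack and the single most valuable item that fits -- this max
    is guaranteed to be at least half the optimal value.
    """
    by_density = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    total_w = total_v = 0
    for w, v in by_density:
        if total_w + w <= capacity:
            total_w += w
            total_v += v
    best_single = max((v for w, v in items if w <= capacity), default=0)
    return max(total_v, best_single)
```

For example, `greedy_knapsack([(4, 10), (3, 9), (5, 11)], 7)` packs the two densest items for a value of 19, which here happens to be optimal.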

In her Richard M. Karp Distinguished Lecture, Monika Henzinger (Institute of Science and Technology Austria) surveyed the state of the art in dynamic graph algorithms, the different algorithmic techniques developed for them, and the major open questions that remain in the field.

The Simons Institute for the Theory of Computing has received a $300,000 grant from the UC Noyce Initiative to hold a research program on Cryptography in Summer 2025, and a Department of Energy sub-award of $1.2 million in support of the Institute’s Research Pod in Quantum Computing.

The theorem goes by many names, as researchers have discovered and rediscovered it in different contexts. Some call it the experts’ theorem: for example, given two recommendations from two experts to buy two different stocks, the theorem lays out a method to combine the two recommendations that will perform almost as well as the best of these two experts. For this story, we will call it Blackwell’s approachability theorem — to highlight David Blackwell, the UC Berkeley mathematician who proved it and published it in 1956.

Simons Foundation International (SFI) has awarded a $25 million matching pledge to the Simons Institute for the Theory of Computing at the University of California, Berkeley, to build an ongoing stream of philanthropic revenue that will support the mission and research of the Institute.

Dear friends,

Greetings from Berkeley. At the Simons Institute, we are halfway through a vibrant semester of research and discovery. At the same time, like many of you, I am troubled and deeply saddened this week by the news of the enormous suffering and loss of life in Israel and Gaza. The series of devastating earthquakes in Afghanistan is also heart-wrenching. I’m pleased to share news from the Institute with you, but must also acknowledge that all this is happening amidst a lot of turmoil elsewhere in the world.

Machine Learning & Data Science, Natural & Social Sciences

Is ingesting in-copyright works posted on the open internet as training data for building large language models copyright infringement or not? The stakes of this issue's resolution, both for this nascent industry and for researchers, could not be greater. Presented by Pamela Samuelson (Berkeley Law) as part of the Simons Institute’s workshop on Large Language Models and Transformers.

Machine Learning & Data Science

On the first day of the workshop on Large Language Models and Transformers, Alexei Efros (UC Berkeley) moderated a panel that addressed a range of topics, including the future of LLMs, memorization vs. generalization, and novelty and creativity. Featuring Sanjeev Arora (Princeton University), Chris Manning (Stanford), Yejin Choi (University of Washington), Ilya Sutskever (OpenAI), and Yin Tat Lee (University of Washington and Microsoft Research).