About

Transformers have now been scaled to vast amounts of static data. This approach has been so successful that it has forced the research community to ask, "What's next?" This workshop will bring together researchers thinking about questions related to the future of language models beyond the current standard model. The workshop is meant to be exploratory and welcoming of novel directions from which new setups may arise, e.g., data efficiency, training paradigms, and architectures.

If you require special accommodation, please contact our access coordinator at simonsevents@berkeley.edu with as much advance notice as possible.

Please note: the Simons Institute regularly captures photos and video of activity around the Institute for use in videos, publications, and promotional materials. 

Chairs/Organizers
Invited Participants

Sanjeev Arora (Princeton University), Kianté Brantley (Harvard University), Danqi Chen (Princeton University), Grigorios Chrysos (University of Wisconsin-Madison), Gintare Karolina Dziugaite (Google DeepMind), Zaid Harchaoui (University of Washington), Elad Hazan (Princeton University), He He (New York University), Andrew Ilyas (Stanford University), Yoon Kim (Massachusetts Institute of Technology), Aviral Kumar (Carnegie Mellon University), Jason Lee (Princeton University), Sewon Min (UC Berkeley), Azalia Mirhoseini (Stanford / DeepMind), Nanyun (Violet) Peng (UCLA), Daniela Rus (MIT), Sasha Rush (Cornell University), Kilian Weinberger (Cornell University), Luke Zettlemoyer (University of Washington), Denny Zhou (Google DeepMind)

Register

Registration is required for in-person attendance, access to the livestream, and early access to the recording. Space may be limited, and you are advised to register early. 

For additional information, please visit: https://simons.berkeley.edu/participating-workshop.
