About

Transformers have now been scaled to vast amounts of static data. This approach has been so successful that it has forced the research community to ask, "What's next?" This workshop will bring together researchers thinking about questions related to the future of language models beyond the current standard model. The workshop is meant to be exploratory and welcoming of novel directions from which new setups may arise, e.g., data efficiency, training paradigms, and architectures.

Register

Registration is required for in-person attendance, access to the livestream, and early access to the recording. Space may be limited, so you are advised to register early.

For additional information, please visit: https://simons.berkeley.edu/participating-workshop.

Please note: the Simons Institute regularly captures photos and video of activity around the Institute for use in videos, publications, and promotional materials.

Register Now