Abstract
For the first time in history, systems other than the human brain can process speech and language, extract meaningful symbolic structure, and produce complex, appropriate responses. I will present studies from my lab that use these large speech and language models to generate algorithmic hypotheses about the biological implementation of language understanding. The work draws on neural time-series data across spatial scales: from population ensembles recorded with MEG and intracranial EEG, to the encoding of speech properties in individual neurons across the cortical depth recorded with Neuropixels probes in humans. The results provide insight into which representations and operations bridge sound and meaning in biological and artificial systems, including how information at different timescales is nested, in time and in space, to allow exchange across hierarchical structures. Together, the findings mark a new era of scientific inquiry into system-level implementations of human language.