Abstract

Today’s large language models (LLMs) routinely generate coherent, grammatical, and seemingly meaningful paragraphs of text. This achievement has led to speculation that LLMs have become “thinking machines”, capable of performing tasks that require reasoning or world knowledge. In this talk, I will discuss how easy it is to conflate language and thinking, both in humans and in machines. To address this conflation, I will introduce a distinction between formal competence (knowledge of linguistic rules and patterns) and functional competence (understanding and using language in the world). This distinction is grounded in human neuroscience, which shows that formal and functional competence recruit different brain mechanisms. I will then discuss how researchers can leverage behavioral and neuroscience approaches from the study of human intelligence to carefully examine and dissociate distinct capabilities in AI systems, and, in turn, how advances in AI can contribute to our understanding of language and cognition in humans.