Cognitive Science Colloquium - Ellie Pavlick

Date
Oct 24, 2024, 12:00 pm – 1:00 pm

Event Description
"Not-Your-Mother's Connectionism: LLMs as Cognitive Models"

Abstract: This is an exploratory talk meant to prompt discussion! I will argue that, despite the many ways that LLMs are not "human-like", they nonetheless should be taken seriously as models of human cognition. I will discuss two recent projects, in collaboration with cognitive scientists and neuroscientists, which add nuance to the view that LLMs are simply bigger versions of the neural networks of the 1980s. First, I will argue that the emergent "in-context learning" ability of LLMs enables them to explain human behaviors that have traditionally been out of reach of neural networks, such as compositional generalization and flexible reasoning. Then, I will present data showing that, compared to computational cognitive science models based on the probabilistic language of thought (pLoT), LLMs are in fact a better fit for human data on a concept learning task. Together, I argue that these results foreshadow how AI might yield new theoretical insights about the human brain and mind.
Bio: Ellie Pavlick is an Associate Professor of Computer Science and Linguistics at Brown University and a Research Scientist at Google. She received her PhD from the University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Her work focuses on computational models of language (currently, primarily LLMs) and their connections to the study of language and cognition more broadly. Ellie leads the Language Understanding and Representation (LUNAR) Lab, which collaborates with Brown's Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.