Interview with Professor Uri Hasson from the Department of Psychology at Princeton University
The fascination with the brain has endured throughout human history. From the ancient Egyptians to the present day, we have asked how it works and how it processes language. According to Uri Hasson, professor in the Department of Psychology at Princeton University, “we have big theories, but evidence too small to support them”. Hasson is working to overcome this limitation and to validate modern theories of language learning and processing. We had a long and engaging conversation with him during Festival della Scienza, hosted in the pressroom on a late October afternoon. Hasson stands at 1.90 metres (six foot three), a physical presence that complements the reasoning he shared with us. In answering our numerous questions, he displayed the determination of someone confident in the strength of his ideas. Over the course of our 60-minute interview, he explained how AI is transforming the paradigms of neuroscience.
The first point of discussion arose from recent developments in artificial intelligence: his research is evolving alongside large language models (LLMs), tools capable of conversing in ways we perceive as natural. This perception of “naturalness” has given rise to a research question: do language models share anything with how the human brain functions, or are they technologies whose mechanisms bear no relation to our cognitive abilities? The answer matters profoundly to neuroscience, as it determines whether LLMs can serve as valid models for studying language processing. “It’s rather like considering an analogue clock and a digital clock: both tell the time, yet they operate in different ways. Observing what a system does isn’t enough; we must understand how it achieves the expected result,” he explains. In the case of LLMs, the mere fact that they speak like human beings doesn’t mean they’re machines with human minds: we need to analyse the internal mechanisms that produce externally visible behaviours.
According to Hasson, who is studying correlations between the brain and language models, the similarity stands at roughly 50%, with the other half lying in the techniques of language production: “LLMs pass the Turing test, they speak like us, they convince us they’re too intelligent to ignore. On the other hand, they’re stupid as well: they make mistakes, they have no consciousness, and they don’t know what they’re talking about. They’re statistical models: I’m not sure they are thinking.” Another difference he considers crucial lies in how programmers train their LLMs, relying on texts available online and repeating the learning procedure multiple times. “We want language models to be superhuman. You want them to know everything about science, history, politics, mental health, food. You can never find a person who knows everything about everything, because no one can read all the books in the world.”
The human mind, indeed, doesn’t learn language that way. A baby is born unable to move, speak, or even sit up, yet within three years is conversing with everyone, peers and adults alike. “It’s the magic of language,” Hasson emphasises, “and the question is: how do they do it? We’ve debated it for 2,000 years: nature versus nurture. Are you born with the knowledge of language, or do you learn it from the input from the environment?” Hasson and his team are gathering data to contribute to this debate through a project that monitored the first two years of life of 15 infants, using cameras and microphones installed in their homes. Thousands of hours of recordings allow them to analyse how each child interacts with their environment and learns to speak. These data will form the substrate for building artificial intelligence models to test whether input alone is enough for learning language, or whether innate knowledge is required. He tells us that someone in the past “recorded his baby for two years, but it was before machine learning, and the data were too big. So he asked people to look into it. It was impossible; no one knew what to do with it. Now that we have deep learning, I say: okay, now I know what to do with the data.” The smile that accompanied the entire discussion suggests they’re preparing to deliver some intriguing answers.
The future of LLMs and language processing research
Looking at the history of machine learning and neuroscience, Hasson believes that our understanding of how the human brain works has grown in tandem with computing technologies. First formulated in the 1940s by neurophysiologist Warren Sturgis McCulloch, the earliest mathematical models inspired by biological neurons were ahead of their time, but their operation was too constrained by the computational capacity of computers and the scarcity of available data. During that period, neuroscience was advancing the notion that the brain predicts events based on what has preceded them, the so-called predictive coding.
With the advent of the internet and increasingly fast, miniaturised processors, those models began to work. The emergence of autoregressive LLMs, capable of predicting the next word based on text already written, with behaviour similar to that of humans, has confirmed the theory of predictive coding. This co-evolution of neuroscience and artificial intelligence is deepening our understanding of human brain behaviour. LLMs don’t know what a verb or noun is; they use mathematics to evaluate how similar two words are in meaning and how similarly they function within a sentence. It’s a process called embedding, and the human brain appears to employ this same strategy for processing language.
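The embedding idea described above can be sketched in a few lines of Python. Words become vectors of numbers, and “how similar two words are in meaning” becomes a geometric question: the cosine of the angle between their vectors. The three-dimensional vectors below are invented purely for illustration; real language models learn vectors with hundreds or thousands of dimensions from vast text corpora.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means
    the vectors point in nearly the same direction (similar meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings, invented for this sketch.
embeddings = {
    "cat": [0.9, 0.1, 0.3],
    "dog": [0.8, 0.2, 0.35],
    "car": [0.1, 0.9, 0.6],
}

# Related words end up with a high similarity score...
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high
# ...while unrelated words score much lower.
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # lower
```

In a trained model, these scores emerge from statistics over text rather than hand-picked numbers, which is exactly why an LLM can treat “cat” and “dog” as similar without ever knowing what a noun is.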
All this knowledge is increasing the degree of similarity between LLMs and the human brain. “We still don’t know where the glass ceiling is. How far can we push this model until it gets stuck? Will it stop at 75%, or will we get to 100%? We don’t know. It’s an open question; no one knows. If you talk with people in the industry, some of them are very confident they can get to 100%. I’m trying to be more careful, because we just don’t know right now.”
New prospects are opening up for neuroscience, as research can leave the laboratory, where environments and stimuli are controlled, and study the brain in real life, where unexpected situations may arise that cannot be replicated identically. In this direction, Hasson is collaborating with IIT’s Genetics of Cognition unit, led by Francesco Papaleo, to push the current boundaries of research towards studying complex social behaviours within the environments where they emerge and evolve. According to Papaleo, “even simply discussing things with Hasson makes me think about matters I hadn’t considered, such as training a computational model with preclinical data from our experiments.”