
Who Ate the Mouse?

Psycholinguists from Potsdam examine how the human brain processes language

We understand what we read and hear without any difficulty, and we do so without ever really thinking about the remarkable performance of our brain. The aim of psycholinguistics is to examine the unconscious processes that underlie our understanding of language. Shravan Vasishth’s group develops computational models of parsing based on experimentally collected data and compares the results of computer simulations with the results of experiments.

Let us start with a little experiment. Please read the following sentence: “The cat ate the mouse.” You can understand the content of the sentence without being aware of the principles that our brain relies on to assemble the words into a sentence. Now consider: “The cat that the dog chased ate the mouse.” You can understand this sentence, too, without much difficulty. But what about the following example: “The cat that the dog that the boy called out to chased ate the mouse.” Most English speakers have considerable difficulty comprehending who did what to whom. Some may even consider the sentence ungrammatical. In fact it is grammatically correct, but our brain cannot immediately find the subject of each verb: we are unable to keep track of who called out to whom, who chased whom, and who ate what. One might think that this sentence is harder to understand than the other two simply because it is much longer. However, rephrasing shows that the problem does not lie in its length: “The boy called out to the dog that chased the cat that ate the mouse.” This sentence has exactly the same meaning and the same length as the difficult one above, yet it is much easier to understand. A theory of human language processing should explain this and many other phenomena.
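One way to see why the center-embedded sentence overloads the reader is to count how many subjects must be held in memory at once before their verbs arrive. The toy sketch below does this bookkeeping with a simple counter; it is only an illustration of the memory-load intuition, not the research group’s actual model, and the token tagging is hand-made for these two example sentences.

```python
# Toy illustration: each subject noun read before its verb must be held
# in memory until the verb arrives. The maximum number of pending
# subjects is a rough proxy for processing difficulty.
# (Hypothetical sketch, not the Lewis & Vasishth model.)

def max_pending_subjects(tokens):
    pending = 0   # subjects still waiting for their verb
    peak = 0      # most subjects held at any one time
    for word, tag in tokens:
        if tag == "N":        # a subject arrives and must wait
            pending += 1
            peak = max(peak, pending)
        elif tag == "V":      # a verb resolves the most recent subject
            pending -= 1
    return peak

# "The cat that the dog that the boy called out to chased ate the mouse."
center_embedded = [("cat", "N"), ("dog", "N"), ("boy", "N"),
                   ("called out to", "V"), ("chased", "V"), ("ate", "V")]

# "The boy called out to the dog that chased the cat that ate the mouse."
right_branching = [("boy", "N"), ("called out to", "V"),
                   ("dog", "N"), ("chased", "V"),
                   ("cat", "N"), ("ate", "V")]

print(max_pending_subjects(center_embedded))   # 3 subjects held at once
print(max_pending_subjects(right_branching))   # never more than 1
```

The rephrased sentence conveys the same facts, but each subject meets its verb immediately, so memory load never exceeds one pending subject.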

Psycholinguist Shravan Vasishth and his group work on these phenomena. In experiments, the scientists try to understand the mechanisms that our brain uses to unlock the meaning of words and sentences. They simulate these processes in computer models and then test the models on people to find out whether the participants respond the way the theory predicts.

The research scientists do not restrict their work to the German language but analyse comparable phenomena in many other languages. Potsdam’s proximity to multicultural Berlin turns out to be an advantage. Vasishth’s postdoctoral researcher Titus von der Malsburg found Spanish-speaking participants there, and Rukshin Shaher could work with native English speakers.

The research group is a bit like the Tower of Babel: the PhD students and postdocs have come to Potsdam from various parts of the world, and many speak several languages themselves. This makes it much easier for them to collect data in their respective language communities, which they do in cooperation with university institutions around the globe. Pavel Logačev has worked with Hindi-speaking participants in Allahabad, India. Lena Jäger has worked on Mandarin Chinese in Beijing and Taipei. Others are doing their research in Great Britain and in Argentina.

We cannot directly observe the processes that take place in our brain. Moreover, these processes happen extremely fast. To record them in real time, the scientists have to use experimental methods with very high temporal resolution. One of these methods is eyetracking, i.e. measuring where the eyes are directed from moment to moment. Another is electroencephalography (EEG), the recording of the brain’s electrical activity. Eyetracking experiments are based on the assumption that eye movements during reading reflect the cognitive processes of language comprehension. If the eyes linger on a certain word, this suggests difficulty in integrating that word into the sentence structure.

During an EEG experiment, the participant wears a cap with many electrodes, which register the brain’s electrical activity while reading. A classic experiment uses a pair of sentences like this: “Peter drinks his coffee with milk.” and “Peter drinks his coffee with salt.” As soon as the participant reads the word “salt,” which is inappropriate in this context, he or she will stop short. When the EEG signals of the two sentences are compared, the response to the surprising word “salt” shows a larger amplitude than the response to “milk,” the word expected in this context.

Pairs of sentences like these, differing in just one detail, are the material for the experiments in the language processing laboratory, where the effects of this tiny manipulation are measured. The results of each test contribute new pieces to the puzzle: an answer to the big question of how the brain understands language.

Another important tool of Shravan Vasishth and his group is a computational model that the professor developed together with his PhD supervisor Richard Lewis. At that time, the prevailing approach in linguistics was influenced by research on artificial intelligence: the mechanisms we use to understand language were described mathematically and simulated on the computer. The Lewis and Vasishth model makes it possible to simulate how we analyse sentence structure during language processing.
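The Lewis and Vasishth model builds on the ACT-R cognitive architecture, in which a word’s memory trace decays over time and retrieval is slower when the trace’s activation is low. The sketch below shows the core idea in simplified form; the parameter values are illustrative defaults, not the published model settings.

```python
import math

# Sketch of the activation-based memory retrieval idea underlying
# ACT-R-style models such as Lewis & Vasishth's (simplified; the
# parameter values d and F here are illustrative, not the published ones).

def base_level_activation(times_since_use, d=0.5):
    # A memory trace decays with time but is boosted by each access:
    # B = ln( sum over past uses of t^(-d) ).
    return math.log(sum(t ** -d for t in times_since_use))

def retrieval_latency(activation, F=0.2):
    # Lower activation -> longer retrieval time: T = F * exp(-A).
    return F * math.exp(-activation)

# A subject last accessed 0.5 s ago is retrieved faster at its verb
# than one last accessed 4 s ago (e.g., across intervening clauses) --
# one way such a model predicts slowdowns in center-embedded sentences.
recent = base_level_activation([0.5])
stale = base_level_activation([4.0])
print(retrieval_latency(recent) < retrieval_latency(stale))  # True
```

In such a model, the “virtual participant” hesitates exactly where a verb forces the retrieval of a subject whose activation has decayed the most.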

The researchers use their experimental data to work out a theory. On this basis, the model then predicts, for instance, at which point in the sentence “The cat that the dog that the boy called out to …” a virtual participant would start to hesitate. Such predictions are then tested against measurements from real participants and against other examples. If the results contradict the predictions, the researchers have to revise their theory. They also keep refining the model itself: PhD student Felix Engelmann, for instance, works on connecting theories of eye movement control with the language processing model, which will allow for more precise predictions.

What is it good for? This typical layperson’s question makes every basic researcher sigh more or less deeply. “For me there are two answers to this question,” says Shravan Vasishth with composure. The first focuses on the practical side of theoretical science: disorders of language processing, such as those affecting people with brain damage after a stroke, can be built into the computer models. Umesh Patil, a member of Vasishth’s team, simulates such disorders, known as aphasia, in order to find the causes of the symptoms. He uses experimental data that his colleague Sandra Hanne collects in her work with aphasia patients. Research projects of this kind may help in developing therapies for such language disorders.

The second answer, the one that actually motivates the researcher, is: “I cannot do otherwise. It is the urge to find this profound truth about nature.”

One day, computational models of language processing might lead to the development of thinking machines, or perhaps to something completely different. For the time being, the psycholinguists aim to contribute to a comprehensive understanding of the information processing in the human brain that enables thinking, learning, and knowledge.

The Scientist

Professor Shravan Vasishth studied Japanese in New Delhi and Osaka, then completed a PhD in linguistics and a Master’s degree in computer and information science at Ohio State University in the USA. He has been a professor at the University of Potsdam since 2004 and has held the Chair of Psycholinguistics and Neurolinguistics since 2008. His central research interest is human sentence comprehension.

Contact

Universität Potsdam
Department für Linguistik
Karl-Liebknecht-Str. 24–25,
14476 Potsdam OT Golm
vasishth@rz.uni-potsdam.de

Author: Sabine Sütterlin, Web Content Editing: Julia Schwaibold, Translator: Voigt
