When you ask e-commerce giant Amazon’s virtual assistant about its intelligence, you will hear, “Yes, I think, therefore I am.” Alexa has learned this from eager employees of the world’s biggest online retailer. But is there any truth in it? How intelligent are the systems being widely referred to as “artificial intelligence”? What actually is artificial intelligence (AI)? What makes it intelligent? And how much is AI changing our lives? For the cover story, Matthias Zimmermann interviewed cognitive scientist Prof. Reinhold Kliegl, educational researcher Prof. Rebecca Lazarides, and computer scientist Prof. Tobias Scheffer.
What actually is artificial intelligence, and what makes it different from human intelligence?
Lazarides: In general, it relates to the development of computer programs or machines that behave in ways we would call intelligent in humans. However, there is no single, clear definition of artificial intelligence but rather many different ones. It is similar with human intelligence – a report issued by the Board of Scientific Affairs of the American Psychological Association describes it as follows: “Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.”
Kliegl: Artificial intelligence is the area within computer science that constituted the cognitive sciences in the 1950s together with experimental psychology and some sub-areas of linguistics. The common goal of this “interdiscipline” was and is a theoretically grounded explanation of genuinely human achievements such as perception, memory, language, thinking, problem-solving, and the control of actions. AI is, thus, cognition implemented in a computer or robot that simulates complex human behavior. These simulations not only have to reproduce proper behavior but also typical human errors if they are to serve as an explanation of human behavior. In the application-oriented engineering context of AI, you don’t want errors, of course. The goal is to build programs that work fast and error-free. These conflicting goals mean that AI and human intelligence research are very different. What they have in common is that they use well-defined maximum human performance as a benchmark (e.g., chess, Go, image and speech recognition). The cognitive sciences try to explain these types of performance; AI often takes such explanations as a heuristic starting point but tries to surpass them.
Scheffer: Broadly speaking, human intelligence is considered to be what an intelligence test measures, though, as Ms. Lazarides has already said, there is no real definition. The research field of artificial intelligence deals with a variety of problems whose solution would be considered an intellectual achievement in a human being. Today’s AI systems solve specific tasks, for example playing Go or comparing people in video recordings with their passport photos. A complete artificial intelligence would be a technical system that is at least equal to a human in every intellectual achievement.
Lazarides: In the “Science of Intelligence” Cluster of Excellence at Technische Universität Berlin and Humboldt-Universität zu Berlin, we define intelligent behavior as behavior that is goal-directed, cost-efficient (e.g., in physical or computational costs), and transferable to real-life environments. From an interdisciplinary perspective, we analyze the overarching principles of such behavior. We use our research to better understand the intelligent behavior of humans and to create new intelligent technologies. In my sub-project, I am interested, for example, in how intelligent tutoring systems and learning robots (social learning companions) can be used to support social learning processes in school.
Is that comparison helpful or an obstacle?
Scheffer: With its visionary goal of transferring intelligence to technical systems, artificial intelligence defines itself as an ambitious field of research. This has helped AI attract attention and ambitious young researchers since the 1950s. Bruce Lee reportedly said that a goal is not always meant to be reached; it often serves simply as something to aim at.
Kliegl: We need computer models to understand the dynamics of complex cognitive processes that underlie human intelligence. Technical hardware and software developments in AI provide increasingly better tools for these models. I definitely see advantages in that AI and human intelligence research have common benchmarks.
What is AI able to learn from human intelligence and vice versa?
Lazarides: To answer exactly this question, our cluster uses a synthetic approach. We combine the research of “analytical disciplines” like sociology and educational science with that of “synthetic disciplines” like robotics and computer science. Unlike humans and animals, synthetic artifacts such as robots can easily be manipulated and modified, which enables us to closely observe how behavior changes under such manipulations. Robots, for example, can be programmed to solve tasks very slowly, regardless of the environment, while others can be programmed to do things very quickly. With these robots, we can then test specific teaching-learning techniques and thus find out more about learning processes, which also helps us better understand human learning. On the other hand, we observe behavior in humans that we do not find in AI experiments and then have to expand certain concepts that we use in our work with AI systems.
Scheffer: I think that the ability to learn is the core of intelligence. Today, for example, AI systems use the data we leave behind to learn how to translate texts from one language to another, identify pedestrians and their intentions in traffic, or to assess credit default risks. One of the few AI systems that cannot learn anything from humans anymore is the Go program AlphaGo Zero. While earlier software versions learned from databases of human Go games, the current version learns only from games against itself. Human players are far behind and describe AlphaGo as “supernatural”. The world's top Go player Ke Jie even declared AlphaGo the God of Go.
Kliegl: A weakness of AI programs compared to human intelligence is their specificity. So far, almost all of them have worked only for very narrowly defined applications. Humans, in contrast, are characterized by their ability to generalize and adapt to new situations. This is certainly an area in which AI can learn from humans. An example of how this weakness is currently being overcome was recently published: AlphaZero beats AlphaGo Zero at Go, as well as the best chess program and the best shogi (Japanese chess) program. AlphaZero learns nothing from humans; by combining a very general learning principle with the search algorithms used in the Go program, its Go performance could be transferred to the two chess variants. That general learning principle, reinforcement learning, “rewards” goal-oriented moves.
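The reinforcement-learning principle described here – “rewarding” goal-oriented moves – can be sketched with a toy example. The following Python snippet is an illustrative sketch, not AlphaZero’s actual algorithm; the corridor environment, rewards, and all parameter values are invented for illustration. It uses tabular Q-learning on a five-cell corridor where only reaching the rightmost cell earns a reward:

```python
import random

# Toy environment: cells 0..4 on a line; cell 4 is the goal.
# A reward is earned only on reaching the goal, yet Q-learning
# propagates that reward backwards, so moves toward the goal
# ("goal-oriented moves") end up with higher learned values.
N_STATES = 5
ACTIONS = [+1, -1]            # step right or left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            best_next = max(q[(s2, b)] for b in ACTIONS)
            # Q-learning update: move estimate toward reward + discounted future value
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    for s in range(N_STATES - 1):
        print(s, q[(s, +1)], q[(s, -1)])
```

Over repeated episodes, the single reward at the goal propagates backwards through the value table, so in every cell the move toward the goal acquires a higher value than the move away from it – a miniature version of rewarding goal-oriented behavior.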
What is AI not able to learn from human intelligence and vice versa?
Scheffer: Since the birth of this field of research, skeptics have been searching for a dividing line that will forever separate artificial intelligence from human intelligence. For the most part, the reasoning is that computers are subject to fundamental, theoretical limits of computability – while human brains, it is assumed, are exempt from these mathematical constraints. Human Go players can certainly learn from AlphaGo, but probably not at the speed at which AlphaGo further improves its own abilities.
Kliegl: When we relate this question to a more comprehensive understanding of human intelligence, then I see no way in which subjective experience or consciousness can be plausibly mapped onto artificial intelligence. I can’t really imagine that a computer program that claims to be very happy because it solved a problem feels the same as a human. I also don’t know how we could possibly know that.
Does AI need humans, and do humans need AI? And, if so, what for?
Scheffer: Today, AI needs humans. With each CAPTCHA we solve, we create new training data for image processing models. The Watson AI system, which won the Jeopardy! game show in 2011, learns from books written by humans, and now also from medical publications. On the other hand, humans benefit enormously from AI. The equivalent of a Google search used to be an afternoon in the library. With the help of automatic translations, we can now also understand Chinese texts to some extent.
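Scheffer’s point that human-provided labels are what train such systems – every solved CAPTCHA is, in effect, a labeled example – can be illustrated with a minimal classifier. The following Python sketch trains a simple perceptron on a handful of invented, hand-labeled points; the data, labels, and parameters are purely illustrative assumptions, far removed from real image-recognition models:

```python
# Human-labeled examples: (feature vector, label).
# Think of the labels as the answers people supply when solving
# CAPTCHAs; here the "features" are just invented 2-D points.
data = [
    ((2.0, 1.0), 1), ((1.5, 2.0), 1), ((3.0, 0.5), 1),       # class 1
    ((-1.0, -0.5), 0), ((-2.0, 1.0), 0), ((-1.5, -2.0), 0),  # class 0
]

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights whenever a
    human-provided label disagrees with the current prediction."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # 0 if correct, otherwise +1 or -1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

if __name__ == "__main__":
    w, b = train_perceptron(data)
    print([predict(w, b, x) for x, _ in data])
```

The model knows nothing at the start; everything it ends up “knowing” comes from the human-supplied labels – the same dependence, at toy scale, that large image-processing models have on labeled data.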
Kliegl: The performance of the currently best-known AI programs for voice and image recognition is often based on gigantic databases of human behavior, which are indispensable for training the underlying algorithms. However, as I said, this is no longer the case for chess, shogi, and Go. We use AI products – often unknowingly – in our everyday life. Without AI, we will probably not be able to get to grips with the problems currently facing humanity, some of which we ourselves have generated through technological progress.
Lazarides: I see a mutual relationship between humans and AI, which interests me particularly with regard to research processes. As researchers we benefit a lot from working with AI, for example when we want to find out more about learning processes. AI is very useful for answering questions concerning human learning. On the other hand, by dealing with human learning processes we learn more about effective learning of AI systems. In this respect, we use – and need – it in our everyday life but also in research.
How will AI change our lives – now and in the future?
Lazarides: As a junior professor of school pedagogy, I am particularly interested in this question with regard to educational processes in school, including the question of what significance AI will have for school education in the future. One challenge for educational research is to explore how AI can support human teaching and learning in the classroom. Another is the question of how schools can equip children and adolescents with the skills for a self-determined and responsible use of AI, which also means discussing and reflecting on the related opportunities and challenges.
Scheffer: Artificial intelligence has yet to come close to reaching its potential. Already, AI is part of search engines, voice input, music recommendations, and facial recognition. In the foreseeable future, it will drive vehicles autonomously. In precision medicine, it will replace chemotherapy with better-tolerated, personalized therapies. In precision agriculture, it will help produce healthier food with less energy, water, and pesticide.
Kliegl: These examples show that our lives are already permeated with AI in many different ways and that this is only the beginning. A challenge for the future will be to ensure that AI-based decisions are fair and transparent and to offer ethically responsible options for action. There are coordinated efforts to make AI technologies fruitful for a great many diverse problems facing humankind. The program of the “AI for the Social Good” workshop at the 2018 NeurIPS conference, for example, provided an overview.
How is AI changing your life and your research?
Scheffer: Machine learning has been my main research interest from the very beginning.
Lazarides: In my research, I address the question of how to implement AI in pedagogically meaningful, goal-oriented teaching-learning settings. By examining the role of AI systems in teaching and learning processes, I am also changing my own research, which is becoming more interdisciplinary. In the Cluster of Excellence, for example, I collaborate with researchers from computer science and robotics. It remains important to effectively support students in their learning according to their individual needs and to empirically investigate the related theoretical questions. The question of what explicit benefit AI systems can offer, however, is playing an ever greater role.
Kliegl: Artificial intelligence provides methods that are very important for my research. Without modeling experimental and observational data, I see hardly any way for us to test theories about the dynamics of complex cognitive processes and the behavior they control or by which they are controlled. Take eye-movement control during reading or image viewing as an example of the interplay of perception, knowledge, memory, language, and the programming and execution of eye movements. AI methods are indispensable for understanding how these processes are orchestrated. But it is important that we do not confuse the AI methods we use to test our theories with the theories themselves.
Prof. Tobias Scheffer is Professor for Machine Learning at the University of Potsdam. He was coordinator of the Emmy Noether Junior Research Group at Humboldt-Universität zu Berlin and was head of the Machine Learning working group at the Max Planck Institute for Informatics in Saarbrücken. In a joint project with the Max Planck Institute for Molecular Genetics, he is working on machine learning methods for cancer therapies. Together with Cisco, he is developing learning methods for the detection of computer viruses and attacks on networks. In other projects, he is developing learning methods for on-board diagnostics in cars and modeling credit default risks. He is a member of the Collaborative Research Center “Data Assimilation” at the University of Potsdam.
Prof. Rebecca Lazarides is Junior Professor of School Pedagogy (equivalent to Assistant Professor) with a research focus on learning and instruction at the University of Potsdam. After studying educational science at Freie Universität Berlin, she earned a doctoral degree at Technische Universität Berlin. Her PhD thesis dealt with the role of instruction for student motivation in mathematics. Her research interests include learning and instruction processes, particularly with regard to the classroom dynamics that optimally promote the motivational and affective development of students in secondary school. In this context, Lazarides, who is Principal Investigator of the “Science of Intelligence” Cluster of Excellence at Technische Universität Berlin and Humboldt-Universität zu Berlin, examines the role of robot-based learning companions in increasing classroom motivation.
Prof. Reinhold Kliegl is Professor of Experimental Psychology with a research focus on cognition. After earning his doctorate at the University of Colorado, he worked at the Max Planck Institute for Human Development. Since 1993, he has been working at the University of Potsdam. He focuses on how the dynamics of language-related, perceptual, and oculomotor processes influence reading, spatial attention, and working memory tasks and examines neural correlates and age-related differences in these processes. In the CRC “Limits of Variability in Language”, Kliegl researches whether the limits of syntactic variability can be shifted through training. His current research also focuses on modeling the relationship between cognitive and physical fitness and individual differences in these processes among children and older adults.
Text: Matthias Zimmermann
Translation: Susanne Voigt
Published online: Agnes Bressa
Contact to the online editorial office: email@example.com