updated on 9 April 2021
The benefit of interdisciplinary knowledge exchange: Peter Dayan has been an Alexander von Humboldt Professor for Artificial Intelligence at the University of Tübingen since the beginning of 2020. Here he reports on how he wants to link machine learning with research and clinical psychiatry.
Humboldt Foundation: Some people see artificial intelligence primarily as a revolutionary cutting-edge technology which will massively improve people's lives in all areas. By contrast, artificial intelligence is often depicted as a dangerous force that can spin out of control and that threatens humanity. Which side is right?
Peter Dayan: As ever, both are right. AI is a set of powerful new technologies. Like most such technologies, they can be used for good or bad, and can have unforeseen consequences and knock-on effects that themselves can be good or bad. AI is already revolutionizing our lives – predicting the longer-term effect is, of course, very difficult. It is important to remember that the current collection of models for things like image recognition and language generation and translation has only rather recently crossed the threshold of being unembarrassing, so there is a very long way to go.
In the future, artificial intelligence will be capable of performing many tasks much, much better and much faster than humans can. But will artificial intelligence ever be able to be truly creative?
There’s a popular saying that genius is 1% inspiration and 99% perspiration. It’s not clear that we can’t replace the 1% inspiration with 1% randomness. This is well exemplified by some of the more unusual moves from DeepMind’s Go program, AlphaGo, which human experts have widely celebrated as seeming to come from a future century of play. The program exerts a great deal of computational perspiration – but the inspiration largely comes from the random numbers over which it perspires.
You are currently establishing the new Computational Neuroscience department at the Max Planck Institute in Tübingen. What areas will be the focus of your research there?
I work in three linked areas: computational psychiatry, reinforcement learning and neuromodulation. All these areas concern optimal, approximately optimal and dysfunctional decision making by natural and artificial systems in the face of uncertainty and risk. We investigate the mechanisms that determine the way we learn and think, and the so-called meta-control we exert over these mechanisms.
How do you proceed?
We use a variety of experimental methods, including the administration of reward-based cognitive tasks (currently largely online), and functional magnetic resonance and magnetoencephalographic imaging whilst subjects perform such tasks. From these we are able to derive computational models of the algorithms that people themselves employ. We also use pharmacological methods to manipulate the particular systems (called neuromodulatory systems) in the brain that report on specific aspects of consequential decisions, and, in collaboration, gain access to measurements of the operation of these systems in the human brain during functional neurosurgical operations. I am particularly interested in how the brain reacts to reward and various forms of uncertainty, and how these reactions steer learning processes.
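The link between reward, prediction and learning can be illustrated with a temporal-difference (TD) model from reinforcement learning, in which a "prediction error" signal drives value updates. The sketch below is purely illustrative – a minimal TD(0) rule on a hypothetical two-state task, not the actual models used in the lab:

```python
# Illustrative temporal-difference (TD) learning sketch.
# The prediction error "delta" is the kind of teaching signal
# that reward-based learning models revolve around.

def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V[state] toward reward + gamma * V[next_state]."""
    delta = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * delta
    return delta

# Hypothetical two-state chain: 'A' -> 'B' -> end, with reward 1.0 at the end.
V = {}
for _ in range(200):
    td_update(V, 'A', 0.0, 'B')   # no reward on the first transition
    td_update(V, 'B', 1.0, None)  # terminal transition delivers the reward

# After learning, V['B'] approaches 1.0 and V['A'] approaches gamma * V['B'],
# as reward predictions propagate backwards through the chain.
```

The point of the toy example is that early, neutral states come to predict later reward purely through repeated local error corrections – the kind of mechanism that computational models of reward-driven learning formalize.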
One special aspect of your Humboldt Professorship in Tübingen is the "bridge" between machine learning and psychiatry. How can machine learning help people with mental illness?
Computational psychiatry is a nascent field which uses methods and ideas from neural and artificial decision-making to look at how human choice can break down in psychiatric and neurological disease. It is often pointed out that psychiatry currently lacks what are known as biomarkers – measures that provide definitive diagnoses of separable disorders, delimit appropriate treatments and provide prognoses. The hope is that the aspects of computational psychiatry on which I work will provide alternative or additional coordinates that give structure to the dysfunctions, and thereby help improve diagnoses, prognoses and treatment.
If you were to take a look into a crystal ball, how could the day-to-day work of a doctor in clinical psychiatry look as a result of machine learning?
It is very hard to look far ahead. Of course, I very much hope that the essential connection between patient and psychiatrist remains – but I expect that the doctor would have the possibility of accessing much more granular data about what a patient does – in particular how she navigates her physical and social environment, along with much other multi-modal data. The psychiatrist would then have a far finer grasp on the underlying nature of the problem and higher temporal resolution on the potential recurrence of problems. I would hope that the psychiatrist would thereby be able to specify more personalized treatments, some of which might themselves be delivered or enhanced through AI. Excitingly, the Department of Psychiatry at the University of Tübingen is a wonderfully enthusiastic collaborator in this research endeavour.