Humboldt Foundation: Professor Hoos, you too have called for a six-month moratorium on training artificial intelligence (AI) systems more powerful than the GPT-4 language model. What do you hope to achieve with the delay?
Holger Hoos: The point is to use this break to better understand where the weaknesses in the current systems lie and what regulations we need to minimise risks.
Is six months long enough? The EU has been working for the past two years on its AI Act, which is supposed to regulate the use of AI ...
I expect that the EU will manage to get it finished this year. Sure, it would be better to have more time to debate the consequences of advances like ChatGPT, but it’s unrealistic to hold back development for that long. The important thing is that the call for a moratorium has triggered a public debate on whether we really want to let this kind of technology run wild.
What should we do instead?
Apart from regulation, it’s also about economic power. At the moment, two firms, Microsoft and Google, are more or less exclusively leading the field. Of course, it’s not good when a few large, understandably profit-driven, US-based corporations control key technologies. My hope is that the public sector will become active, as it did with the sequencing of the human genome, and commit to investing in AI, because this field is too important to leave to industry alone! This would also benefit small and medium-sized enterprises, which are otherwise forced to buy the technology from US-based industrial giants. The European Union could and should show global leadership in this area.
What exactly could European engagement look like in concrete terms?
Apart from investing more in research networks, which is already happening on a medium scale, the EU should establish a major, globally visible AI centre. Pretty well-developed plans already exist for such a centre and could be implemented for approximately 10 billion euros. The EU can come up with sums like this, as we saw during the pandemic. I would really hope that politicians understand how important it is to take action swiftly.
Is it really possible to catch up with the technological head start of the US and China?
Yes, it is, even though Europe is really lagging behind at the moment. But when Airbus was set up, most experts thought a new European aircraft company didn’t stand a chance of competing with established companies, especially Boeing. Today, Airbus is not only on a par with Boeing but has emerged as the global market leader in certain crucial areas of aviation. And this is because we invested in the vision. If we do the same thing for AI today, I’m convinced we’ll soon catch up and be able, for instance, to achieve something of great market value with publicly developed, and thus widely available, language models that can compete with ChatGPT and GPT-4. Product safety is another extremely important area.
In what way?
AI techniques are already being used, for example, to make sure that computer chips and software work properly. If we use AI to write computer code, the resulting software must be as error-free as possible. Faulty software creates security risks, which in turn cause economic losses and lead to an IT infrastructure we cannot trust. Ensuring product safety in AI would therefore be an important responsibility that belongs in the hands of public authorities. Using AI for climate protection is yet another important task.
That would not only benefit Europe ...
That’s precisely what we should be striving for. A European centre, let’s call it a CERN for AI, should not just be located in Europe and work for the benefit of Europeans. It should be open to the whole world. A place where a global community can come together to conduct fantastic research for the public good. It would also be a nucleus for a European AI economy, for an AI ecosystem of smaller and larger firms that would benefit from the expertise and talent gathered there.