
AI learning concept in server center. Illustration: Colourbox

Can all AI systems be trained?

Artificial intelligence (AI) is currently transforming many parts of society and the sciences with full force.

By Forskerbloggen
Published 28 Sep. 2022

By Vegard Antun (postdoc, UiO), Matthew J. Colbrook (postdoc, Cambridge) and Anders C. Hansen (Professor, Cambridge/UiO).

One only has to look at the news to see the latest breakthrough, whether it is an AI beating a world-champion player at some game, achieving human-level object recognition, or diagnosing cancer from medical scans. However, there is another side to this story. It is also becoming increasingly apparent that many AI systems are non-robust and unstable to tiny changes in the input data. The AI may even hallucinate and produce nonsensical predictions with high confidence.

A good example of a hallucination can be seen in Figure 1. Here an AI used in the Facebook and NYU fastMRI challenge is trained to reconstruct images from magnetic resonance imaging (MRI) data.

Figure 1: An AI-generated hallucination from the Facebook fastMRI challenge 2020. Left: the original (correct) image. Right: the AI reconstruction, which contains a detail that has nothing to do with the correct image.

However, when the AI is tested on new MRI data, it inserts a false but realistic-looking detail into the reconstructed image. Such hallucinations, as well as other types of instabilities in AI, are now causing serious concern in safety-critical applications such as medical diagnosis and self-driving vehicles. They may also impact potential legal frameworks for the use of AI. Alarmingly, these complications seem to occur even for problems where we know that classical methods produce stable, and thus safe, solutions.

Instability appears to be the Achilles’ heel of modern AI.
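To make concrete what "unstable to tiny changes in the input data" means, here is a minimal toy sketch (our own illustration, not the MRI setup or any method from the article): a small classifier whose prediction flips under a perturbation far smaller than any realistic measurement noise. The weights and inputs are hypothetical and chosen only for the demonstration.

```python
# Toy illustration of instability: a tiny input perturbation flips the output.
# The "trained" weights below are hypothetical, picked only for this demo.
import numpy as np

w = np.array([1.0, -1.0])   # weights of a linear classifier
b = 0.0                     # bias

def predict(x):
    """Return class 1 if w.x + b > 0, otherwise class 0."""
    return int(w @ x + b > 0)

x = np.array([0.500001, 0.5])       # an input just on the class-1 side
delta = np.array([-1e-5, 1e-5])     # a perturbation of size ~1.4e-5

print(predict(x))                   # -> 1
print(predict(x + delta))           # -> 0: the prediction flips
print(np.linalg.norm(delta))        # -> ~1.41e-05, i.e. a tiny change
```

Real AI systems are vastly more complicated, but the same effect, an output that changes drastically under an imperceptible change of the input, is what is meant by instability here.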

The state-of-the-art tool in AI is the neural network (NN), inspired by the links between neurons in the brain. For many decades, mathematicians have studied the properties of NNs and developed so-called "universal approximation theorems". These theorems come in different shapes and sizes, and say that stable problems can be solved stably with a NN. We are therefore led to the following puzzling question: why does AI lead to unstable methods and AI-generated hallucinations, even in scenarios where one can prove that stable and accurate NNs exist?
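For concreteness, one classical universal approximation theorem (in the spirit of Cybenko, Hornik and Leshno et al.; the phrasing below is ours, not taken from the article) can be stated as follows.

```latex
% A classical universal approximation theorem, informally stated.
% Let $f\colon K \to \mathbb{R}$ be continuous on a compact set $K \subset \mathbb{R}^d$,
% and let $\sigma\colon \mathbb{R} \to \mathbb{R}$ be continuous and non-polynomial.
% Then for every $\varepsilon > 0$ there exist $N \in \mathbb{N}$ and parameters
% $a_k, b_k \in \mathbb{R}$, $w_k \in \mathbb{R}^d$ such that the one-hidden-layer network
\[
  \Phi(x) \;=\; \sum_{k=1}^{N} a_k \,\sigma\!\left( w_k \cdot x + b_k \right)
\]
% satisfies
\[
  \sup_{x \in K} \bigl| f(x) - \Phi(x) \bigr| \;<\; \varepsilon .
\]
```

Note that such theorems only assert that a good network exists; they say nothing about whether any training algorithm can actually find it, which is exactly the gap discussed next.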

In our work, together with Matthew J. Colbrook and Anders C. Hansen, we seek to answer this question.

We show that there are problems where stable and accurate NNs exist, yet no algorithm can produce such a network. This impossibility result holds regardless of how much data or computational resources one throws at the problem. Moreover, for some problems one can only train NNs that solve the problem up to a certain accuracy, and this accuracy is impossible to improve upon for the given problem.
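Schematically, the gap has the following flavour (this is our informal paraphrase of the statements above, not the precise theorem from the paper).

```latex
% Informal schematic of the existence-versus-trainability gap (our paraphrase).
% There is a class of problems, described by a mapping $\Xi$ from data $y$ to
% solutions, and an accuracy threshold $\varepsilon_0 > 0$, such that:
%
% (a) for every $\varepsilon > 0$ there exists a neural network $\Phi$ with
\[
  \| \Phi(y) - \Xi(y) \| \;\le\; \varepsilon \quad \text{for all inputs } y,
\]
% (b) yet no algorithm, regardless of how much training data and computing
%     power it is given, can output a network $\widehat{\Phi}$ guaranteed to satisfy
\[
  \| \widehat{\Phi}(y) - \Xi(y) \| \;\le\; \varepsilon_0 \quad \text{for all inputs } y.
\]
```

In other words, below the threshold the networks exist but cannot be computed.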

Vegard Antun. Photo: Eivind Torgersen/UiO

These findings indicate that the world of NNs is far more nuanced than universal approximation theorems suggest, and that we can only design training algorithms for stable and accurate NNs in specific cases.

The above results show an essential difference between abstract existence and trainability. Mathematically proving the existence of a good NN is not enough – one must also show that it can be obtained in practice. This paradox is very much related to the work of Alan Turing and Kurt Gödel. About 100 years ago, mathematicians set out to show that mathematics was the ultimate consistent language of the universe. There was a tremendous amount of optimism, similar to the optimism we see in AI today.

However, Turing and Gödel turned this optimism on its head, by showing that it is impossible to prove whether certain mathematical statements are true or false, and that some problems cannot be tackled with algorithms. Much later, the mathematician and Fields medalist Steve Smale proposed a list of 18 unsolved mathematical problems for the 21st century. His 18th problem, featured in the title of our recent work, concerned the limits of intelligence for both humans and machines. The mathematics of foundations, i.e., figuring out what is and is not possible, is now entering the world of AI.

The above paradox may seem gloomy, but it is important to stress that AI is not inherently flawed. The above results show that AI is only reliable in specific areas, using specific methods. The problem now is to figure out when AI can be made reliable. When 20th-century mathematicians identified foundational boundaries in their field, they did not stop studying mathematics. They simply had to find new paths, because they understood the limitations. Currently, the practical success of AI is far ahead of our understanding of these systems. A programme on the foundations of AI is needed to bridge this gap. It will be healthy for AI in the long run to figure out what can and cannot be done. The limitations of mathematics and computers identified by Gödel and Turing led to rich foundational theories, new techniques, and new methodology. Perhaps a similar foundations theory will blossom in AI.

A variant of this article has been published in TheScienceBreaker.

Related article in Norwegian: "Har påvist at det finnes oppgaver kunstige intelligenser aldri vil kunne løse" (roughly, "Proven that there are tasks artificial intelligences will never be able to solve").

Scientific article:

Matthew J. Colbrook, Vegard Antun and Anders C. Hansen: The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale’s 18th problem. Proceedings of the National Academy of Sciences (PNAS), March 2022.