Research Suggests That Even AI Will Have Limitations on What It Can Do

Updated: Feb 24, 2023

Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform people at complex games. However, researchers have now discovered there are fundamental theoretical limits to how stable and accurate these AI systems can actually get.

In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and checks whether the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results. It then adopts these as defaults, mimicking the process of learning in the human brain. A neural network is dubbed “deep” if it possesses multiple layers of neurons.
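The weight-adjustment process described above can be sketched in miniature. The example below (not from the research itself) trains a single linear "neuron" by gradient descent to learn the rule y = 2x: after each example, the link weight is nudged in the direction that reduces the error, which is the repeated-adjustment loop the paragraph describes.

```python
# Illustrative sketch: one "neuron" with a single connection weight w,
# repeatedly adjusted so its output better matches the training targets.

def train_neuron(data, lr=0.1, epochs=100):
    w = 0.0  # the connection weight, strengthened/weakened over time
    for _ in range(epochs):
        for x, y in data:
            pred = w * x           # the neuron's current output
            error = pred - y       # how far it is from the target
            w -= lr * error * x    # adjust the link to reduce the error
    return w

# Examples of the rule y = 2x; the weight converges toward 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(train_neuron(data))
```

A real deep network does the same thing across many layers of such weights at once, which is where both its power and its analysis difficulties come from.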

Previous research offered mathematical proof that stable, accurate neural networks exist for a wide variety of problems. In a new study, however, researchers find that although such networks may exist in theory for many problems, there may paradoxically be no algorithm that can actually compute them. A digital computer can compute only certain specific neural networks, and sometimes computing a desirable one is impossible. Moreover, it can be hard to evaluate a network's stability until it is implemented, which may be too late for some scenarios.
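What instability looks like in practice can be illustrated with a toy example (mine, not the paper's construction): a single sigmoid neuron with a very large weight behaves like a near-step function, so it is perfectly "accurate" on points away from its decision boundary yet flips its answer for two inputs a hair's breadth apart.

```python
import math

# Hypothetical tiny classifier used only to illustrate instability:
# a sigmoid neuron with a huge weight approximates a hard threshold at 0.5.

def neuron(x, w=1e6, b=-0.5e6):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid(w*x + b)

def label(x):
    return int(neuron(x) > 0.5)

eps = 1e-4
print(label(0.5 - eps), label(0.5 + eps))  # a tiny nudge flips the label
```

An accurate network can therefore still be fragile: the difficulty the researchers point to is guaranteeing, by algorithm, that the network you actually computed avoids this kind of sensitivity.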

These new findings do not suggest that all neural networks are fatally flawed, but rather that they may prove stable and accurate only in limited scenarios. Nor are the findings aimed at dampening artificial-intelligence research; instead, they may spur new work exploring ways to work around these limits.
