Identifying Key Vulnerabilities: Research Reveals AI's Fundamental Weaknesses

25 January 2024

Researchers from the University of Copenhagen have shown that it is impossible to create fully stable Machine Learning algorithms for complex problems, emphasizing the importance of rigorous testing and an understanding of AI's limitations.

Machine Learning technologies such as ChatGPT are increasingly used, but even the most sophisticated algorithms have limits. A research team at the University of Copenhagen has now mathematically proven that, for anything beyond simple problems, fully stable AI algorithms cannot be developed. This discovery may lead to better procedures for testing algorithms, and it underlines the marked differences between machine computation and human intellect.

The scientific paper detailing the findings has been accepted for publication at a prominent international conference on theoretical computer science.

While machines can interpret medical scan images more accurately than doctors, translate foreign languages, and may soon drive cars more safely than humans, even the best algorithms have flaws. A research team at the Department of Computer Science at the University of Copenhagen is working to uncover them.

Consider an automated vehicle reading a road sign. A human driver would not be distracted by a sticker placed on the sign. A machine, however, might be misled, because the sign now differs from the ones it was trained on.

“We desire stability in algorithms, such that minor input changes don't significantly alter the output. Real-life noise that humans typically ignore can confuse machines,” says Professor Amir Yehudayoff, who leads the group.
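To make the idea concrete in code, here is a minimal sketch (not taken from the paper; the classifier, weights, and magnitudes are invented for illustration) of a toy linear classifier whose output flips under a perturbation of vanishingly small size:

```python
import numpy as np

# Illustrative sketch only: a toy linear classifier whose decision
# flips under a tiny input perturbation. All names and numbers here
# are invented for illustration, not taken from the paper.

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # fixed "trained" weight vector

def classify(x):
    """Return class +1 or -1 for feature vector x."""
    return 1 if np.dot(w, x) >= 0 else -1

# Construct a point sitting just barely on the +1 side of the boundary.
x = rng.normal(size=100)
x -= (np.dot(w, x) / np.dot(w, w)) * w      # project onto the boundary
x += 1e-6 * w / np.linalg.norm(w)           # nudge slightly to the +1 side

noise = -2e-6 * w / np.linalg.norm(w)       # perturbation of tiny norm
print(classify(x), classify(x + noise))     # prints: 1 -1
```

Near its decision boundary, this classifier is maximally unstable: a perturbation of norm around two millionths is enough to change the answer, the digital analogue of the sticker on the road sign.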

The group, together with researchers from other countries, is the first to prove mathematically that it is impossible to create Machine Learning algorithms that are always stable, except for simple problems. The scientific paper outlining the result has been accepted for publication at the renowned international theoretical computer science conference Foundations of Computer Science (FOCS).

“I should clarify that we have not worked directly on automated car applications. Nonetheless, this appears to be a problem too complex for algorithms to remain stable in every situation,” states Amir Yehudayoff. He adds that this does not necessarily carry major consequences for the development of autonomous cars:

“If the algorithm only errs under very rare circumstances, it might be tolerable. But it’s a serious issue if it does so under a broad array of situations.”

The scientific paper does not offer a direct method for industry to discover glitches in its algorithms; that was not the aim, explains the professor:

“We are devising a language to talk about the weaknesses in Machine Learning algorithms. This could lead to guidelines stipulating how algorithms should be tested. Over time, this might contribute to the creation of better and more stable algorithms.”
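As a rough indication of what such testing guidelines might one day formalize, here is a naive empirical stability probe (an illustrative sketch, not a procedure from the paper; the function name and parameters are invented):

```python
import numpy as np

# Illustrative sketch only: probe an algorithm with many small random
# perturbations and report how often its output changes. This is an
# assumed testing idea, not a method from the paper.

def stability_failures(algorithm, x, eps=1e-3, trials=1000, seed=0):
    """Fraction of random perturbations of norm <= eps that change
    the output of `algorithm` at input x."""
    rng = np.random.default_rng(seed)
    base = algorithm(x)
    failures = 0
    for _ in range(trials):
        noise = rng.normal(size=x.shape)
        noise *= eps * rng.uniform() / np.linalg.norm(noise)
        if algorithm(x + noise) != base:
            failures += 1
    return failures / trials
```

Applied to the toy classifier above, such a probe would report a failure rate near zero at typical inputs but roughly one half at the boundary point, flagging exactly where the algorithm is unstable.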

One possible application could be algorithm testing for digital privacy protection.

“Some firms might declare their solution for privacy protection is foolproof. First off, our technique could help demonstrate that the solution is not completely safe. Secondly, it can identify weak points,” Yehudayoff explains.

The scientific paper primarily adds to theoretical knowledge; the mathematical aspect in particular is groundbreaking, he adds: “We intuitively understand that a stable algorithm should perform almost as well as before when it encounters a slight amount of input noise, just like the road sign with a sticker. As theoretical computer scientists, we require a solid definition. We need to articulate the problem mathematically: How much noise must an algorithm endure, and how close should the output stay to the original, if the algorithm is to be considered stable? These are the questions we propose answers to.”
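One common way to formalize such a requirement (an illustrative definition in the spirit of the quote, not necessarily the exact one used in the paper) is as a Lipschitz-style condition:

```latex
% Illustrative definition (assumed, not quoted from the paper):
% an algorithm A is (\varepsilon, \delta)-stable at an input x if
% every nearby input yields a nearby output.
\[
  d_{\mathrm{in}}(x, x') \le \varepsilon
  \;\Longrightarrow\;
  d_{\mathrm{out}}\bigl(A(x), A(x')\bigr) \le \delta,
\]
% where d_in measures distance between inputs (how much noise is
% allowed) and d_out between outputs (how far the answer may move).
```

Here ε plays the role of “how much noise the algorithm must endure” and δ of “how close the output must stay to the original”.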

The scientific paper has received considerable interest from peers in the theoretical computer science community, but not from the tech industry, at least not yet.

“A delay between a new theoretical advance and interest from people working with real-world applications should be expected,” says Amir Yehudayoff, adding jokingly: “And some theoretical developments might stay unnoticed for good.”

However, he does not see that happening in this case: “Machine Learning continues to progress rapidly, and it is important to remember that even solutions that are very successful in the real world still have limitations. The machines may sometimes seem able to think, but after all they do not possess human intelligence. This is important to keep in mind.”

