
4 ideas about AI that even 'experts' get wrong - The Next Web
May 08, 2021

Part of AI's continued cycle of missed goals is due to incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.

These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence: AI systems that can match the cognitive and general problem-solving skills of humans.

Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”

“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.

When it comes to humans, we would expect an intelligent person to do hard things that take years of study and practice.

Mitchell describes the second fallacy as “Easy things are easy and hard things are hard.”

“The things that we humans do without much thought—looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone—turn out to be the hardest challenges for machines,” Mitchell writes.

“Conversely, it’s often easier to get machines to do things that are very hard for humans; for example, solving complex mathematical problems, mastering games like chess and Go, and translating sentences between hundreds of languages have all turned out to be relatively easier for machines.”

That’s why, for instance, the computer vision systems used in self-driving cars need to be complemented with technologies such as lidar and mapping data.

“AI is harder than we think, because we are largely unconscious of the complexity of our own thought processes,” Mitchell writes.

Mitchell calls the third fallacy “the lure of wishful mnemonics” and writes, “Such shorthand can be misleading to the public trying to understand these results (and to the media reporting on them), and can also unconsciously shape the way even AI experts think about their systems and how closely these systems resemble human intelligence.”

Consider, for example, the General Language Understanding Evaluation (GLUE) benchmark, developed by some of the most esteemed organizations and academic institutions in AI.

But contrary to media portrayals, an AI agent that achieves a higher GLUE score than a human is not thereby better at language understanding than humans.

“While machines can outperform humans on these particular benchmarks, AI systems are still far from matching the more general human abilities we associate with the benchmarks’ names,” Mitchell writes.
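
To make the point concrete, here is a minimal sketch of what a single GLUE task score actually measures, using the open-source Hugging Face datasets and evaluate libraries; this example is not from the article, and the majority-class “predictions” are a stand-in for the output of a real model:

```python
# Minimal sketch: computing a score on one GLUE task (SST-2, binary
# sentiment classification). Assumes the Hugging Face `datasets` and
# `evaluate` packages; the hard-coded predictions are illustrative only.
from datasets import load_dataset
import evaluate

sst2 = load_dataset("glue", "sst2", split="validation")
metric = evaluate.load("glue", "sst2")

# Stand-in for a model: predict the majority class for every example.
predictions = [1] * len(sst2)
score = metric.compute(predictions=predictions, references=sst2["label"])
print(score)  # a single task-level number, e.g. {'accuracy': 0.51}
```

The headline “GLUE score” is an average of such per-task numbers; a model that beats the human baseline on that average has done well on nine specific tasks, not demonstrated human-level language understanding.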

This kind of misleading shorthand has led to a stream of clickbait articles warning that AI systems were becoming smarter than humans and communicating in secret dialects.

Years later, the most advanced language models still struggle to understand basic concepts that most humans learn at a very young age without explicit instruction.
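
As one hedged illustration of how such gaps are probed (this example is not from the article; the prompt and model are arbitrary choices), a masked language model can be asked to complete a statement of basic physical common sense:

```python
# Minimal sketch: probing a masked language model for commonsense
# knowledge. The model and prompt are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("If you drop a glass on a stone floor, it will [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```

Whether the model ranks the physically obvious completion (“break”) highly often depends on exact phrasing; that brittleness is the kind of gap Mitchell is pointing to.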

“A growing cadre of researchers is questioning the basis of the ‘all in the brain’ information processing model for understanding intelligence and for creating AI,” she writes.

“Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world.”

Developing general AI will require adjusting our understanding of intelligence itself.

“It’s clear that to make and assess progress in AI more effectively, we will need to develop a better vocabulary for talking about what machines can do,” Mitchell writes.

“And more generally, we will need a better scientific understanding of intelligence as it manifests in different systems in nature.”

“This is the current frontier of AI research, and one encouraging way forward is to tap into what’s known about the development of these abilities in young children,” Mitchell writes.

“Understanding these fallacies and their subtle influences can point to directions for creating more robust, trustworthy, and perhaps actually intelligent AI systems,” Mitchell writes.
