Artificial Intelligence Neural Network Learns When It Should Not Be Trusted - SciTechDaily

Nov 22, 2020

MIT researchers have developed a way for deep learning neural networks to rapidly estimate confidence levels in their output.

Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis.

Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions.

“We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Alexander Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

“By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.

“We’ve had huge successes using deep learning,” says Amini. “One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong.”

Uncertainty analysis in neural networks isn’t new.

But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence.
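To make that cost concrete, the sketch below shows one common sampling-based approach, Monte Carlo dropout, in PyTorch; the network, layer sizes, and sample count are illustrative assumptions rather than any setup described in the article. Because each prediction needs many forward passes, this style of uncertainty estimation is slow for split-second decisions.

# Minimal sketch of sampling-based uncertainty (Monte Carlo dropout).
# The model, layer sizes, and sample count are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    # Run the network many times with dropout active; the spread of the
    # sampled predictions serves as the uncertainty estimate.
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty

x = torch.randn(8, 16)              # a batch of 8 illustrative inputs
mean, std = mc_dropout_predict(model, x)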

The researchers devised a way to estimate uncertainty from only a single run of the neural network.

Their network outputs not only a prediction but also the evidence behind it, capturing both the uncertainty inherent in the input data and the uncertainty in the model’s own prediction. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
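As a rough illustration of how a single forward pass can yield a prediction plus both kinds of uncertainty, here is a minimal PyTorch sketch assuming the Normal-Inverse-Gamma parameterization from the team's deep evidential regression paper; the layer sizes, activations, and constant shifts are illustrative choices, not the authors' exact code.

# Sketch of a single-pass "evidential" output head: the network emits four
# parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution
# per target, from which both uncertainties are computed in closed form.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)  # gamma, nu, alpha, beta

    def forward(self, h):
        gamma, raw_nu, raw_alpha, raw_beta = self.fc(h).chunk(4, dim=-1)
        nu = F.softplus(raw_nu)              # nu > 0
        alpha = F.softplus(raw_alpha) + 1.0  # alpha > 1
        beta = F.softplus(raw_beta)          # beta > 0
        return gamma, nu, alpha, beta

def uncertainties(gamma, nu, alpha, beta):
    # Data (aleatoric) uncertainty: noise inherent in the inputs.
    aleatoric = beta / (alpha - 1.0)
    # Model (epistemic) uncertainty: what the network itself does not know.
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

features = torch.randn(8, 64)                # illustrative feature batch
gamma, nu, alpha, beta = EvidentialHead(64)(features)
prediction = gamma                           # one forward pass gives everything
aleatoric, epistemic = uncertainties(gamma, nu, alpha, beta)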

They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. distance from the camera lens) for each pixel.

Their network’s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty.

“It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” Amini says.

To stress-test their calibration, the team also showed that the network projected higher uncertainty for “out-of-distribution” data — completely new types of images never encountered during training.

In these cases, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” says Amini.

The team also tested whether the network could flag images that had been doctored with adversarial noise. The effect was subtle, barely perceptible to the human eye, but the network sniffed out those images, tagging its output with high levels of uncertainty.
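In practice, such an estimate could gate whether a prediction is acted on at all. The sketch below, with an entirely hypothetical threshold, flags high-uncertainty outputs for human review instead of automatic use.

import torch

TRUST_THRESHOLD = 0.5  # hypothetical value for illustration only

def triage(prediction: torch.Tensor, epistemic: torch.Tensor):
    # Act automatically only where the model's own uncertainty is low;
    # everything else is flagged for a "second opinion".
    trusted = epistemic < TRUST_THRESHOLD
    return prediction[trusted], ~trusted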

Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work.

“We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini.

“Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.
