Sunday, October 17, 2021

New Training Model Helps Autonomous Cars See AI's Blind Spots

Since their introduction several years ago, autonomous vehicles have been slowly making their way onto the road in greater and greater numbers, but the public remains wary of them despite the undeniable safety advantages they offer.

Autonomous vehicle companies are fully aware of the public's skepticism. Every crash makes it more difficult to gain public trust, and the fear is that if companies do not manage the autonomous vehicle roll-out properly, the backlash could close the door on self-driving car technology the way the Three Mile Island accident shut down the growth of nuclear power plants in the United States in the 1970s.

Making autonomous vehicles safer than they already are means identifying the cases that programmers might never have thought of, where the AI will fail to respond appropriately but a human driver will intuitively recognize a potentially dangerous situation. New research from a joint effort by MIT and Microsoft may help bridge this gap between machine learning and human intuition to produce the safest autonomous vehicles yet.

Reassuring to Wary Public

Were public hesitancy not a factor, every car on the road would be replaced with an autonomous vehicle within a couple of years. Every truck would be fully autonomous by now, and there would be no Uber or Lyft drivers, only shuttle cabs that you would order by phone and that would pull up smoothly to the curb in a couple of minutes without a driver in sight.

Accidents would still happen and people would still die as a result, but by some estimates, 90% of traffic fatalities around the world could be prevented with autonomous vehicles. Autonomous cars may need to recharge, but they do not need to sleep or take breaks, and they are single-mindedly concerned with carrying out the instructions in their programming.

Self-Driving Truck
Source: Daimler

For companies that rely on transport to move goods and people from point A to point B, replacing drivers with self-driving cars saves on labor, insurance, and other ancillary costs that come with having a large human workforce.

The cost savings and the safety gains are just too great to keep humans on the road and behind the wheel.

We fall asleep, we drive drunk, we get distracted, or we are simply bad at driving, and the consequences are both costly and deadly. A little over a million people die on the world's roads every year, and the move to autonomous commercial trucking alone could cut transportation costs for some companies in half.

Yet the public is not convinced, and it grows more skeptical with each report of an accident involving a self-driving car.

Edge Cases: The Achilles Heel of Self-Driving Cars?

Whether it is fair or not, the burden of demonstrating autonomous vehicle safety falls on those advocating for self-driving vehicle technology. To do this, companies must work to identify and address the edge cases that can cause high-profile accidents and erode public confidence in otherwise safe technology.

What happens when a vehicle is driving down the road and spots a weather-beaten, bent, misshapen, and faded stop sign? Though an obviously rare situation (transportation departments would probably have removed such a sign long before it reached this awful state), edge cases are exactly this type of situation.

An edge case is a low probability event that should not happen but does happen in the real world, exactly the kinds of cases that programmers and machine learning processes might not consider.

Source: KNOW MALTA by Peter Grima / Flickr

In a real-world scenario, the autonomous vehicle could detect the sign but have no idea that it is a stop sign. Failing to treat it as one, it could decide to proceed through the intersection at speed and cause an accident.

A human driver may have a hard time identifying the stop sign too, but that is much less likely for experienced drivers. We know what a stop sign is and if it's in anything other than complete ruin, we'll know to stop at the intersection rather than proceed through it.

This type of situation is exactly what researchers at MIT and Microsoft have come together to identify and solve, which could improve autonomous vehicle safety and, hopefully, reduce the types of accidents that could slow or prevent the adoption of autonomous vehicles on our roads.

Modeling at the Edge

In two papers, presented at last year's Autonomous Agents and Multiagent Systems conference and the forthcoming Association for the Advancement of Artificial Intelligence conference, the researchers explain a new model for training autonomous systems like self-driving cars that uses human input to identify and fix these "blind spots" in AI systems.

The researchers run the AI through simulated training exercises, as traditional systems do, but in this case a human observes the machine's actions and identifies when the machine is about to make, or has made, a mistake.

The researchers then take the machine's training data, synthesize it with the human observer's feedback, and put it through a machine-learning system. This system then creates a model that researchers can use to identify situations where the AI is missing critical information about how to behave, especially in edge cases.
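The core idea of combining simulation data with human feedback can be sketched in a few lines. The snippet below is a minimal illustration, not the researchers' actual method: it assumes a hypothetical data format in which each record pairs a (discretized) state with a flag indicating whether the human observer marked the agent's action there as a mistake, then flags states whose observed error rate crosses a threshold as candidate blind spots.

```python
from collections import defaultdict

def build_blind_spot_model(feedback, min_error_rate=0.2):
    """Aggregate human feedback per state and flag likely blind spots.

    feedback: list of (state, human_flagged_error) pairs, where the
    flag is True when the observer judged the agent's action in that
    state to be a mistake. (Hypothetical format for illustration.)
    Returns {state: observed_error_rate} for candidate blind spots.
    """
    counts = defaultdict(lambda: [0, 0])  # state -> [errors, total]
    for state, flagged in feedback:
        counts[state][1] += 1
        if flagged:
            counts[state][0] += 1
    # A state is a candidate blind spot when the error rate among
    # human labels meets or exceeds the threshold.
    return {s: e / n for s, (e, n) in counts.items()
            if e / n >= min_error_rate}

# Toy run: the observer repeatedly corrects the agent at faded signs.
feedback = [
    ("clean_stop_sign", False), ("clean_stop_sign", False),
    ("faded_stop_sign", True), ("faded_stop_sign", True),
    ("faded_stop_sign", False),
]
model = build_blind_spot_model(feedback)
print(model)  # only "faded_stop_sign" is flagged
```

In the actual research, the aggregation is probabilistic rather than a simple threshold, since a state that usually goes fine but occasionally draws a correction is exactly the ambiguous case the method is designed to surface.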

Autonomous Sight
Source: Berkeley Deep Drive

"The model helps autonomous systems better know what they do not know," according to Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory at MIT and the lead author of the study.

"Many times, when these systems are deployed, their trained simulations do not match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors."

The problem arises when a situation occurs, such as the distorted stop sign, in which the majority of cases the AI has been trained on does not reflect the real-world condition it should have been trained to recognize. In this case, it has been trained that stop signs have a certain shape, color, and so on. It may even have built a list of shapes that could be stop signs and would know to stop for those, but if it cannot identify a stop sign properly, the situation could end in disaster.

"[B]ecause unacceptable actions were made far less often than acceptable actions, the system will eventually learn to predict all situations as safe, which can be extremely dangerous," says Ramakrishnan.
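This class-imbalance trap is easy to demonstrate with toy numbers. In the sketch below (illustrative only, not the paper's setup), unacceptable actions make up 2% of the labels, so a trivial learner that always predicts the majority class scores 98% accuracy while missing every single unsafe case:

```python
# Labels from a hypothetical simulation run:
# 1 = unacceptable action, 0 = acceptable action.
labels = [0] * 98 + [1] * 2

# A naive learner that just predicts the most common class
# will effectively call every situation "safe".
majority = max(set(labels), key=labels.count)
predictions = [majority] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
missed_unsafe = sum(1 for p, y in zip(predictions, labels)
                    if y == 1 and p == 0)

print(f"accuracy: {accuracy:.2f}")        # looks excellent: 0.98
print(f"unsafe cases missed: {missed_unsafe}")  # yet both are missed
```

High accuracy on imbalanced data is exactly the "extremely dangerous" outcome Ramakrishnan describes, which is why the blind-spot model must weigh the rare human corrections rather than optimize raw accuracy.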

Meeting the Highest Standards for Safety

By showing researchers where the AI has incomplete data, autonomous systems can be made safer at the edge, where high-profile accidents can occur. If they can do this, we can get to the point where public trust in autonomous systems starts growing, and the rollout of autonomous vehicles can begin in earnest, making us all safer as a result.
