Stefan Heck is the CEO of Nauto, and a rare engineer with a Ph.D. in philosophy. Nauto works with commercial vehicle fleets to install computer-vision and AI equipment that studies road conditions and driver behavior. Insights drawn from that data about human driving patterns are then sold to AV companies, where they help shape how AVs behave on the road and keep people safe.
But whatever those fleet drivers do influences the algorithm's decisions, and those decisions would then be replicated across millions of cars.
The idea is that strictly following the law isn’t always the right thing to do. For instance, Nauto’s data shows that drivers tend to exceed the posted speed limit by about 15 percent, and that it’s safer at times for other drivers to simply go with the flow rather than follow the speed limit and become a bottleneck. That’s when it’s likely that cars will go around the slower driver, which is risky and can increase the fatality rate.
However, Heck says, “We kill 1.2 million people globally every year in car accidents. Any delay we put on [automotive] autonomy is killing people.”
Humans feed machines the data upon which they will eventually make decisions, and those decisions could put people's lives at risk. A human behind the wheel confronts such choices only occasionally; driverless cars must grapple with them at scale.
AI increasingly drives decisions in industries like health care, law enforcement, and banking. Whose ethics should it follow?
Foster, Tom. (Nov. 2018). "Sure, Self-Driving Cars Are Smart. But Can They Learn Ethics?" Inc.