Researchers Make Breakthroughs in Autonomous Vehicle Systems


In recent years, autonomous vehicles have made significant progress. Waymo, a subsidiary of Google’s parent company Alphabet, recently incorporated its next-generation hardware sensor system into its Jaguar I-Pace vehicles. The company will use the system to collect data for training machine-learning models. In March, General Motors announced plans to invest more than $20 billion through 2025 in its fleet of all-electric and autonomous vehicles. Last year in Beijing, 77 autonomous vehicles from 13 China-based companies steered their way across 1.04 million kilometers (646,226 miles) of busy urban streets, an increase of 153,600 kilometers (95,443 miles) over 2018.

Autonomous vehicle developers still face some challenging terrain, whether it’s figuring out how to cut back on the long distances required to train their vehicles or how to help them traverse hard-to-see roads. However, researchers are uncovering potential breakthroughs.

“Photorealistic” Simulation System to Help Speed Training

Current estimates indicate that autonomous vehicles need at least 11 billion miles of on-the-road training to match the capabilities of human drivers. Some experts, however, think simulations can help speed up these training sessions.

Recently, researchers at MIT designed a simulation system known as Virtual Image Synthesis and Transformation for Autonomy (VISTA). By incorporating real-world driving data, VISTA can generate a photorealistic universe with endless driving scenarios. In this virtual world, autonomous vehicles can safely learn how to avoid and recover from accidents in a fraction of the time it would take them to learn on physical roadways.

Here’s how it works: when a controller travels a specified number of miles within VISTA without getting into an accident, it receives a reward, which incentivizes the controller to learn from its errors. During a test of the program, a VISTA-trained controller steered through a number of unfamiliar roadways and successfully recovered from near-collisions.
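The reward scheme described above can be sketched as a simple reinforcement-style signal. The function below is a hypothetical simplification for illustration only; the article does not specify VISTA’s actual reward design, so the names, the distance threshold, and the proportional shaping are all assumptions.

```python
# Hypothetical simplification of a distance-based reward; VISTA's actual
# reward design is not detailed in the article, so names and shaping are assumed.

def step_reward(distance_m: float, crashed: bool, goal_m: float = 1000.0) -> float:
    """Give the controller credit proportional to accident-free distance.

    A crash ends the run with no reward, so controllers that learn to
    avoid and recover from accidents accumulate more reward over time.
    """
    if crashed:
        return 0.0
    return min(distance_m / goal_m, 1.0)

# A full accident-free run earns the maximum reward of 1.0;
# crashing partway earns nothing for that run.
print(step_reward(1000.0, crashed=False))  # → 1.0
print(step_reward(400.0, crashed=True))    # → 0.0
```

Under a scheme like this, the only way for a controller to accumulate reward is to drive farther without incident, which is what pushes it to recover from the near-collision scenarios the simulator generates.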

“It’s tough to collect data in these edge cases that humans don’t experience on the road,” Alexander Amini, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL), told MIT News. “In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world.”

MIT Develops System to Help Autonomous Vehicles Navigate Snowy Roads

Snow-covered roads can easily confuse autonomous vehicles, which poses a major challenge for developers. A team from MIT’s Computer Science and Artificial Intelligence Laboratory is designing a system that may help. The system, called localizing ground-penetrating radar (LGPR), adapts the ground-penetrating radar technology used in grave surveying. LGPR scans the earth beneath the vehicle, creates an image of the subsurface, and matches it against a previously recorded map to pinpoint where the vehicle is on the street.
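The matching step can be illustrated with a toy example. The sketch below is a deliberate simplification: it treats the stored LGPR map and the live scan as one-dimensional signals and finds the offset where they correlate best, whereas the real system works on far richer radar imagery. All names and values here are hypothetical.

```python
# Toy illustration of subsurface map matching (hypothetical names; the real
# LGPR system matches 2-D radar images, not 1-D lists).

def localize(stored_map, live_scan):
    """Return the offset in stored_map where live_scan matches best,
    scored by a simple dot-product correlation."""
    n = len(live_scan)
    best_offset, best_score = 0, float("-inf")
    for i in range(len(stored_map) - n + 1):
        score = sum(a * b for a, b in zip(stored_map[i:i + n], live_scan))
        if score > best_score:
            best_offset, best_score = i, score
    return best_offset

# A distinctive subsurface signature was mapped at offset 5; matching the
# live scan against the stored map recovers the vehicle's position.
stored = [0, 0, 0, 0, 0, 3, 1, 4, 1, 5, 0, 0]
scan = [3, 1, 4, 1, 5]
print(localize(stored, scan))  # → 5
```

The appeal of looking underground is that this matching signal does not change when snow covers the lane markings, though, as the researchers note below, it can drift when the ground itself changes.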

Although LGPR navigated successfully in snow, it does have some weaknesses. Rain, which makes the ground beneath the road soggy and harder to scan, as well as landslides and earthquakes, can create discrepancies between the original LGPR map and new scans.

“[These conditions] all may make the maps less accurate over time. Prior knowledge about these events could be used to update maps in specific places,” Ph.D. student Teddy Ort, who led the study, told Popular Mechanics. “Otherwise, our experience shows maps in undisturbed areas can remain valid for many months, or even years.”

“Phantom Images” Can Fool Autopilots

Researchers from Ben-Gurion University of the Negev’s Cyber Security Research Center discovered a potentially dangerous flaw in the autopilot systems used in autonomous and semi-autonomous vehicles. By projecting phantom images onto billboards and roads, the researchers found they could trick the vehicles into swerving or slamming on their brakes. The researchers fear criminals may take advantage of the defect to provoke accidents, so they are examining how neural network technology might help resolve the problem.
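One way to reason about such attacks, which is not the Ben-Gurion team’s method, is a temporal-consistency check: a projected phantom may flash for only a few frames, while a real obstacle persists. The sketch below is a hypothetical illustration of that idea; the class name, window size, and threshold are all assumptions.

```python
from collections import deque

# Hypothetical mitigation sketch, not the researchers' approach: only react
# to an obstacle detected in most of the last few frames, since a projected
# phantom may flash only briefly. Window and threshold are illustrative.

class PersistenceFilter:
    def __init__(self, window: int = 5, threshold: int = 4):
        self.history = deque(maxlen=window)  # recent per-frame detections
        self.threshold = threshold

    def update(self, detected: bool) -> bool:
        """Record this frame's detection; return True only if the obstacle
        was seen in at least `threshold` of the last `window` frames."""
        self.history.append(detected)
        return sum(self.history) >= self.threshold

f = PersistenceFilter()
# A brief two-frame flicker never triggers a reaction...
flicker = [f.update(d) for d in [True, True, False, False, False]]
# ...but a persistently detected obstacle eventually does.
steady = [f.update(True) for _ in range(5)]
print(flicker)  # → [False, False, False, False, False]
print(steady)   # → [False, False, False, True, True]
```

A filter like this trades a few frames of reaction latency for robustness against short-lived phantoms; the researchers’ own direction, per the quote below, is to train detectors to distinguish real objects from fake ones.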

“This type of attack is currently not being taken into consideration by the automobile industry,” lead author Ben Nassi told Vision Spectra. “These are not bugs or poor coding errors, but fundamental flaws in object detectors that are not trained to distinguish between real and fake objects and use feature matching to detect visual objects.”

Getting Ready for Autonomy

Prepare for the latest developments with training in foundational and practical applications of autonomous, connected, and intelligent vehicle technologies. Developed by leading experts in the field, the IEEE Guide to Autonomous Vehicle Technology is a seven-course program offered online.

Connect with an IEEE Content Specialist today to learn more about purchasing the program for your organization.

Interested in purchasing the program for yourself? Access it through the IEEE Learning Network (ILN)!


Resources

Mathewson, Rob. (23 March 2020). System trains driverless cars in simulation before they hit the road. MIT News.

(20 March 2020). Researchers Fool Autonomous Vehicle Systems with Phantom Images. Vision Spectra.

Korosec, Kirsten. (6 March 2020). Inside the next-gen tech on Waymo’s self-driving Jaguar I-Pace. TechCrunch.

Wayland, Michael. (4 March 2020). General Motors to spend $20 billion through 2025 on new electric, autonomous vehicles. CNBC.

Wiggers, Kyle. (2 March 2020). 77 autonomous vehicles drove over 500,000 miles across Beijing in 2019. VentureBeat.

Linder, Courtney. (28 February 2020). To Help Self-Driving Cars Navigate the Snow, Researchers Are Looking Underground. Popular Mechanics.
