A new report from the European Union Agency for Cybersecurity (ENISA) and the Joint Research Centre (JRC) illuminates the cybersecurity issues related to artificial intelligence (AI) systems in autonomous vehicles and suggests a number of ways manufacturers can reduce the risks.
The report, titled “Cybersecurity Challenges in the Uptake of Artificial Intelligence in Autonomous Driving,” arrives as vehicle manufacturers prepare for the cybersecurity regulations (WP.29) adopted by the United Nations Economic Commission for Europe. The regulations aim to strengthen cybersecurity and software update management in connected vehicles. In the European Union, WP.29 becomes mandatory for vehicle manufacturers in July 2024.
“It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks,” JRC Director-General Stephen Quest told Help Net Security. “To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving.”
Why Is Cybersecurity a Concern for Connected Vehicles?
As discussed in previous posts, vehicles are becoming increasingly connected and, as a result, more vulnerable to malicious attacks. A major concern is the over-the-air (OTA) updates that AI systems in connected vehicles routinely receive. If the update mechanism contains weaknesses, it can become a backdoor through which attackers take control of a vehicle. Once inside, attackers can tamper with safety-critical systems such as braking and steering. They could also attempt to access personal data such as a driver’s financial information.
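The basic defense against a compromised update channel is to reject any package that fails an integrity check before it is installed. The sketch below illustrates the idea with Python’s standard library and a hypothetical provisioning key; production OTA systems use asymmetric signatures and a full software update management process, not a shared secret like this.

```python
import hashlib
import hmac

# Hypothetical shared key provisioned at manufacture time (illustrative only;
# real OTA pipelines verify asymmetric signatures from the manufacturer).
VEHICLE_KEY = b"example-provisioning-key"

def sign_update(package: bytes, key: bytes = VEHICLE_KEY) -> str:
    """Compute an HMAC-SHA256 tag over the raw update package."""
    return hmac.new(key, package, hashlib.sha256).hexdigest()

def verify_update(package: bytes, tag: str, key: bytes = VEHICLE_KEY) -> bool:
    """Accept the package only if its tag matches; constant-time comparison
    avoids leaking information through timing differences."""
    expected = hmac.new(key, package, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Any package modified in transit produces a different tag and is rejected before installation, which is the property an OTA backdoor would otherwise exploit.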
Another way attackers can target connected vehicles is by manipulating roadside infrastructure. For example, an attacker could alter a speed limit sign so that the vehicle’s AI perception system misreads the number, effectively commanding the vehicle to travel dangerously fast.
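One common mitigation is to treat the camera reading as just one input and cross-check it against independent sources, such as an onboard map, before acting on it. The function below is a simplified, hypothetical plausibility check (the function name, thresholds, and road-type caps are illustrative assumptions, not from the report):

```python
# Illustrative per-road-type hard caps in km/h (assumed values for the sketch).
ROAD_TYPE_MAX_KMH = {"urban": 60, "rural": 100, "motorway": 130}

def plausible_speed_limit(perceived_kmh: int, map_limit_kmh: int, road_type: str) -> int:
    """Return the speed limit to act on. Fall back to the map value when the
    camera reading exceeds what the road type allows, or disagrees sharply
    with the map (a possible sign of a manipulated sign)."""
    hard_cap = ROAD_TYPE_MAX_KMH.get(road_type, 50)
    if perceived_kmh > hard_cap or abs(perceived_kmh - map_limit_kmh) > 30:
        return map_limit_kmh  # treat the sign reading as suspect
    return perceived_kmh
```

With this check, a sign manipulated from 35 to 85 km/h on an urban road would be discarded in favor of the map value rather than accelerating the vehicle.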
Recommendations from the ENISA and JRC Report
To mitigate these risks, the ENISA and JRC report suggests the following recommendations:
- Create both proactive and reactive monitoring and maintenance processes for AI systems. Proactive monitoring continuously collects data to improve the models and feed routine software updates, while reactive monitoring detects and corrects incorrect outputs after they occur.
- Make systematic risk assessments that focus on AI elements across their life cycles.
- Create alternative mechanisms vehicles can switch to during unexpected incidents, such as an attack or manipulation of a road sign.
- Build feedback loops that routinely test and monitor the vehicle’s systems.
- Set up an auditing process that allows for a forensic examination of the system so cybersecurity breaches can be understood. This could include keeping audit trails and logging relevant data.
- Embrace an appropriate AI security policy throughout the supply chain, including third parties, and ensure the supply chain is governed by an adequate AI security culture and privacy policies.
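The auditing recommendation above hinges on logs that investigators can trust after an incident. A standard technique is a hash-chained audit trail, where each entry includes the hash of the previous one so that any after-the-fact edit breaks verification. The class below is a minimal sketch of that idea (the class and method names are this post’s own, not from the report):

```python
import hashlib
import json

class AuditLog:
    """Tamper-evident audit trail: each record chains the hash of the
    previous record, so retroactive edits are detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, event: dict) -> None:
        record = {"event": event, "prev": self._last_hash}
        # Hash a canonical serialization of the record (sorted keys).
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._last_hash = digest

    def verify(self) -> bool:
        """Recompute every hash in order; any edited or reordered entry fails."""
        prev = "0" * 64
        for record in self.entries:
            body = {"event": record["event"], "prev": record["prev"]}
            if record["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

An investigator can then replay the chain to confirm the log reflects what the vehicle’s systems actually recorded, which is precisely the forensic property the recommendation calls for.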
One way to foster sound AI security policy in the supply chain is to embrace “threat intelligence,” in which autonomous vehicle systems developers share lessons collectively across the entire industry. Threat intelligence helps ensure rigorous and routine risk assessment processes that can better identify possible AI-related risks and threats. It also works to inform the whole automotive supply chain of appropriate policies.
Ensuring cybersecurity within connected vehicles won’t be a simple job. However, by implementing these recommendations, vehicle manufacturers can help ensure their vehicles meet EU regulations while making their vehicles safer and more reliable in the process.
Securing Autonomous Vehicles
As the automotive industry continues to develop intelligent vehicles, there is a growing need to focus on security aspects. Automotive Cyber Security: Protecting the Vehicular Network is a five-course program that aims to foster the discussion on automotive cyber security solutions and requirements for not only intelligent vehicles, but also the infrastructure of intelligent transportation systems.
Contact an IEEE Content Specialist today to learn more about getting access to these courses for your organization.
Interested in the course for yourself? Visit the IEEE Learning Network.
(17 February 2021). Cybersecurity risks connected to AI in autonomous vehicles. Help Net Security.
Anderson, Chad. (21 September 2020). 5 simple steps to bring cyber threat intelligence sharing to your organization. Help Net Security.
European Union Agency for Cybersecurity (ENISA) and Joint Research Centre (JRC). (2021). Cybersecurity Challenges in the Uptake of Artificial Intelligence in Autonomous Driving.