Artificial intelligence (AI) has the potential to save thousands of lives during the COVID-19 pandemic. However, rapid deployment of these systems, such as apps that combine AI with blockchain to track COVID-19 cases, can pose substantial risks to privacy, according to a group of researchers who recently published their concerns in Nature Machine Intelligence.
The swift spread of COVID-19 makes it hard to gather adequate data in time for AI systems to accurately predict and diagnose cases. Beyond these performance concerns, a major worry is that the AI systems assisting pandemic decision makers in health care and government lack transparency. If an AI system causes harm, who will be held responsible?
Should AI be leveraged for pandemic-related use cases, it could have a long-lasting influence over public attitudes toward AI. If a system were to cause harm, it would erode trust in AI among an already wary public.
In Emergencies, How Do We Ensure AI Systems Are Safe?
AI ethics have traditionally centered around broad principles such as fairness and accountability. However, these broad principles do not offer guidance in emergency situations, where ethical concerns can easily collide. For instance, if an AI system maker has to choose between developing a system that saves lives versus one that ensures privacy, as we are already seeing during the pandemic, how should they proceed? Broad ethical principles simply cannot provide all the answers.
What’s more, implementing new technology gradually and cautiously over time gives developers valuable leeway to learn from mistakes and make adjustments. In an emergency such as a pandemic, however, AI developers may find themselves in the difficult position of having to trade safety for speed. This poses a serious danger: when oversight is bypassed, the potential grows for mistakes and harmful decisions that a more thorough review process would have caught.
To address this problem, the Cambridge University research team mentioned earlier offers the following recommendations:
1) Make sure to include ethics experts within the team that is developing the AI system alongside the technical professionals. Known as “ethics by design,” this process ensures that ethics are embedded in your design from start to finish.
2) Reliable, rapid testing and verification of systems at scale is crucial in emergencies, particularly in an industry as massive as healthcare. For this reason, you must ensure the implementation of your new AI system is based on best practices and sound research suited to the industry’s needs.
3) The public must trust your AI system. One way to build public trust is to establish an independent oversight body that will review any possible ethical concerns posed by your system. For example, the “red teaming” approach uses an oversight body to deliberately look at the project from an outsider’s point of view in order to point out potential flaws or biases that the design team might have overlooked.
Ensuring that AI systems remain ethical in times of crisis has its challenges. However, by following a rigorous ethics process, AI developers won’t need to trade safety for speed.
Understanding AI and Ethics
As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Artificial Intelligence and Ethics in Design, a two-part online course program, is designed to help organizations apply the theory of ethics to the design and business of AI systems. It also serves as useful supplemental material in academic settings.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Ó hÉigeartaigh, Seán, Sundaram, Lalitha, Tzachor, Asaf, and Whittlestone, Jess. (22 June 2020). Artificial intelligence in a crisis needs ethics with urgency. Nature Machine Intelligence.
Floridi, Luciano, and Cowls, Josh. (1 July 2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.