The COVID-19 pandemic has triggered vast job losses as many businesses are unable to operate during this time. It’s also created a need to quickly identify people who have been infected. As a result, the pandemic is expected to hasten the growth of automation and artificial intelligence (AI). Some people believe that companies may rush to leverage robots to replace roles traditionally occupied by human workers. Others are eyeing AI-enabled phones as an easy way to trace the infected. However, without strong regulations and ethics in place, the rush to AI can have unforeseen consequences. Furthermore, AI ethics in the healthcare industry is a topic of particular importance due to the sensitive nature of medical records.
AI Applications in the Healthcare Field
“In addition to losing sight of the scale of job loss empowered by the use of robots and AI, we may hastily overlook the forms of bias embedded within AI and the invasiveness of the technology that will be used to track the coronavirus’s spread,” Ayanna Howard and Jason Borenstein write in MIT Sloan Management Review.
As the pandemic continues, healthcare systems may turn to AI algorithms to help doctors make difficult triage decisions about which patients to put on ventilators. Although it may be possible to use AI in this manner on a technical level, it’s important to remember that biases can be deeply embedded in the data sets used to train these AI systems. This could have deadly consequences, especially for women and people of color, who are already frequently misdiagnosed with conditions like heart disease.
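Bias of this kind can be made visible before deployment with a simple per-group audit of a model's error rates. The sketch below is illustrative only (the group names, labels, and predictions are hypothetical, not from any real triage system): it compares how often a model wrongly marks truly high-risk patients as low-risk in each demographic group.

```python
from collections import defaultdict

def false_negative_rates(records):
    """Compute the false negative rate per demographic group.

    Each record is (group, true_label, predicted_label), where label 1
    means "high risk". A false negative is a patient the model wrongly
    marked as low-risk -- the deadliest kind of triage error.
    """
    misses = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Hypothetical predictions from a triage model on high-risk patients.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 1, 1),
]
print(false_negative_rates(records))  # group_b is missed twice as often
```

A gap like the one above (25% vs. 50% missed) is exactly the kind of disparity that would be invisible in a single aggregate accuracy number.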
Another major concern is the potential impact AI systems may have on user privacy, as companies like Google and Apple propose smartphone apps that can be used to contact-trace those who’ve been diagnosed with COVID-19.
“Yet, once the precedent for this type of surveillance is established, how do you remove that power from governments, companies, and others? Are sunset clauses going to be built into organizations’ data collection and use plans?” ask Howard and Borenstein.
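Decentralized designs, such as the Google/Apple proposal, try to limit this surveillance risk by having phones broadcast short-lived pseudonymous identifiers instead of sharing locations. The sketch below is a greatly simplified illustration of that rotating-identifier idea, not the actual protocol; the key size, derivation scheme, and interval count are all assumptions for demonstration.

```python
import hashlib
import os

def rolling_ids(daily_key, intervals=144):
    """Derive short-lived broadcast identifiers from a device's daily key.

    A phone broadcasts a fresh identifier every few minutes; without the
    daily key, observers cannot link the identifiers to one another.
    """
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

# Each phone generates its own random daily key and broadcasts derived IDs.
alice_key = os.urandom(16)
heard_by_bob = set(rolling_ids(alice_key)[40:50])  # Bob was near Alice briefly

# If Alice later tests positive, she publishes her daily key; Bob re-derives
# her identifiers locally and checks for a match -- no location data shared.
published_keys = [alice_key]
exposed = any(rid in heard_by_bob
              for key in published_keys for rid in rolling_ids(key))
print(exposed)  # True: Bob learns he was exposed, and nothing more
```

Even this design raises the questions Howard and Borenstein pose: the published keys and match records still constitute collected data that needs governance.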
Companies will also undoubtedly benefit financially from the enormous troves of data these apps will collect. Without strong regulations in place, how can we ensure they will use that data ethically?
“Cases of abuse from covert data collection and sharing are already well documented,” Howard and Borenstein write. “Organizations involved in data collection and analysis—and their oversight—need to address these issues now versus later, when individuals will be less forgiving if their data is appropriated for other uses or used in ethically dubious ways.”
Can an AI Ethics “Toolbox” Ensure Safety and Trust Around AI Systems?
Recently, leading experts from Google Brain, OpenAI, Intel, and almost thirty other prestigious groups published a paper that outlines a “toolbox” of ideas to help AI developers ensure their products’ trustworthiness:
- Hire developers to spot bias in AI algorithms
- Employ a team of hackers to intentionally infiltrate the system in order to locate and correct dangerous vulnerabilities (known as “red teaming”)
- Commission audits from independent third-party organizations
- Implement audit trails and step-by-step documentation of how the AI system was developed
- Employ privacy-preserving machine learning (PPML), which aims to secure both the data and the models used in machine learning
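PPML covers a family of techniques; one of the simplest is differential privacy, in which calibrated random noise is added to an aggregate statistic so that no single person's record can be inferred from the released number. A minimal sketch of the Laplace mechanism follows (the epsilon value, clipping bounds, and patient data are illustrative assumptions, not a production implementation):

```python
import random

def dp_mean(values, lower, upper, epsilon, rng=random):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper]; Laplace noise scaled to the
    sensitivity of the mean is then added, so the released statistic
    reveals very little about any one individual record.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n  # one record shifts the mean at most this much
    scale = sensitivity / epsilon
    # Laplace(0, scale) noise: difference of two exponential samples.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_mean + noise

# Hypothetical patient ages; the hospital releases only the noisy mean.
ages = [34, 41, 29, 55, 62, 47, 38, 51]
print(dp_mean(ages, lower=0, upper=100, epsilon=1.0))
```

Smaller epsilon values mean more noise and stronger privacy; choosing that trade-off is itself an ethical decision the toolbox asks developers to document.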
This process would function similarly to the regulatory safety precautions taken by airlines.
“People who get on airplanes don’t trust an airline manufacturer because of its PR campaigns about the importance of safety–they trust it because of the accompanying infrastructure of technologies, norms, laws, and institutions for ensuring airline safety,” writes Ryan Daws in AI News.
By enacting robust safety plans that address ethical considerations, the AI industry can better develop systems with fewer biases, privacy violations, and other ethical risks.
Artificial Intelligence and Ethics
As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Artificial Intelligence and Ethics in Design is designed to help organizations apply the theory of ethics to the design and business of AI systems. This two-part online course program also serves as useful supplemental material in academic settings.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
References

Daws, Ryan. (20 April 2020). “Leading AI researchers propose ‘toolbox’ for verifying ethics claims.” AI News.
Howard, Ayanna, and Borenstein, Jason. (12 May 2020). “AI, Robots, and Ethics in the Age of COVID-19.” MIT Sloan Management Review.