Five Major Ethical Challenges AI Developers Should Consider

Artificial intelligence (AI) is on the rise, but many developers aren’t prepared to deal with the ethical challenges it poses, according to a new report from the consulting firm Capgemini titled “AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust.”

The report is based on an international survey conducted between April and May 2020 with input from 2,900 consumers in six countries and 884 executives from ten countries. It found that 90% of organizations knew of at least one circumstance in which an AI system created an ethical dilemma for their business.

Some key findings include:

  • Six out of ten organizations said they attracted legal scrutiny over their AI systems
  • 22% of organizations said they received backlash from customers over AI decision making 
  • Almost seven out of ten consumers expected an organization’s AI systems to be fair and unbiased
  • 67% of consumers said organizations that develop these systems should take responsibility for their AI algorithms when decisions go wrong

Additionally, the report found that while customers are increasingly interacting with AI systems during the COVID-19 pandemic, such as applications with “no touch” user interfaces, these systems are still being developed without adequate consideration for ethics.

Five AI Ethical Considerations for Chief Information Officers (CIOs)

While AI may be a boon to business and technology, it’s more vital than ever to prepare for ethical challenges ahead of development. Here are the top five ethical questions CIOs should ask when it comes to AI, according to The Enterprisers Project.

  1. If AI goes wrong, who is responsible? If an AI-powered autonomous taxi kills a pedestrian, who is liable: the rideshare company that owns the taxi, the company that manufactured it, or the programmer who developed the AI system? CIOs should consider how such incidents could affect their organizations and have a response plan in place.
  2. How do you explain what is impossible to explain? Due to the complexity of their algorithms, it can be difficult, if not impossible, to decipher the decision-making logic of artificial intelligence systems. If an AI system makes a harmful decision, you need to be able to explain how and why. However, when humans try to understand these systems, the conclusions we draw are largely a matter of interpretation rather than a faithful account of what actually happened. Currently, there is no way around this.
  3. How does AI compromise user data? As an explosion of artificial intelligence technology collects massive amounts of user data, it poses a mounting threat to privacy. As a result, regulators are getting more stringent about data security standards. It’s vital to consider how to best protect consumer data now, before you run into issues. 
  4. How can we prevent AI from being weaponized? Due to how real they look, AI-generated videos known as “deep fakes” can be used as tools for mass manipulation. They pose a serious and growing threat to national security. Deep fakes are just one example of how bad actors—such as hostile nations, terrorist groups, or even individuals—can take advantage of rapidly evolving AI technology to sow social discord and harm people. As AI technologies become more common and sophisticated, how can you prevent them from becoming weaponized? 
  5. Who at your organization is enforcing AI ethics? When it comes to ethics, it’s better to be proactive than reactive. Until governments get serious about regulating AI, tech companies need to monitor themselves, and those who don’t risk running into trouble in the future. One way to solve this problem is to hire a chief ethics officer who can oversee AI ethics within your organization. 

It will be a long time before ethics and artificial intelligence are completely merged. However, by asking themselves these critical questions now, CIOs can better prepare their organizations for what’s ahead. 

Learn about AI and Ethics

As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Designed to help organizations apply the theory of ethics to the design and business of AI systems, Artificial Intelligence and Ethics in Design is a two-part online course program. It also serves as useful supplemental material in academic settings.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Runyon, Mark. (8 October 2020). Artificial Intelligence (AI) ethics: 5 questions CIOs should ask. The Enterprisers Project.

Adams, R. Dallon. (2 October 2020). AI and ethics: One-third of executives are not aware of potential AI bias. Tech Republic. 

Page, Carly. (1 October 2020). AI Has Resulted In “Ethical Issues” For 90% Of Businesses. Forbes.
