Are AI Ethics Really in the Eye of the Beholder?

When it comes to autonomous weapons, some may draw an ethical line. Others may not.

That’s what’s happening at Clarifai, a New York City start-up specializing in technology that instantly recognizes objects in photos and video. The company is working with the Pentagon, and when employees questioned the ethics of building artificial intelligence (AI) to analyze video captured by drones, the company said the project would save the lives of civilians and soldiers.

But as the company pushed further into military applications and facial recognition services, some employees grew increasingly concerned that their work would end up feeding automated warfare or mass surveillance.

It turns out they were right. Matt Zeiler, Clarifai’s founder and chief executive and a prominent AI researcher, held a companywide meeting earlier this year in which he explained that Clarifai technology would one day contribute to autonomous weapons.

Zeiler argues that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “AI is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Clarifai employees worry that the same technological tools that drive facial recognition will ultimately lead to autonomous weapons, and that flaws in these tools will open a Pandora’s box of problems. “We in the industry know that technology can be compromised. Hackers hack. Bias is unavoidable,” reads an open letter to Zeiler from employees.

After all, building ethical AI is an enormously complex task. Some systems are woefully biased; facial recognition services, for instance, can be significantly less accurate when trying to identify women or people with darker complexions. And when stakeholders realize that ethics are in the eye of the beholder, building ethical AI becomes even more difficult.

Researchers and activists see this as a moment when tech employees can use their power to drive change, as when Google employees protested the company’s work on the same Pentagon project that Clarifai is working on. Their protest led to the tech giant ending its involvement.

Applying the Ethics of AI to Design

Artificial Intelligence and Ethics in Design: Responsible Innovation is a cutting-edge, five-course online training program from IEEE that focuses on integrating AI and autonomous systems within product and systems design. Intended for industry professionals, this program includes the following courses:

  • From Growth to Great
  • The Basis for No Bias
  • Transparency and Accountability for Robots and AI Systems
  • Human Emotion in Devices and Technology
  • Legal and Implementation Issues of Enterprise AI

Connect with an IEEE Content Specialist to learn more about bringing this program to your organization to help it apply the ethics of AI to the business of design.

Resources

Metz, Cade. (1 Mar 2019). Is Ethical A.I. Even Possible? The New York Times.
