When it comes to autonomous weapons, some may draw an ethical line. Others may not.
That’s what’s happening at Clarifai, a New York City start-up whose technology analyzes images and videos captured by drones. When the Pentagon began using that technology, employees worried whether the product was ethical, uneasy about artificial intelligence (AI) analyzing the data the drones captured. The company assured them that the technology would help save the lives of civilians and soldiers.
As Clarifai moved deeper into military applications and facial recognition, employees grew concerned that their work would not help save lives but would instead be used for weapons of war and mass surveillance.
Earlier this year, Matt Zeiler, Clarifai’s founder and chief executive, said that the technology the company is developing will one day be used in autonomous weapons.
Zeiler believes that such weapons, though built for war, can save lives through greater accuracy. “AI is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said.
The source of Clarifai employees’ apprehension is that facial recognition in autonomous weapons will create a host of problems. “We in the industry know that technology can be compromised. Hackers hack. Bias is unavoidable,” reads an open letter to Zeiler from employees.
After all, building ethical AI is an enormously complex task. Some current facial recognition services have been shown to be biased, struggling to accurately identify women and people with darker skin. Stakeholder buy-in can also be difficult, since what counts as ethical AI varies from person to person, depending on what each individual believes to be ethical.
Researchers and activists see this as an opportunity for employees to demand change, as when Google employees protested the company’s work on the same Pentagon project Clarifai is involved in. Their protest led the tech giant to end its involvement.
Applying the Ethics of AI to Design
Artificial Intelligence and Ethics in Design: Responsible Innovation is a cutting-edge, five-course online training program from IEEE that focuses on integrating AI and autonomous systems within product and systems design. Intended for industry professionals, this program includes the following courses:
- From Growth to Great
- The Basis for No Bias
- Transparency and Accountability for Robots and AI Systems
- Human Emotion in Devices and Technology
- Legal and Implementation Issues of Enterprise AI
Connect with an IEEE Content Specialist to learn more about bringing this program to your organization to help it apply the ethics of AI to the business of design.
Metz, Cade. (1 Mar 2019). “Is Ethical A.I. Even Possible?” The New York Times.