As They Rush To Embrace Artificial Intelligence, Industry Leaders Worry About Ethics


Artificial intelligence systems are quickly proliferating across a number of industrial sectors. However, they carry significant risks that worry industry leaders, according to a recent report from KPMG, a global network of professional firms providing audit, tax, and advisory services.

According to the KPMG report titled “Thriving in an AI World,” a large number of leaders in industrial manufacturing, financial services, technology, retail, life sciences, health, and government say artificial intelligence (AI) is “moderately to fully functional” in their organizations, with the largest increase in one year coming from financial services, tech, and retail. Furthermore, COVID-19 has expedited the adoption of artificial intelligence, these leaders noted.

While AI delivers huge benefits to organizations, roughly half of surveyed leaders in industrial manufacturing, retail, and tech reported that they are concerned AI is spreading too fast without regard for cybersecurity, strategy, governance, and ethics. 

“Many business leaders do not have a view into what their organizations are doing to control and govern A.I. and may fear risks are developing,” Traci Gusher, the principal A.I. lead for KPMG, told Fortune.

Leaders in the retail industry are particularly worried about the lack of standards around AI: 87% of those surveyed said the government should establish ground rules for how the technology is used.

However, AI governance and ethics are likely to evolve slowly across these industries. According to the AI Index, this is due to insufficient consensus on what such standards should include or how progress toward them should be measured.

Six Frameworks for Building Ethical AI Systems

While there is little general consensus around how to integrate AI systems safely into businesses, researchers are examining ways to design ethical AI systems. When considering how to design AI, it’s important to understand how technology impacts human beings and to design AI systems around the needs of users, a process known as “technology anthropology.”

“Technology anthropology consists of two things—understanding human needs and converting them into a technological product and studying the macro of how these technological interventions change our everyday life,” AI Ethics researcher and technology anthropologist Aparna Ashok told Analytics India Magazine. “My work as a tech anthropologist revolves around the study of the interaction between people and digital solutions, the changing nature of technology and its impacts on society.”

According to Ashok, these are six human-centered frameworks organizations should consider to ensure their AI systems are ethical:

1. Design systems that foster connection and competency, and offer regular updates to users on the system's goals.

2. Analyze and understand your users' needs, select diverse groups when training algorithms, and hire team members from diverse backgrounds.

3. Ensure the system's data is collected, examined, processed, and shared in a way that respects the ownership of the user. Examples include giving users ownership over their data and informing them how the system accesses it. You should also consider obtaining user permission to change access when necessary.

4. Design systems that protect users' emotional, psychological, digital, intellectual, and physical safety. Examples include storing personal data in separate databases and creating alert systems that notify users if their data has been breached.

5. Create systems that are transparent about how they make decisions, that account for biases, and that allow users to challenge the AI's decision making.

6. Build a reliable platform that encourages genuine engagement. Examples include ensuring that content is verified and that users can access the organization's principles.

Although these frameworks may seem simple, they are a good guide for making sure your AI systems are designed with the user in mind. They can also help minimize harm and risk.

Learn about AI and Ethics

As AI continues to grow, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Designed to help organizations apply the theory of ethics to the design and business of AI systems, Artificial Intelligence and Ethics in Design is a two-part online course program. It also serves as useful supplemental material in academic settings.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!


References

Kahn, Jeremy. (9 March 2021). A.I. is getting more powerful, faster, and cheaper—and that’s starting to freak executives out. Fortune.

Goled, Shraddha. (4 March 2021). How This AI Ethics Researcher Combines Anthropology And Technology To Build Human-First Solutions. Analytics India Magazine.

Ashok, Aparna. (29 December 2021). Ethical Principles for Humane Technology.

