Artificial intelligence (AI) is more present in our lives than ever. AI can predict what we want to see as we scroll through social media, and it can help solve global challenges like hunger, environmental change, and pandemics. The technology has countless real-world applications. A McKinsey survey shows that AI adoption followed an upward trajectory in 2021 and continues to do so. According to the survey, “56 percent of all respondents report AI adoption in at least one function.”
However, AI technology is not always beneficial—AI can violate privacy, AI-generated output cannot always be explained, and AI can be biased. When the data feeding an AI system is not representative of the diversity and plurality of our societies, it can produce biased or discriminatory outcomes.
An often-cited example is facial recognition technology. Used to unlock mobile phones and access bank accounts, it is also increasingly employed by law enforcement authorities. Yet facial recognition is far from perfect: systems have well-documented problems accurately identifying women and darker-skinned people. This is not surprising given how AI is developed: only 1 in 10 software developers worldwide are women, and developers come overwhelmingly from Western countries.
Hardcoding Ethics into AI
Humans can be biased, but people possess the ability to recognize when their conclusions may be biased, discriminatory, or unethical. Despite recent debate over the “sentient” qualities of AI programs, they cannot “think” or “feel”; an AI system’s behavior depends entirely on how it is built and trained. Because AI lacks this meta-cognitive ability, it is up to people to override unethical decisions when they arise. Unethical AI is not simply a consequence of programming deficiencies, but of failing to consider how ethical requirements should be incorporated into the learning algorithm during development.
Organizations using AI need to become more proactive and formulate actionable AI ethics policies by thinking about ethics from the start. This approach is already deemed essential in cyber security, where “security by design” development principles drive teams to assess risks and hardcode security from the start. The same mindset should be applied to the development of AI tools so they can be deployed responsibly and without bias. This will be critical as societies and cultures change over time; AI products should always reflect current values.
How to Create an AI Ethics Policy
Addressing AI ethics is not just a moral responsibility, it is also a business imperative. It requires action to build an AI ethics-aware culture. Reid Blackman, CEO of Virtue, recommends instilling actionable ethics into AI systems by following these seven guidelines:
- Bring clarity to AI standards
- Increase awareness among everyone in the organization
- Thoroughly incorporate AI ethics into team culture
- Make sure there are AI experts as part of an AI ethics committee
- Introduce accountability
- Measure everything: set key performance indicators (KPIs) to track whether your organization is meeting its goals for AI standard adoption
- Gain executive sponsorship
Prepare for an AI Future
The AI market is expected to surpass US$1,597 billion by 2030. Organizations and technology professionals should prepare for a changing landscape when it comes to the future of AI.
Get a jumpstart on learning about ethics in artificial intelligence systems. Check out Artificial Intelligence and Ethics in Design, a five-course program from IEEE that provides the background knowledge professionals need to integrate AI and autonomous systems within their companies or offer them to customers and end users.
Contact an IEEE Account Specialist to get organizational access or check it out for yourself on the IEEE Learning Network.
Resources
Bedzow, Ira. (30 June 2022). What It Takes to Create and Implement Ethical Artificial Intelligence. Forbes.
Boston Consulting Group (BCG). (7 July 2022). 87% of Climate and AI Leaders Believe That AI Is Critical in the Fight Against Climate Change. PR Newswire.
Chui, Michael et al. (8 December 2021). The state of AI in 2021. McKinsey.
Henderson, Emily. (10 June 2022). Using artificial intelligence to discover new antivirals against COVID-19 and future pandemics. New Medical.
McKendrick, Joe. (10 June 2022). 7 Steps to More Ethical Artificial Intelligence. Forbes.
Mubarik, Abu. (20 June 2022). This is how former Wall Street trader Sara Menker from Ethiopia is using AI to remove world hunger. Face 2 Face Africa.
Precedence Research. (19 April 2022). Artificial Intelligence Market Size to Surpass Around US$ 1,597.1 Bn By 2030. GlobeNewswire.
Ramos, Gabriela and Koukku-Ronde, Ritva. (22 June 2022). A new global standard for AI ethics. UNESCO.
Smith, Wesley. (26 June 2022). Five Reasons AI Programs Are Not ‘Persons’. Mind Matters News.
Yu, Eileen. (30 June 2022). AI ethics should be hardcoded like security by design. ZDNet.

Artificial intelligence (AI) systems are evolving fast. However, ethical standards that ensure these systems don’t harm the public, such as those that aim to prevent unintentional biases based on the data these systems are trained on, have been slower to evolve. According to a global survey conducted by MIT Sloan Management Review, which polled over 1,000 executives, 82% of managers in organizations with at least US$100 million in annual revenues agreed or strongly agreed that responsible AI (RAI) should be included in their top management agenda. At the same time, only 50% reported that RAI is actually part of their top management’s agenda.
How can organizations that develop or use artificial intelligence ensure RAI is not just an afterthought? A recent panel of global AI experts, organized by MIT Sloan Management Review and global consulting firm BCG, concluded with the following takeaways:
- Leadership needs to understand why RAI is important to the organization’s strategy. Otherwise, RAI may never make it into the agendas of the organization’s major decision makers.
- Determine whether RAI is part of your AI strategy or a part of your wider organizational goals, such as corporate responsibility. Without an understanding of this, leadership may not fully grasp that it should be integrated into their larger agenda.
- Look at RAI as an urgent need that must be integrated now. Otherwise, you may miss valuable opportunities to prevent risk and harm down the line.
What are the Fundamental Principles of AI Ethics?
Understanding the core principles of AI is the first step to developing an effective AI standards framework. Such a framework should also align with an organization’s mission, as well as any regulations the organization may be affected by through its implementation of the AI system. According to TechTarget, the basic principles of ethical AI include:
- Fairness: The AI system does not contain biases and functions equally well for all groups
- Accountability: The AI system has ways to identify who is responsible across different stages of the AI life cycle if something goes wrong and provides ways for humans to supervise and control the system
- Transparency: When the AI system makes a decision, it allows humans to understand why it came to that conclusion, which is essential for building trust
- Safety: The AI system is equipped with effective security controls
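As a sketch of what checking the fairness principle above might look like in practice, the snippet below computes per-group approval rates and a demographic-parity ratio. All names, decisions, and the 0.8 rule of thumb are illustrative assumptions, not from the article:

```python
# Illustrative fairness audit: demographic parity check.
# Each record pairs a group label with a model decision (1 = approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Return the fraction of positive decisions per group."""
    totals, positives = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# Demographic-parity ratio: worst-off group rate / best-off group rate.
# A common (but context-dependent) rule of thumb flags ratios below 0.8.
parity_ratio = min(rates.values()) / max(rates.values())
print(rates)        # per-group approval rates: 0.75 vs 0.25
print(parity_ratio) # 0.333..., well below 0.8, so this system would be flagged
```

A real audit would use held-out evaluation data and additional metrics (e.g., per-group error rates), but the disaggregated comparison shown here is the core idea.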
What does incorporating these principles into an AI system look like in practice? During an interview with Analytics India Magazine, Layak Singh, CEO of Artivatic AI, an insurance platform, said the company reduces biases in AI by defining the business problems it wants to solve while considering end users, then configuring data collection methods to be able to incorporate diverse perspectives.
“We also ensure that we clearly understand our training data, as this is where most biases are introduced and can be avoided,” Singh said. “With that aim, we also ensure an ML [machine learning] team that’s assorted as they ask dissimilar queries and thus interact with the AI models in various ways. This leads to identifying errors before the model is underway in production and is the best manner to reduce bias both at the beginning and while retraining models.”
Singh’s company also places a major focus on feedback, keeping channels such as forum discussions open in order to run continual audits and upgrades.
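Singh’s advice to “clearly understand our training data, as this is where most biases are introduced” can start with a simple representation check. The sketch below flags groups that fall below a share threshold; the group names and the 10% cutoff are assumptions for illustration, not Artivatic’s method:

```python
# Illustrative training-data audit: flag under-represented groups
# before training, so a skewed dataset is caught early.
from collections import Counter

training_labels = ["urban"] * 90 + ["rural"] * 8 + ["remote"] * 2

def underrepresented(labels, min_share=0.10):
    """Return groups whose share of the data falls below min_share."""
    counts = Counter(labels)
    total = len(labels)
    return [g for g, n in counts.items() if n / total < min_share]

print(underrepresented(training_labels))  # ['rural', 'remote']
```

In practice the threshold would depend on the domain and the groups being protected, but even this basic check surfaces the kind of skew Singh describes before it reaches a deployed model.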
Ensuring AI systems are ethical is becoming essential to building trust with clients and customers. Don’t wait until that trust is already broken: start developing an ethical AI standards framework today.
Incorporating AI Standards at Your Organization
An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Krishna, Sri. (20 April 2022). Talking Ethical AI with Artivatic’s Layak Singh. Analytics India Magazine.
Kiron, David, Renieris, Elizabeth, and Mills, Steven. (19 April 2022). Why Top Management Should Focus on Responsible AI. MIT Sloan Management Review.
Kompella, Kashyap. (1 April 2022). How AI ethics is the cornerstone of governance. TechTarget.

Artificial intelligence systems are quickly proliferating across a number of industrial sectors. However, they carry big risks that worry leaders, according to a recent report from KPMG, a global network of professional firms providing audit, tax, and advisory services.
According to the KPMG report titled “Thriving in an AI World,” a large number of leaders in industrial manufacturing, financial services, technology, retail, life sciences, health, and government say artificial intelligence (AI) is “moderately to fully functional” in their organizations, with the largest increase in one year coming from financial services, tech, and retail. Furthermore, COVID-19 has expedited the adoption of artificial intelligence, these leaders noted.
While AI delivers huge benefits to organizations, roughly half of surveyed leaders in industrial manufacturing, retail, and tech reported that they are concerned AI is spreading too fast without regard for cybersecurity, strategy, governance, and ethics.
“Many business leaders do not have a view into what their organizations are doing to control and govern A.I. and may fear risks are developing,” Traci Gusher, the principal A.I. lead for KPMG, told Fortune.
Leaders in the retail industry are particularly worried about the lack of standards around AI: 87% said the government should establish ground rules for how to use the technology.
However, AI governance and ethics are likely to evolve slowly across these industries. According to the AI Index, this is due to insufficient consensus on what such standards should include or how progress should be measured.
Six Frameworks for Building Ethical AI Systems
While there is little general consensus around how to integrate AI systems safely into businesses, researchers are examining ways to design ethical AI systems. When considering how to design AI, it’s important to understand how technology impacts human beings and to design AI systems around the needs of users, a process known as “technology anthropology.”
“Technology anthropology consists of two things—understanding human needs and converting them into a technological product and studying the macro of how these technological interventions change our everyday life,” AI Ethics researcher and technology anthropologist Aparna Ashok told Analytics India Magazine. “My work as a tech anthropologist revolves around the study of the interaction between people and digital solutions, the changing nature of technology and its impacts on society.”
According to Ashok, here are six human frameworks organizations should consider to ensure their AI systems are ethical.
Well-being:
Design systems that foster connection and competency, and offer users regular updates on the system’s goals.
Inclusion:
Analyze and understand your users’ needs, select diverse groups when training algorithms, and hire team members from diverse backgrounds.
Privacy:
Ensure the system’s data is collected, examined, processed, and shared in a way that respects the ownership of the user. Examples include giving users ownership over their data, as well as informing them how the system accesses their data. You should also consider getting user permission to change access when necessary.
Security:
Design systems that protect users’ emotional, psychological, digital, intellectual, and physical safety. Examples include storing personal data in separate databases, as well as creating alert systems that let users know if their data has been breached.
Accountability:
Create systems that are transparent about how they make decisions, that consider biases, and that also allow users to challenge the AI’s decision making.
Trust:
Build a reliable platform that encourages genuine engagement. Examples include ensuring that content is verified and that users can access the organization’s principles.

Although these frameworks may seem simple, they are a good guide for making sure your AI systems are designed with the user in mind. They can also help minimize harm and risk.
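The accountability framework above, with systems that are transparent about decisions and let users challenge them, is often implemented as a decision log. A minimal sketch, where the field names, scoring, and threshold are illustrative assumptions rather than a prescribed design:

```python
# Illustrative accountability pattern: record each automated decision
# with its inputs and parameters so humans can review outcomes and
# users can appeal them.
import time

audit_log = []

def decide_and_log(user_id, features, score, threshold=0.5):
    """Make a threshold decision and append a reviewable record of it."""
    decision = "approve" if score >= threshold else "deny"
    audit_log.append({
        "timestamp": time.time(),
        "user_id": user_id,
        "features": features,    # the inputs the model saw
        "score": score,
        "threshold": threshold,
        "decision": decision,
        "appealable": True,      # users can challenge the outcome
    })
    return decision

print(decide_and_log("u42", {"income": 52000}, 0.61))  # approve
print(len(audit_log))  # 1
```

Because every record carries the inputs and the threshold used, a reviewer can reconstruct why the system decided as it did, which is the transparency property the framework calls for.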
Learn about AI and Ethics
As AI continues to grow, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Designed to help organizations apply the theory of ethics to the design and business of AI systems, Artificial Intelligence and Ethics in Design is a two-part online course program. It also serves as useful supplemental material in academic settings.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Kahn, Jeremy. (9 March 2021). A.I. is getting more powerful, faster, and cheaper—and that’s starting to freak executives out. Fortune.
Goled, Shraddha. (4 March 2021). How This AI Ethics Researcher Combines Anthropology And Technology To Build Human-First Solutions. Analytics India Magazine.
Ashok, Aparna. (29 December 2021). Ethical Principles for Humane Technology. blog.prototypr.io

Artificial intelligence (AI) applications are rapidly expanding, and so are the various threats they pose. From algorithms with embedded racial biases to “black box” systems that give humans no insight into an AI’s decision making process, it’s becoming increasingly clear that AI developers need to take steps to mitigate the risks.
AI “Ethics-As-A-Service”
Google plans to roll out a new “ethics-as-a-service” initiative to customers. Much as its Google Cloud service hosts client data, the new offering would help clients spot and fix ethical issues in their AI systems.
The service, which may launch as soon as the end of 2020, is expected to consist of training courses on how to detect ethical issues in AI systems, as well as how to develop and deploy guidelines around AI ethics. The company may eventually provide consulting services such as audits and reviews; for example, it might inspect a financial client’s AI-enabled project to determine whether a lending algorithm is biased against minority groups.
However, it’s important to note that it’s not yet determined whether some of Google’s ethics services will be free. According to Brian Green, Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University, providing these services for a fee may pose ethical challenges of their own. “They’re legally compelled to make money and while ethics can be compatible with that, it might also cause some decisions not to go in the most ethical direction,” Green told Wired.
Another challenge will be knowing where to draw the line in determining what’s ethical, acknowledges Tracy Frey, an expert on AI strategy in Google’s cloud division. “It is very important to us that we don’t sound like the moral police,” she told Wired.
“Embedded Ethics”
In order for AI systems and enabled devices to be truly ethical, some experts argue that ethics must be considered from the very beginning of the design process, in what’s known as “embedded ethics.” This would require AI developers to involve ethicists whose job is to create “ethical awareness” throughout all stages of a design, according to a recent paper from a team of researchers at the Technical University of Munich (TUM), covered in Science Daily.
Meanwhile, R2 Data Labs, the data innovation arm of Rolls-Royce, has created an AI ethics framework that takes a similar “ethics-from-the-ground-up” approach. The company says organizations will be able to adopt this framework, which is intended to help build trust for AI systems among the public. The soon-to-be-published findings include:
- An ethical decision-making process: This method will help developers make sure ethics is integrated in its AI decision making.
- Five-layer check system that ensures AI algorithms are trustworthy: This step-by-step process prevents bias from developing in AI systems. It also provides continuous monitoring of results.
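Rolls-Royce has not published the details of its check system, but the “continuous monitoring of results” step generally means tracking model quality in production and alerting when it degrades. A minimal sketch under that assumption (the window size and accuracy floor are illustrative, not from the framework):

```python
# Illustrative continuous-monitoring check: alert when rolling
# accuracy over recent predictions drops below a floor.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=100, floor=0.90):
        self.window = deque(maxlen=window)  # most recent outcomes only
        self.floor = floor

    def record(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if an alert fires."""
        self.window.append(correct)
        accuracy = sum(self.window) / len(self.window)
        # Only alert once the window is full, to avoid noisy early readings.
        return len(self.window) == self.window.maxlen and accuracy < self.floor

monitor = AccuracyMonitor(window=10, floor=0.8)
alerts = [monitor.record(ok) for ok in [True] * 7 + [False] * 3]
print(alerts[-1])  # True: accuracy fell to 0.7 over the full window
```

Production systems would typically monitor several signals at once (error rates per group, input drift, latency), but the rolling-window alert is the basic building block.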
The findings, which Rolls-Royce plans to publish sometime this year, are based on the engineering titan’s own experience with AI applications. The company says the findings have been peer reviewed by experts across a variety of sectors, including a number of large technology companies, as well as government, pharmaceutical, automotive, and academic organizations.
Understanding AI and Ethics
As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Designed to help organizations apply the theory of ethics to the design and business of AI systems, Artificial Intelligence and Ethics in Design is a two-part online course program. It also serves as useful supplemental material in academic settings.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Johnson, Robin. (3 September 2020). Rolls-Royce claims breakthroughs in artificial intelligence ethics and trustworthiness. BusinessLive.
Technical University of Munich (TUM). (1 September 2020). An embedded ethics approach for AI development. Science Daily.
Simonite, Tom. (28 August 2020). Google Offers to Help Others With the Tricky Ethics of AI. Wired.

Artificial Intelligence (AI) is changing the field of education—particularly continuing education. Learning does not stop following graduation from a formal education program. Many people spend their lives learning new skills or improving on existing ones. Lifelong learning can be accomplished through in-person classes, online courses, or a hybrid of both.
Because people can learn in a variety of ways, AI can take learning to the next level by customizing the experience to the individual. By tracking facial movements, microexpressions, and other behaviors, AI software can try to identify ways to keep the user engaged.
Benefits of AI in Education
The main benefits of leveraging AI in education include:
- The ability to personalize learning: Everyone is different. AI technology can help people learn in the best format for them. As a result, this can improve learning speeds and success rates. It can also provide teachers with valuable data on student performance. This information allows teachers to see when students are struggling, so they can intervene early and prevent students from falling behind.
- More time for teachers: A teacher’s work does not end in the classroom. Teachers create tests, check exams and homework, and much more. By using an AI assistant for administrative tasks, a teacher can save time. That time can then be used to improve upon lesson plans and provide guidance to students.
- Remaining current with technology: In order to mitigate the learning curve, employers can make sure their staff is knowledgeable on using AI. This will also make employees feel like they are keeping up with technology rather than being replaced by it.
Drawbacks/Ethical AI Education
While there is no official definition for ethical AI, it is generally interpreted as using AI for the general good of the public.
“We’re seeing AI tools that can take lots of data around career pathways and make recommendations about what students can study. So we’ve got AI informing decisions made by young people,” says Toby Baker of Nesta.
As the use of AI continues to grow and expand into various industries, there will be an increasing number of people who will have to interact with AI without understanding how it arrives at its conclusions and predictions. Baker believes that there needs to be better communication between those who develop the technology and the end-users.
It’s also vital that the users understand how their data is being managed during the machine learning process. It could be concerning to know your facial movements are being monitored without knowing the purpose behind the tracking.
Using Artificial Intelligence Ethically
With growing questions around AI and ethics, there has never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations.
Artificial Intelligence and Ethics in Design, a two-part online course program, helps organizations apply the theory of ethics to the design and business of AI systems. It also serves as useful supplemental material in academic settings.
Interested in getting the course for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Salak, Bill. (8 October 2019). 3 ways AI is changing education right now (and in the future). eSchoolNews.
Courtois, Jean-Philippe. (7 October 2019). How AI is transforming education and skills development. Microsoft.
Luckin, Rose; Seldon, Anthony; Lakhani, Priya. (30 September 2019). The benefits of AI and machine learning. The Guardian.
Frank, Aaron. (24 September 2019). New AI Systems Are Here to Personalize Learning. Singularity Hub.
Freeze, James. (27 September 2019). An AI Education: Overcoming Fear Of The Innovation Cycle. Forbes.