Artificial intelligence (AI) continues to dominate headlines, thanks to its potential to revolutionize countless industries. From manufacturing and healthcare to banking and retail, AI is streamlining automatable and administrative tasks across the board.
Beyond efficiency, AI plays a critical role in high-impact applications. It helps detect cybersecurity threats, prevent retail fraud, and improve autonomous vehicle navigation by recognizing driver patterns and predicting accidents. Additionally, AI enhances customer experiences by personalizing marketing and service interactions.
In essence, machines are now replicating, and even expanding, the capabilities of the human mind. As a result, AI is reshaping the future of business.
A New Industrial Era
Because of its transformative power, the World Economic Forum has dubbed AI part of the “fourth industrial revolution.” This new era merges the physical, digital, and biological worlds, following earlier revolutions driven by steam, electricity, and computing.
Forbes contributor Bernard Marr calls it the “Intelligence Revolution,” underscoring AI’s sweeping impact on society and industry.
AI: A Double-Edged Sword
Although the term is often used as a catch-all, the AI in business use today largely falls into two categories: traditional (sometimes called "narrow") AI and generative artificial intelligence (generative AI). Artificial general intelligence (AGI), the still-hypothetical ability of machines to understand, learn, and perform any intellectual task as a human would, remains a research goal rather than a deployed technology.
Traditional AI learns from demonstrated patterns, such as customer behavior, to perform specific tasks like prediction and recommendation. Examples include:
- personalized product recommendations provided by Amazon
- customized workouts and health goals suggested by apps (such as the MyFitnessPal app formerly owned by sports apparel and gear provider Under Armour) that base their recommendations on collected health data for physical activity, sleep, and diet
- smart assistants like Alexa and Siri that can control home technology, dial the telephone upon request, and more
Generative AI refers to a form of artificial intelligence that learns the patterns and structure of input data and responds by generating text, images, or other media with similar characteristics. A prominent example is the much-publicized ChatGPT, a chatbot introduced in November 2022 by OpenAI that can produce output of a desired length, format, style, level of detail, and language on almost any topic.
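At its core, generative AI learns statistical patterns from input data and then samples new output with similar characteristics. The toy sketch below illustrates that core idea with a character-level Markov model; it is a deliberate simplification for intuition only, not how large systems like ChatGPT actually work:

```python
import random

def train_markov(text, order=2):
    """Map each character n-gram to the characters observed after it."""
    model = {}
    for i in range(len(text) - order):
        gram = text[i:i + order]
        model.setdefault(gram, []).append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text that follows the learned n-gram statistics."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = seed
    for _ in range(length):
        gram = out[-len(seed):]
        choices = model.get(gram)
        if not choices:  # unseen context: stop early
            break
        out += rng.choice(choices)
    return out

sample = "the cat sat on the mat. the cat ate the rat."
model = train_markov(sample, order=2)
print(generate(model, "th", length=30))
```

Because the model can only emit characters it has seen follow each two-character context, everything it generates shares the statistical "texture" of its training text, which is the essence of the generative approach.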
Experts agree that AI can help businesses boost productivity by leaps and bounds. Research firm Gartner, for example, estimates that AI can save companies around the world over 6 billion employee-hours annually. On an economic level, a study by global management consulting firm McKinsey & Company predicts that AI-enabled analytics could add US$13 trillion to global GDP by 2030.
At the same time, however, AI also raises its share of issues and ethical concerns. Among them, generative AI can lend itself to the alteration of text, images, and video in the form of inaccurate, misleading, manipulative, and/or potentially dangerous “deep fake” or fraudulent content. It also raises questions about ownership rights of created content and its eligibility for copyright protection.
Helping Industry Navigate the Complex Field of AI
Recognizing both the unprecedented importance and the complexity of artificial intelligence, IEEE offers several course programs in AI and machine learning designed to help learners navigate these exciting, complicated, and rapidly evolving technologies.
- Machine Learning: Predictive Analysis for Business Decisions— Ideal for computer engineers, business and industry leaders, technical managers, data scientists, and data engineers, this five-course program provides an overview of the types of machine learning fueling businesses today, explains how these forms of AI use software, algorithms, and models in their design, and shows how attendees can deploy scalable machine learning into their own processes to achieve their business goals.
- Artificial Intelligence and Ethics in Design— Ideal for engineers of many specialties (including data, AI/ML, design, computer, security, electrical, software, research, robotics, and computer vision engineers) as well as UX designers, engineering managers, technical leaders, functional consultants, and business users, this five-course program covers topics such as law, compliance, and ethics in artificial intelligence; ethical challenges in data protection and safety; and responsible design in the algorithmic era.
- Artificial Intelligence and Ethics in Design: Responsible Innovation— This five-course program is designed to help learners understand the ethics specifications that must be met when designing AI systems for European (and other) markets. Topics include causes of bias, transparency and accountability for robots and AI systems, and legal and implementation issues of enterprise AI.
To discover more IEEE courses about artificial intelligence, browse the IEEE Learning Network catalog.
Resources:
Forbes Technology Council. (13 January 2022). 16 Industries and Functions That Will Benefit from AI In 2022 and Beyond. Forbes.
Fourth Industrial Revolution. World Economic Forum.
Marr, Bernard. (10 August 2020). What Is the Artificial Intelligence Revolution and Why Does It Matter To Your Business? Forbes.
Schroer, Alyssa. (19 May 2023). What Is Artificial Intelligence? Built In.
Mohan, Malethy. (22 March 2023). The Difference Between Generative AI and Traditional AI. LinkedIn.
Kanade, Vijay. What Is General Artificial Intelligence (AI)? Definition, Challenges, and Trends. Spiceworks.com.
Rajagopalan, Ramesh. 10 Examples of Artificial Intelligence in Business. Online Degrees.
Elliott, Timo. (9 March 2020). The Power of Artificial Intelligence Vs. the Power Of Human Intelligence. Forbes.
Dilmegani, Cem. (22 April 2023). Generative AI Ethics: Top 6 Concerns. AIMultiple.
Artificial intelligence (AI) is more present in our lives than ever. With varied uses, AI can predict what we want to see as we scroll through social media, as well as help to solve global challenges like hunger, environmental changes, and pandemics. This technology has countless applications in the real world. A McKinsey survey illustrates that AI adoption followed an upward trajectory in the year 2021 and continues to do so. According to the survey, “56 percent of all respondents report AI adoption in at least one function.”
However, AI technology is not always beneficial: it can violate privacy, its output cannot always be explained, and it can be biased. When the data feeding an AI system is not representative of the diversity and plurality of our societies, the system can produce biased or discriminatory outcomes.
An often-cited example is facial recognition technology. Used to access mobile phones and bank accounts, it is also increasingly employed by law enforcement authorities. Yet facial recognition is far from perfected, with persistent problems accurately identifying women and darker-skinned people. This is not surprising when you look at who develops AI: only about 1 in 10 software developers worldwide is a woman, and developers come overwhelmingly from Western countries.
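One concrete way to surface this kind of disparity is to report a model's accuracy per demographic group rather than as a single overall number. A minimal sketch of that audit pattern in Python, using invented records (the group names and figures are illustrative only):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Per-group accuracy from (group, predicted, actual) records."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Invented audit records: 1 = correct identification, 0 = miss.
records = (
    [("group_a", 1, 1)] * 95 + [("group_a", 0, 1)] * 5 +
    [("group_b", 1, 1)] * 70 + [("group_b", 0, 1)] * 30
)
print(accuracy_by_group(records))
```

A single blended accuracy figure would hide the gap that this breakdown makes obvious.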
Hardcoding Ethics into AI
Humans can be biased, but people possess the ability to recognize when their conclusions may be biased, discriminatory, or unethical. AI programs do not: whatever the recent debate over their supposedly "sentient" qualities, they cannot "think" or "feel," and their performance depends entirely on how they are built and trained. Because AI lacks this meta-cognitive ability, it is up to people to override unethical decisions when they arise. Unethical AI is not simply a consequence of programming deficiencies; more often, it results from failing to consider how ethical requirements should be incorporated into the learning algorithm during development.
Organizations using AI need to become more proactive and formulate actionable AI ethics policies by thinking about ethics from the start. This approach is already deemed essential in cyber security, where "security by design" development principles drive teams to assess risks and hardcode security from the outset. The same mindset should be applied to the development of AI tools so they can be deployed responsibly and without bias. This will remain critical as societies and cultures change over time, because AI products should always reflect current values.
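In code, "ethics by design" can look like a release gate: a check that runs before deployment and fails the build when a fairness requirement is not met, just as security tests block insecure releases. A minimal sketch follows (the per-group accuracy metric and the 0.05 threshold are illustrative assumptions, not a standard):

```python
class EthicsGateError(Exception):
    """Raised when a model fails a pre-deployment ethics check."""

def fairness_gate(group_accuracies, max_gap=0.05):
    """Block deployment if accuracy between any two groups differs too much."""
    gap = max(group_accuracies.values()) - min(group_accuracies.values())
    if gap > max_gap:
        raise EthicsGateError(
            f"accuracy gap {gap:.2f} exceeds allowed {max_gap:.2f}")
    return True  # safe to deploy

fairness_gate({"group_a": 0.91, "group_b": 0.89})    # passes the gate
# fairness_gate({"group_a": 0.95, "group_b": 0.70})  # would raise EthicsGateError
```

Wired into a CI/CD pipeline, a check like this makes the ethics requirement a hard constraint from the start rather than an after-the-fact audit.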
How to Create an AI Ethics Policy
Implementing AI ethics is not just a moral responsibility; it is also a business imperative, and it requires deliberate action to build an AI ethics-aware culture. Reid Blackman, CEO of Virtue, recommends instilling actionable ethics into AI systems by following these seven guidelines:
- Bring clarity to AI standards
- Increase awareness among everyone in the organization
- Thoroughly incorporate AI ethics into team culture
- Make sure there are AI experts as part of an AI ethics committee
- Introduce accountability
- Measure everything: set key performance indicators (KPIs) to track whether your organization is meeting its goals for AI standards adoption
- Gain executive sponsorship
Prepare for an AI Future
The AI market is expected to surpass US$1,597 billion (roughly US$1.6 trillion) by 2030. Organizations and technology professionals alike should prepare for a changing landscape when it comes to the future of AI.
Get a jumpstart on learning about ethics in artificial intelligence systems with Artificial Intelligence and Ethics in Design, a five-course program from IEEE that provides the background knowledge needed to integrate AI and autonomous systems within your company or offer them to your customers and end users.
Contact an IEEE Account Specialist to get organizational access or check it out for yourself on the IEEE Learning Network.
Resources
Bedzow, Ira. (30 June 2022). What It Takes to Create and Implement Ethical Artificial Intelligence. Forbes.
Boston Consulting Group (BCG). (7 July 2022). 87% of Climate and AI Leaders Believe That AI Is Critical in the Fight Against Climate Change. PR Newswire.
Chui, Michael et al. (8 December 2021). The state of AI in 2021. McKinsey.
Henderson, Emily. (10 June 2022). Using artificial intelligence to discover new antivirals against COVID-19 and future pandemics. New Medical.
McKendrick, Joe. (10 June 2022). 7 Steps to More Ethical Artificial Intelligence. Forbes.
Mubarik, Abu. (20 June 2022). This is how former Wall Street trader Sara Menker from Ethiopia is using AI to remove world hunger. Face 2 Face Africa.
Precedence Research. (19 April 2022). Artificial Intelligence Market Size to Surpass Around US$ 1,597.1 Bn By 2030. GlobeNewswire.
Ramos, Gabriela and Koukku-Ronde, Ritva. (22 June 2022). A new global standard for AI ethics. UNESCO.
Smith, Wesley. (26 June 2022). Five Reasons AI Programs Are Not ‘Persons’. Mind Matters News.
Yu, Eileen. (30 June 2022). AI ethics should be hardcoded like security by design. ZDNet.
Artificial intelligence (AI) systems are evolving fast. However, the ethical standards that ensure these systems don't harm the public, such as those that aim to prevent unintentional biases rooted in the data the systems are trained on, have been slower to evolve. According to a global survey conducted by MIT Sloan Management Review, which polled over 1,000 executives, 82% of managers in organizations with at least US$100 million in annual revenue agreed or strongly agreed that responsible AI (RAI) should be part of their top management agenda. Yet only 50% reported that RAI actually is part of their top management's agenda.
How can organizations that develop or use artificial intelligence ensure RAI is not just an afterthought? A recent panel of global AI experts, organized by MIT Sloan Management Review and global consulting firm BCG, concluded with the following takeaways:
- Leadership needs to understand why RAI is important to the organization’s strategy. Otherwise, RAI may never make it into the agendas of the organization’s major decision makers.
- Determine whether RAI is part of your AI strategy or a part of your wider organizational goals, such as corporate responsibility. Without an understanding of this, leadership may not fully grasp that it should be integrated into their larger agenda.
- Look at RAI as an urgent need that must be integrated now. Otherwise, you may miss valuable opportunities to prevent risk and harm down the line.
What Are the Fundamental Principles of AI Ethics?
Understanding the core principles of AI ethics is the first step toward developing an effective AI standards framework. Such a framework should align both with the organization's mission and with any regulations that apply to its AI systems. According to TechTarget, the basic principles of ethical AI include:
- Fairness: The AI system does not contain biases and functions equally well for all groups
- Accountability: The AI system has ways to identify who is responsible across different stages of the AI life cycle if something goes wrong. It also provides ways for humans to supervise and control the system
- Transparency: When the AI system makes a decision, it allows humans to understand why it came to that conclusion. This is essential for building trust
- Safety: The AI system is equipped with effective security controls
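The transparency principle in particular lends itself to a concrete illustration. For a simple linear scoring model, a human-readable "why" can be produced by listing each feature's contribution to the decision. The sketch below is one common pattern for this; the weights, feature names, and threshold are invented for illustration:

```python
def explain_decision(weights, features, threshold=0.5):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    return decision, score, contributions

# Hypothetical loan-scoring model and applicant.
weights = {"income": 0.4, "debt": -0.3, "years_employed": 0.2}
applicant = {"income": 1.0, "debt": 0.5, "years_employed": 2.0}

decision, score, why = explain_decision(weights, applicant)
print(decision, why)  # each contribution shows what drove the outcome
```

Surfacing the per-feature contributions lets a human reviewer see exactly why the system reached its conclusion, which is the trust-building behavior the transparency principle asks for.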
Incorporating These Principles into AI Systems
During an interview with Analytics India Magazine, Layak Singh, CEO of Artivatic AI, an insurance platform, said the company reduces biases in AI by defining the business problems it wants to solve while considering end users. They then configure data collection methods to be able to incorporate diverse perspectives.
“We also ensure that we clearly understand our training data, as this is where most biases are introduced and can be avoided,” Singh said. “With that aim, we also ensure an ML [machine learning] team that’s assorted as they ask dissimilar queries and thus interact with the AI models in various ways. This leads to identifying errors before the model is underway in production. It is the best manner to reduce bias both at the beginning and while retraining models.”
Additionally, Singh's company places a major focus on feedback, keeping channels such as forum discussions open in order to run continual audits and upgrades.
Ensuring AI systems are ethical is becoming essential to building trust with clients and customers. Don't wait until that trust is broken; start developing an ethical AI standards framework today.
Incorporating AI Standards at Your Organization
An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Krishna, Sri. (20 April 2022). Talking Ethical AI with Artivatic's Layak Singh. Analytics India Magazine.
Kiron, David, Renieris, Elizabeth, and Mills, Steven. (19 April 2022). Why Top Management Should Focus on Responsible AI. MIT Sloan Management Review.
Kompella, Kashyap. (1 April 2022). How AI ethics is the cornerstone of governance. TechTarget.

A 2019 survey from Gartner found that 37% of businesses and organizations employ artificial intelligence (AI), DataProt reported. However, few organizations are taking steps to mitigate the risk associated with AI systems, such as their propensity for bias and privacy infringements. A 2021 PwC research report found that just 20% of enterprises had instituted an AI ethics framework. Meanwhile, only 35% intended to enhance their AI governance and processes. With governments increasingly moving towards passing AI regulations, the timeframe for organizations to develop ethical AI standards is getting shorter.
During an interview with Analytics India Magazine, Satyakam Mohanty, Chief Product Officer at Fosfor by L&T Infotech, a global technology consulting and digital solutions company, said responsible AI is the only way for organizations to reduce potential risks associated with the technology.
“The great AI debate opens various facets of ethics, but without a common agreement and agreed standard, its impact and repercussions on the way organizations operate is not quantifiable,” Mohanty told the magazine. “Fairness and explainability can be managed and scaled by introducing data bias mitigation practices and algorithmic bias mitigation processes. Additionally, ensuring higher standard explainability frameworks into the implementations and decision-making process helps. By utilizing ethics as a key decision-making tool, AI-driven companies save time and money in the long run. They do this by building robust and innovative solutions from the start.”
How to Develop an AI Standards Framework
How can your organization begin building a successful AI standards framework? Writing in Harvard Business Review, AI ethics experts Reid Blackman and Beena Ammanath recommend that organizations start by putting together a team of senior-level experts. This team should encompass, at minimum, technologists, legal/compliance experts, ethicists, and business leaders who understand what the organization needs to achieve in terms of ethical AI.
Once you have a team in place, they recommend taking these steps:
- First, identify your organization's AI ethical standard:
What is the minimum ethical standard your organization is willing to meet in terms of AI? If your AI system is discriminatory toward a certain group, but far less so than the traditional human-run process it replaces, will your organization consider that an acceptable benchmark? Autonomous vehicle manufacturers face a similar dilemma: if autonomous vehicles occasionally kill passengers and pedestrians, but at a lower rate than human-driven vehicles, should those vehicles be considered safe? Although these are difficult questions to grapple with, asking them will help your organization set the frameworks and guidelines that ensure ethical product development.
- Determine the "gaps" between where your organization is today and where your standards require it to be:
While there may be plenty of technical solutions to your AI ethics dilemmas, none alone is likely to reduce risk enough to safeguard your organization. Your AI ethics team will therefore need to ask: What are its skills and knowledge limitations? What risks is it trying to reduce? Where can software and quantitative analysis help, and where can they not? What must be handled through qualitative assessment, and how mature does the technology need to be to meet ethics expectations?
- Gain insight into what is behind the bias in your AI, then strategize solutions:
While biased AI systems generally reflect biased training data and/or societal bias, the real picture is more complex. Understanding the specific sources of discriminatory outputs, as well as potential biases that have not yet surfaced, will help you choose the best strategy for reducing bias.
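The benchmarking question in the first step, whether a system that is merely less discriminatory than the human process it replaces is acceptable, can be made measurable. The sketch below compares group selection rates for a model against a human-run baseline using a disparate-impact ratio; the 0.8 bar echoes the "four-fifths rule" from US employment guidelines, and all the rates are invented:

```python
def disparate_impact(selection_rates):
    """Ratio of lowest group selection rate to highest (1.0 means parity)."""
    return min(selection_rates.values()) / max(selection_rates.values())

# Invented rates: fraction of each group receiving a favorable outcome.
human_baseline = {"group_a": 0.50, "group_b": 0.30}   # ratio 0.60
model = {"group_a": 0.48, "group_b": 0.36}            # ratio 0.75

better_than_humans = disparate_impact(model) > disparate_impact(human_baseline)
meets_four_fifths = disparate_impact(model) >= 0.8

print(better_than_humans, meets_four_fifths)
```

Here the model improves on the human process yet still misses the 0.8 bar: exactly the kind of gap an AI ethics team must decide whether, and how, to close.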
Implementing artificial intelligence standards at your organization will take time, but the risk reduction they provide will be well worth the effort. Does your organization have the right knowledge and skills necessary to build an effective AI standards roadmap?
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance, among others. When leveraging these new digital environments, it's vital to maintain rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments, a new five-course program from IEEE, provides instruction in a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Krishna, Sri. (29 March 2022). Talking Ethical AI with Fosfor’s Satyakam Mohanty. Analytics India Magazine.
Blackman, Reid and Ammanath, Beena (21 March 2022). Ethics and AI: 3 Conversations Companies Need to Have. Harvard Business Review.
Jovanovic, Bojan. (8 March 2022). 55 Fascinating AI Statistics and Trends for 2022. DataProt.
Likens, Scott; Shehab, Michael; Rao, Anand. AI Predictions 2021. PwC Research.
Machine learning is quickly becoming one of the most popular technologies for companies to invest in. At the same time, experts are growing increasingly worried that these models have a dangerous propensity for mistakes in applications such as image recognition software used to diagnose illnesses or surveillance software used to recognize human faces. However, advancements in machine learning may soon help reduce bias in these systems.
Data Diversity Key to Overcoming Bias in Neural Networks
A team of researchers from MIT and Harvard has found that training machine learning models on diverse data sets can help reduce bias, MIT News reports. Models trained on data sets with limited variety are much more likely to discriminate when they make decisions. For example, facial recognition systems trained on data sets containing images of mostly white men are much more likely to give incorrect results when shown images of women and people of color.
Relying on a method that used controlled data sets, the researchers sought to learn how training data impacts whether an artificial neural network (a machine learning model that uses brain-like nodes to process data) can figure out how to recognize new objects.
The researchers created data sets that contained an equal number of images of various objects in different positions (for example, photos of a car from multiple angles), then made some of the data sets more diverse by including images from additional points of view. Models trained on the more diverse data sets were better at generalizing to new viewpoints, supporting the idea that data diversity is necessary for overcoming bias. However, the researchers also found that the better a model gets at recognizing new objects, the worse it gets at recognizing objects it has already seen.
“A neural network can overcome dataset bias, which is encouraging,” Xavier Boix, a research scientist and senior author of the paper, told MIT News. “But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design data sets in the first place.”
The team also found that training a model on individual tasks separately, rather than on all tasks at once, helped models become less biased. This largely has to do with neuron specialization: during separate training, neural networks produce two different kinds of neurons, one that becomes good at recognizing object categories and another that learns to recognize viewpoints, a result Boix finds fascinating. When trained simultaneously, by contrast, these neurons can become diluted and confused.
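The effect the researchers describe can be reproduced at toy scale. The sketch below is a deliberately simplified stand-in (a one-nearest-neighbor classifier on 2D points, not the team's neural networks): trained on a single "viewpoint" of each class it fails on a new viewpoint, while training on diverse viewpoints fixes the problem.

```python
import math

def nearest_neighbor(train, point):
    """1-nearest-neighbor: label of the closest training example."""
    return min(train, key=lambda ex: math.dist(ex[0], point))[1]

def accuracy(train, test):
    return sum(nearest_neighbor(train, p) == label for p, label in test) / len(test)

# Two classes (0 and 1), each seen from two "viewpoints" (clusters in 2D).
viewpoint_a = [((0.0, 0.0), 0), ((0.5, 0.0), 0), ((4.0, 0.0), 1), ((4.5, 0.0), 1)]
viewpoint_b = [((4.0, 4.0), 0), ((4.5, 4.0), 0), ((0.0, 4.0), 1), ((0.5, 4.0), 1)]

limited_train = viewpoint_a                  # one viewpoint only
diverse_train = viewpoint_a + viewpoint_b    # both viewpoints

new_view_test = [((4.2, 4.1), 0), ((0.2, 3.9), 1)]  # unseen viewpoint-b examples

print(accuracy(limited_train, new_view_test))  # poor: biased by narrow data
print(accuracy(diverse_train, new_view_test))  # perfect: diversity fixes it
```

The limited model misclassifies every new-viewpoint example because nothing in its narrow training set resembles them, mirroring the facial-recognition failures described above.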
Machine learning has come a long way, but there is still much to learn in order to develop the field. While the technology is promising, organizations should take steps to ensure they are doing their best to prevent bias in the systems they use or create.
What Uses Do You Predict Machine Learning Will Have in Your Company?
Machine learning plays a critical role in developing AI by giving systems the ability to learn from experience without explicit programming. Machine Learning: Predictive Analysis for Business Decisions, a five-course program from IEEE, covers machine learning models, algorithms, and platforms.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
Zewe, Adam. (21 February 2022). Can machine-learning models overcome biased datasets? MIT News.
When it comes to designing ethical artificial intelligence (AI) systems, developers usually have the best intentions. However, problems often occur when developers fail to follow through on those intentions, a phenomenon dubbed the "intention-action gap."
To avoid this, a new report from the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University, titled “Responsible Use of Technology: The Microsoft Case Study,” recommends developers follow the lessons listed below.
AI Standards Lessons
- Before you can innovate responsibly, you must transform your organization's culture:
To innovate ethically, you need a company culture that encourages introspection and learning from mistakes. For example, by adopting what Microsoft calls a "hub-and-spoke" model across the various departments that influence product development, Microsoft ensures that security, privacy, and accessibility are embedded into all of its products. The "hub" consists of three internal groups: the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee; the Office of Responsible AI (ORA); and the Responsible AI Strategy in Engineering (RAISE) group. These guide the "spokes" embedded in product teams across the company. Additionally, Microsoft launched the Responsible AI Standard, a series of steps that internal teams must follow to support the creation of responsible AI systems.
- Use tools and methods that make ethics implementation simple:
With the right technical tools, it is easier to integrate an ethics model into the many facets of your organization. Microsoft uses several technical tools, including Fairlearn, InterpretML, and Error Analysis, to implement ethics. Fairlearn, for example, allows data scientists to analyze and improve the fairness of machine learning models, and each tool offers dashboards that make it easier for workers to visualize performance. Through checklists, role-playing exercises, and stakeholder engagement, these tools also help teams understand the possible consequences of their products and foster more compassion for how underrepresented stakeholders might be affected.
- Create employee accountability by measuring impact:
Make sure your employees are aligned with your company's ethical values by evaluating their performance against your ethics principles. To do this, Microsoft team members meet with managers for twice-yearly performance evaluations and goal-setting sessions to establish personal goals in line with those of the company.
- Inclusive products are superior products:
By innovating responsibly throughout the lifecycle of a product, companies will make products that are better and more inclusive. They can do this by creating principles for AI toolkits that set expectations from the outset of product development.
New Healthcare Industry AI Standard Considers Three Areas of Trust
A Consumer Technology Association (CTA) working group of 64 organizations recently created a new standard that identifies the baseline requirements for establishing reliable AI solutions in healthcare. Organizations involved in the project include AdvaMed, America's Health Insurance Plans, Ginger, Philips, 98point6, and ResMed.
The standard, released in February 2021 and accredited by the American National Standards Institute, considers three ways to create trustworthy and sustainable AI healthcare solutions:
- Human trust: Consider the way humans interact and how they will interpret the AI solution.
- Technical trust: Address data use, such as data access, privacy, quality, integrity, and issues around bias. Additionally, technical trust considers the technical execution and training of an AI design to provide predictable results.
- Regulatory trust: Ensure compliance with regulatory agencies, federal and state laws, accreditation boards, and global standardization frameworks.
Developing standards for AI applications is difficult, but necessary. By having a plan that integrates ethics throughout your organization, you can better ensure your AI systems are reliable and safe.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance, among others. When leveraging these new digital environments, it's vital to maintain rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments, a new five-course program from IEEE, provides instruction in a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Green, Brian and Lim, Daniel. (25 February 2021). 4 lessons on designing responsible, ethical tech: Microsoft case study. World Economic Forum.
Landi, Heather. (18 February 2021). AHIP, tech companies create new healthcare AI standard as industry aims to provide more guardrails. Fierce Healthcare.
Artificial intelligence (AI) is evolving rapidly. According to multinational professional services company Accenture, businesses spent US$306 billion on AI applications over the past three years. Despite this advancement, there are currently no specific ethical regulations around the technology, though some governments and bodies, including the European Union, are working to establish them. Meanwhile, many organizations are beginning to develop AI standards to ensure their applications are trustworthy and safe for customers. IBM, for example, has taken major steps to build trustworthiness into its AI applications, including creating an AI ethics board and AI policies such as the company's Principles for Trust and Transparency.
How Can Organizations Establish AI Standards?
When you receive a meal at a restaurant, you know the food is likely safe to eat. This is because a level of trust exists across the various professionalized fields—the farmers, suppliers, ingredient manufacturers, and restaurant staff who worked to create the meal. However, when it comes to the various stakeholders who are developing AI applications, fewer professionalized roles exist. Furthermore, these roles are not well known among the public. Much like food industry stakeholders, which all must follow specific standards, AI developers should also establish standards that ensure trust.
Successful AI developers establish "tactics of professionalization" across their organizations. According to Fernando Lucini, Global Lead Data Science & ML Engineering — Applied Intelligence at Accenture, developers should set up committed multidisciplinary teams, train their employees, and clearly define who within the organization is accountable for the consequences of their AI systems. To achieve this level of professionalization, he recommends the following steps:
- Set up definable AI roles within your organization: In professionalized industries like food and agriculture, the roles of teams and individuals responsible for the final product are clearly established and understood. The same rule needs to apply to the role of your AI professionals.
- Train and educate your AI professionals: Companies need to understand the skills gaps in their AI workforce and provide the necessary supplemental training and education. To keep training consistent, companies should establish career levels and prerequisites for AI professionals, including training and coursework that define clear paths for moving up the ranks.
- Establish formal AI processes: Professionalized industries have standard ways of testing and evaluating products and services. Companies developing AI need to create similar processes for developing, deploying, and managing AI systems. For instance, they should create clear guidance for employees and teams on how to work with one another, as well as on how to select technologies for building AI and how to apply them.
- Democratize AI literacy across the organization: Organizations need to ensure all departments are educated in AI, even those that do not work directly with the technology. For example, the more your marketing team knows about the AI behind an application, the better it will be at communicating the application's benefits to customers.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance, among others. When leveraging these new digital environments, it's vital to maintain rigorous ethical standards designed to protect the end user.
Check out AI Standards: Roadmap for Ethical and Responsible Digital Environments, a new five-course program available on the IEEE Learning Network (ILN) today.
Resources
Rossi, Francesca. (5 November 2020). How IBM Is Working Toward a Fairer AI. Harvard Business Review.
Lucini, Fernando. (24 September 2020). Getting AI results by “going pro.” Accenture Research Report.