Few technologies have been, in equal measure, as captivating and as controversial as the ongoing emergence of artificial intelligence (AI).

Most recently, the tech universe was rocked by the November 2023 firing and unprecedented rehiring of Sam Altman, CEO of OpenAI, a leading artificial intelligence company and maker of ChatGPT. That same month, roughly a year after ChatGPT’s breakthrough introduction in November 2022, OpenAI announced a powerful upgraded version that, among other capabilities, allows users to build their own customized chatbots and publish those creations in the company’s new “GPT Store.” The latest version of ChatGPT also offers a new legal “shield” that reportedly protects professional users against claims of copyright infringement.

However, alongside the continued growth of AI and on the tail of these developments, a host of repercussions and concerns have emerged. Among them, AIHungry.com’s research on Google Trends revealed that searches for the term “AI Taking Jobs” reached record-high levels in November 2023. The same research showed a 400% increase in search activity for the term between November 2022 and November 2023.

While these statistics reflect society’s unease with what it sees as a growing reality, experts agree that those who don’t understand AI or how to use it are at the greatest risk of being replaced by it. This underscores the importance of acquiring new skills as a way to gain a competitive advantage and future-proof yourself from workplace developments like automation.

The Pros and Cons of AI

According to a recent summary of key statistics and predictions collected from prominent industry sources, publications, and market research firms, the growth and evolution of AI will potentially drive a mixed bag of results. These results are expected to have both positive and negative ramifications on the future of society and the workplace. Among them:

  • AI could replace up to one billion jobs worldwide over the coming decade, though it may also create 97 million new jobs by 2025.
  • One out of three businesses surveyed in a recent study claimed to be replacing at least some human functions in their workflow with AI solutions.
  • Administrative/repetitive functions, as well as jobs in such fields as bookkeeping and proofreading, are most at risk of being replaced by AI solutions. Manual labor jobs, as well as those requiring creativity and/or interpersonal skills (such as writing and legal services), are reportedly at the least risk of being replaced by AI.
  • In a recent study, one in four employees surveyed in the U.S. believes that their job may be replaced by an AI solution in the next five years, with 37% expressing concern over the possibility of this displacement.
  • On the other hand, nearly 20% of workers in that study welcomed the growth of AI based on their belief that it will relieve them of some tedious or repetitive tasks, and 85% of those surveyed support the move towards automation for “hazardous or unhealthy” jobs.
  • Three out of four employees surveyed in another study, however, believe that the widespread adoption of AI will end up driving inequality in the workplace, with women being at 10% greater risk of job loss due to automation than their male counterparts.

Government Oversight

As the debate over AI rages on between stakeholders worldwide to determine how the technology can best help, not hurt, citizens, companies, and employees, calls for governmental parameters around the use of artificial intelligence are growing louder.

As shared during an October 2023 hearing by the U.S. Senate Committee on Health, Education, Labor and Pensions’ Subcommittee on Employment and Workplace Safety, a joint World Economic Forum and Accenture report revealed that some 40% of the 19,000 individual tasks across 867 occupations studied could be impacted and/or replaced by large language model (LLM) tools. With generative AI expected to impact everything from the state of both existing and future jobs to privacy, legal, and ethical considerations and more, industry leaders in the U.S. are asking Congress to establish a “rational, risk-based” regulatory framework for AI that will take the needs of employers, employees, and other constituents into consideration.

The U.S. White House Office of Management and Budget supported this request in October 2023 by asking each federal executive agency to designate a Chief AI Officer (CAIO) to be in charge of “advancing responsible AI innovation” and “managing risks from the use of AI.” According to the official White House memo, “Artificial intelligence (AI) is one of the most powerful technologies of our time [and] we must seize the opportunities AI presents while managing its risks….particularly those affecting the safety and rights of the public.”

Stay on the Cutting-Edge of AI

The world of AI remains a moving target, with AI systems “advancing so rapidly and unpredictably that even on the rare occasions lawmakers and regulators have tried to tackle them, their proposals quickly become obsolete,” according to New York Times journalists Karen Weise and Cade Metz.

The rapid forward motion of AI will have ramifications for the global labor pool. A summary of key statistics and predictions reports that 120 million workers worldwide will need “upskilling” in the next three years due to developments in artificial intelligence. The key to avoiding AI job automation, according to the report? “Creativity, emotional intelligence, and STEM skills.”

Are you on top of AI’s direction, its impact on society and business, and its evolving design requirements? Are you shoring up your skill sets to minimize the risk of replacement by automation? AI-related course programs from IEEE are designed to keep learners abreast of the many opportunities, challenges, and considerations involved in developing, planning, using, or training for the expansion of artificial intelligence across its many applications.

To discover artificial intelligence-related courses from IEEE, browse the IEEE Learning Network catalog.

Resources

(1 November 2023). US Senate Subcommittee Focuses on AI in the Workplace. IAPP.

(1 November 2023). White House OMB Issues AI Memorandum to Federal Agencies. IAPP.

Miller, Jim. (11 November 2023). AI Replacing Jobs Statistics: 40 Automation and AI Stats for 2023. AIHungry.com. 

Weise, Karen and Metz, Cade. (8 December 2023). The Morning: AI’s Big Year. The New York Times.

Wilson, Mark. (November 2023). ChatGPT Gets its Biggest Update So Far – Here are 4 Upgrades That Are Coming Soon. TechRadar. 

Perrigo, Billy. (22 November 2023). Sam Altman Returns as OpenAI CEO. Here’s How It Happened. Time.

(September 2023). Jobs of Tomorrow: Large Language Models and Jobs. World Economic Forum/Accenture.

Lufkin, Bryan. (18 April 2022). What ‘Upskilling’ Means for the Future of Work. BBC.

Artificial intelligence (AI) continues to dominate headlines, thanks to its potential to revolutionize countless industries. From manufacturing and healthcare to banking and retail, AI is streamlining automatable and administrative tasks across the board.

Beyond efficiency, AI plays a critical role in high-impact applications. It helps detect cybersecurity threats, prevent retail fraud, and improve autonomous vehicle navigation by recognizing driver patterns and predicting accidents. Additionally, AI enhances customer experiences by personalizing marketing and service interactions.

In essence, machines are now replicating, and even expanding, the capabilities of the human mind. As a result, AI is reshaping the future of business.

A New Industrial Era

Because of its transformative power, the World Economic Forum has dubbed AI part of the “fourth industrial revolution.” This new era merges the physical, digital, and biological worlds, following earlier revolutions driven by steam, electricity, and computing.

Forbes contributor Bernard Marr calls it the “Intelligence Revolution,” underscoring AI’s sweeping impact on society and industry.

AI: A Double-Edged Sword

Although the two are often conflated, AI applications generally fall into two distinct categories: traditional (sometimes called “narrow”) artificial intelligence and generative artificial intelligence (generative AI).

Traditional AI refers to the ability of machines to understand, learn, and perform specific intellectual tasks based on the processing of demonstrated patterns, such as customer behavior. Examples of this include:

  • personalized product recommendations provided by Amazon
  • customized workouts and health goals suggested by apps (such as the MyFitnessPal app formerly owned by sports apparel and gear provider Under Armour) that base their recommendations on collected health data for physical activity, sleep, and diet
  • smart assistants like Alexa and Siri that can control home technology, dial the telephone upon request, and more

Generative AI refers to a form of artificial intelligence that learns the patterns and structure of inputted data and responds by generating text, images, or other media with similar characteristics. An example includes the much-publicized ChatGPT, a chatbot introduced in November 2022 by OpenAI, that can produce output of a desired length, format, style, level of detail, and language on most any topic.
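In highly simplified terms, the pattern-learning idea behind generative AI can be illustrated with a toy Markov-chain text generator: it learns which words tend to follow which in its input, then produces new text with similar characteristics. This is a deliberately minimal sketch for intuition only; it bears no resemblance to how large language models like ChatGPT are actually built.

```python
import random
from collections import defaultdict

def build_model(text):
    """Learn word-transition patterns from the input text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate new text that mimics the learned patterns."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:  # no learned continuation; stop early
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = build_model(corpus)
print(generate(model, "the"))
```

Every word the sketch emits follows a pattern observed in its training text, which is the essence, at toy scale, of "generating media with similar characteristics."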

Experts confirm that AI can help businesses enhance their productivity by leaps and bounds. For example, research firm Gartner estimates that AI can save companies around the world over 6 billion employee-hours annually. On an economic level, a recent study by global management consulting firm McKinsey & Company predicts that the analytics enabled by AI could add US $13 trillion to our global GDP by 2030.

At the same time, however, AI also raises its share of issues and ethical concerns. Among them, generative AI can lend itself to the alteration of text, images, and video in the form of inaccurate, misleading, manipulative, and/or potentially dangerous “deep fake” or fraudulent content. It also raises questions about ownership rights of created content and its eligibility for copyright protection.

Helping Industry Navigate the Complex Field of AI

Recognizing both the unprecedented importance and complexity of artificial intelligence, IEEE offers several course programs in AI and machine learning designed to help navigate these exciting, complicated, and rapidly-evolving technologies.

  • Machine Learning: Predictive Analysis for Business Decisions. Ideal for computer engineers, business and industry leaders, technical managers, data scientists, and data engineers, this five-course program provides an overview of the different types of machine learning fueling businesses today, how these forms of AI use software, algorithms, and models in their design, and how attendees can deploy scalable machine learning into their own processes to achieve their business goals. 
  • Artificial Intelligence and Ethics in Design. Ideal for data engineers, AI/ML engineers, design engineers, computer engineers, security engineers, electrical engineers, software engineers, UX designers, engineering managers, technical leaders, functional consultants, business users, research engineers, robotics engineers, machine learning engineers, and computer vision engineers, this five-course program covers such topics as law, compliance, and ethics in artificial intelligence, ethical challenges in data protection and safety, and responsible design in the algorithmic era. 
  • Artificial Intelligence and Ethics in Design: Responsible Innovation. This five-course program is designed to help learners understand the ethics specifications that must be met when designing AI systems for European (and other) markets. Topics include causes of bias, transparency and accountability for robots and AI systems, and legal and implementation issues of enterprise AI.

To discover more IEEE courses about artificial intelligence, browse the IEEE Learning Network catalog.


 
Resources:

Forbes Technology Council. (13 January 2022). 16 Industries and Functions That Will Benefit from AI In 2022 and Beyond. Forbes.

Fourth Industrial Revolution. World Economic Forum.

Marr, Bernard. (10 August 2020). What Is the Artificial Intelligence Revolution and Why Does It Matter To Your Business? Forbes.

Schroer, Alyssa. (19 May 2023). What Is Artificial Intelligence? Built In.

Mohan, Malethy. (22 March 2023). The Difference Between Generative AI and Traditional AI. LinkedIn.

Kanade, Vijay. What Is General Artificial Intelligence (AI)? Definition, Challenges, and Trends. Spiceworks.com.

Rajagopalan, Ramesh. 10 Examples of Artificial Intelligence in Business. Online Degrees.

Elliott, Timo. (9 March 2020). The Power of Artificial Intelligence Vs. the Power Of Human Intelligence. Forbes.

Dilmegani, Cem. (22 April 2023). Generative AI Ethics: Top 6 Concerns. AIMultiple.

Artificial intelligence (AI) is more present in our lives than ever. With varied uses, AI can predict what we want to see as we scroll through social media, as well as help to solve global challenges like hunger, environmental changes, and pandemics. This technology has countless applications in the real world. A McKinsey survey illustrates that AI adoption followed an upward trajectory in the year 2021 and continues to do so. According to the survey, “56 percent of all respondents report AI adoption in at least one function.”

However, AI technology is not always beneficial—AI can violate privacy, AI-generated output cannot always be explained, and AI can be biased. When the data feeding an AI system is not representative of the diversity and plurality of our societies, it can produce biased or discriminatory outcomes.

An often-cited example is facial recognition technology. Used to access mobile phones and bank accounts, it is also increasingly employed by law enforcement authorities. Given its ongoing problems accurately identifying women and darker-skinned people, facial recognition is far from perfected. This is not surprising when you look at who develops AI: only 1 in 10 software developers worldwide are women, and developers come overwhelmingly from Western countries.
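One concrete way to surface this kind of disparity is to measure a system's accuracy separately for each demographic group and compare the results. The sketch below uses hypothetical evaluation records and placeholder group names; it is an illustration of the auditing idea, not a complete fairness methodology.

```python
def group_accuracy(records):
    """Compute per-group accuracy from (group, correct) evaluation records."""
    totals, hits = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical records: (demographic group, was the prediction correct?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
acc = group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc, gap)  # a large gap flags a disparity worth investigating
```

An audit like this only detects a symptom; closing the gap typically means revisiting the training data and the development process itself.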

Hardcoding Ethics into AI

Humans can be biased, but people possess the ability to recognize when their conclusions may be biased, discriminatory, or unethical. While there is some recent debate over the “sentient” qualities of AI programs, they cannot “think” or “feel”; AI performance depends entirely on its coding. Because AI lacks this meta-cognitive ability, it is up to people to override unethical decisions when they arise. Unethical AI is not simply a consequence of programming deficiencies, but rather of failing to fully consider how ethical requirements should be incorporated into the learning algorithm during development. 

Organizations using AI need to become more proactive and formulate actionable AI ethics policies by thinking about ethics from the start. This approach is already deemed essential to cybersecurity products, where “security by design” development principles drive the need to assess risks and hardcode security from the start. The same mindset should be applied to the development of AI tools so they can be deployed responsibly and without bias. This will be critical as societies and cultures change over time, because AI products should always reflect current values.

How to Create an AI Ethics Policy 

Implementing AI ethics is not just a moral responsibility; it is also a business imperative, and it requires action to build an AI ethics-aware culture. Reid Blackman, CEO of Virtue, recommends instilling actionable ethics into AI systems by following these seven guidelines: 

  1. Bring clarity to AI standards
  2. Increase awareness among everyone in the organization
  3. Thoroughly incorporate AI ethics into team culture
  4. Make sure there are AI experts as part of an AI ethics committee
  5. Introduce accountability
  6. Measure everything— set key performance indicators (KPIs) to track whether your organization is meeting its goals for AI standard adoption
  7. Gain executive sponsorship

Prepare for an AI Future

The AI market is expected to surpass US$1,597 billion in size by 2030. Organizations and technology professionals should prepare for a changing landscape when it comes to the future of AI.  

Get a jumpstart on learning about ethics in artificial intelligence systems. Check out Artificial Intelligence and Ethics in Design, a five-course program from IEEE that provides the background knowledge learners need to integrate AI and autonomous systems within their companies or deliver them to their customers and end users.

Contact an IEEE Account Specialist to get organizational access or check it out for yourself on the IEEE Learning Network.


Resources

Bedzow, Ira. (30 June 2022). What It Takes to Create and Implement Ethical Artificial Intelligence. Forbes.

Boston Consulting Group (BCG). (7 July 2022). 87% of Climate and AI Leaders Believe That AI Is Critical in the Fight Against Climate Change. PR Newswire. 

Chui, Michael et al. (8 December 2021). The state of AI in 2021. McKinsey.

Henderson, Emily. (10 June 2022). Using artificial intelligence to discover new antivirals against COVID-19 and future pandemics. New Medical.

McKendrick, Joe. (10 June 2022). 7 Steps to More Ethical Artificial Intelligence. Forbes. 

Mubarik, Abu. (20 June 2022). This is how former Wall Street trader Sara Menker from Ethiopia is using AI to remove world hunger. Face 2 Face Africa. 

Precedence Research. (19 April 2022). Artificial Intelligence Market Size to Surpass Around US$ 1,597.1 Bn By 2030. GlobeNewswire.

Ramos, Gabriela and Koukku-Ronde, Ritva. (22 June 2022). A new global standard for AI ethics. UNESCO.

Smith, Wesley. (26 June 2022). Five Reasons AI Programs Are Not ‘Persons’. Mind Matters News.

Yu, Eileen. (30 June 2022). AI ethics should be hardcoded like security by design. ZDNet.

Big data is creating exciting new opportunities for artificial intelligence (AI). According to Arvind Krishna, Chairman and CEO of IBM, 2.5 quintillion bytes of data are produced each day. To analyze, distribute, and make use of this data, many organizations are combining AI with hybrid cloud technology.

“The economic opportunity behind these technologies is enormous, given that business is only about 10 percent of the way to realizing A.I.’s full potential,” writes Krishna in Inc.com. “Fortunately, we are making steady progress, with the number of organizations poised to integrate A.I. into their business processes and workflows growing rapidly. A recent IBM study showed that more than a third of the companies surveyed were using some form of A.I. to save time and streamline operations.”

However, for artificial intelligence programs to work effectively, organizations need to successfully manage their data. According to Andrew P. Ayres, a Senior Specialist with HPE’s Enterprise Services practice in the United Kingdom, writing in CIO, organizations can achieve this by:

  • making “data-centric AI” and “AI-centric data” part of your data management strategy. Metadata and “data fabric” should be the foundational elements of this strategy.
  • establishing policy requirements that include minimum AI data quality to prevent “bias, mislabeling, or irrelevance”
  • determining the right “formats, tools, and metrics for AI-centric data” early on. This way you don’t have to develop new techniques as your AI evolves.
  • ensuring that the data, algorithms, and people within your AI supply chain are diverse. This diversity helps keep your AI aligned with your ethical values.
  • appointing or hiring the right experts internally and externally to oversee data management. These experts are capable of developing effective processes and deployments for your AI.
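As a minimal illustration of the policy point above, a minimum data quality requirement can be enforced in code before any training run. The field names and the 5% missing-data threshold below are illustrative assumptions, not prescribed values from the article.

```python
def audit_dataset(rows, required_fields, max_missing_rate=0.05):
    """Check rows against a minimum-quality policy before AI training."""
    missing = sum(
        1 for row in rows
        if any(row.get(f) in (None, "") for f in required_fields)
    )
    rate = missing / len(rows)
    return {"missing_rate": rate, "passes": rate <= max_missing_rate}

# Hypothetical records awaiting an AI training run
rows = [
    {"age": 34, "label": "approved"},
    {"age": None, "label": "denied"},      # incomplete record
    {"age": 51, "label": "approved"},
    {"age": 29, "label": ""},              # unlabeled record
]
report = audit_dataset(rows, required_fields=["age", "label"])
print(report)  # {'missing_rate': 0.5, 'passes': False}
```

A real policy would add checks for mislabeling and irrelevance as well, but the principle is the same: data that fails the policy never reaches the model.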

How to Choose an AI Program That Works Best For Your Employees

As you develop your AI program, keep in mind that while AI can augment your organization in terms of speed and efficiency, it is not necessarily a substitute for human intelligence. 

While AI is good at analyzing data and recognizing patterns, it still has a tendency to miss important context that humans easily spot. This can have potentially devastating consequences if, for example, an AI makes a critical error when analyzing medical documentation. As such, you need to consider how to make your AI work with your human employees in the most effective way possible. 

According to experts from Boston Consulting Group, writing in Fortune, organizations can do this by following the following principles:

  • Know your options in terms of how you can combine humans with AI: Depending on your organization’s unique needs, do you need your AI to act as an illuminator, recommender, decider, or automator? Knowing the difference, such as between an AI that makes predictions and one that automates operations, can help you choose the best system for your organization. 
  • Create a decision tree: A decision tree lays out, in sequence, the questions you will ask. It helps you clearly understand your objectives (goals), context (resources in terms of data), and outcomes (results in terms of deploying AI vs. employees), which in turn helps you determine what type of AI system (illuminator, recommender, decider, or automator) you need.
  • Continuously assess and revise your human-AI combinations: Your needs for an AI program may evolve over time and, as such, so will its relationship to your employees. For this reason, it’s important to return to the decision tree occasionally to determine whether you need to revise your model.
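The first two principles above can be sketched as a simple decision function that walks a few questions in sequence and returns a role for the AI. The specific questions and the mapping to roles below are illustrative assumptions, not BCG's published criteria.

```python
def choose_ai_role(stakes_high, needs_human_judgment, output_is_insight):
    """Walk a simple decision tree to pick an AI operating mode."""
    if output_is_insight:
        return "illuminator"   # AI surfaces insight; humans act on it
    if needs_human_judgment:
        return "recommender"   # AI suggests options; a human decides
    if stakes_high:
        return "decider"       # AI decides; humans audit the outcomes
    return "automator"         # AI acts end to end on routine work

# Routine, low-stakes work with no judgment call involved
print(choose_ai_role(stakes_high=False,
                     needs_human_judgment=False,
                     output_is_insight=False))  # automator
```

Encoding the questions this explicitly also makes the third principle easier to follow: when your needs change, you revise the tree rather than rediscovering it.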

Knowing how to manage your organization’s data and determining the right AI program are important steps. However, you also need to ensure that your employees are equipped to work with this increasingly complex technology. 

Bringing Ethics to the Forefront at Your Organization

An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. 

Contact an IEEE Content Specialist to learn more about how this program can help your organization create responsible artificial intelligence systems.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Arvind. (18 May 2022). Why Artificial Intelligence Creates an Unprecedented Era of Opportunity in the Near Future. Inc. 

Candelon, Francois, Ding, Bowen, Gombeaud, Matthieu. (6 May 2022). Getting the balance right: 3 keys to perfecting the human-A.I. combination for your business. Fortune.

Ayres, Andrew P. (29 April 2022). Don’t Fear Artificial Intelligence; Embrace it Through Data Governance. CIO.


A 2019 survey from Gartner found that 37% of businesses and organizations employ artificial intelligence (AI), DataProt reported. However, few organizations are taking steps to mitigate the risk associated with AI systems, such as their propensity for bias and privacy infringements. A 2021 PwC research report found that just 20% of enterprises had instituted an AI ethics framework. Meanwhile, only 35% intended to enhance their AI governance and processes. With governments increasingly moving towards passing AI regulations, the timeframe for organizations to develop ethical AI standards is getting shorter. 

During an interview with Analytics India Magazine, Satyakam Mohanty, Chief Product Officer at Fosfor by L&T Infotech, a global technology consulting and digital solutions company, said responsible AI is the only way for organizations to reduce potential risks associated with the technology.

“The great AI debate opens various facets of ethics, but without a common agreement and agreed standard, its impact and repercussions on the way organizations operate is not quantifiable,” Mohanty told the magazine. “Fairness and explainability can be managed and scaled by introducing data bias mitigation practices and algorithmic bias mitigation processes. Additionally, ensuring higher standard explainability frameworks into the implementations and decision-making process helps. By utilizing ethics as a key decision-making tool, AI-driven companies save time and money in the long run. They do this by building robust and innovative solutions from the start.”

How to Develop an AI Standards Framework

How can your organization begin building a successful AI standards framework? Writing in Harvard Business Review, AI ethics experts Reid Blackman and Beena Ammanath recommend that organizations start by putting together a team of senior-level experts. This team should encompass, at minimum, technologists, legal/compliance experts, ethicists, and business leaders who understand what the organization needs to achieve in terms of ethical AI.

Once you have a team in place, they recommend taking these steps:
  1. First, identify your organization’s AI ethical standard:

    What is the minimum ethical standard your organization is willing to meet in terms of AI? If your AI system is discriminatory towards a certain group but still far less discriminatory than traditional human-run systems, will your organization consider that an acceptable benchmark? This is similar to the dilemma autonomous vehicle manufacturers must consider: if autonomous vehicles occasionally kill passengers and pedestrians, but at a lower rate than traditional vehicles, should those vehicles be considered safe? Although these are difficult questions to grapple with, asking them will help your organization set the frameworks and guidelines that ensure ethical product development.
  2. Determine “gaps” between where your organization is currently and what your standards need:

    While there may be plenty of technical solutions to your AI ethics dilemma, none is likely, on its own, to reduce the risks substantially enough to safeguard your organization. As such, your AI ethics team will need to ask: What are its skills and knowledge limitations? What are the risks it is trying to reduce? In what ways can software and quantitative analysis help, and where will they fall short? What needs to be done in terms of qualitative assessments, and how mature does the technology need to be to meet ethics expectations?
  3. Gain insight into what’s behind the bias in your AI and then strategize solutions:

    While it’s generally true that biased AI systems reflect biased training data and/or societal bias, the real problem is more complex. You need to understand the sources of discriminatory outputs, as well as potential biases, before you can choose the best strategy for reducing bias. 
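The benchmark question in step 1 can be made concrete by comparing an AI system's rate of adverse outcomes against the human-run baseline it would replace. The 20% required improvement below is a hypothetical threshold an organization might set, not a recommendation from the source.

```python
def meets_benchmark(ai_adverse_rate, human_adverse_rate, required_improvement=0.2):
    """Check whether an AI system beats the human-run baseline by a set margin."""
    # Assumption: the organization accepts the AI only if it reduces adverse
    # outcomes by at least `required_improvement` relative to the status quo.
    if human_adverse_rate == 0:
        return ai_adverse_rate == 0
    improvement = (human_adverse_rate - ai_adverse_rate) / human_adverse_rate
    return improvement >= required_improvement

# Hypothetical rates of adverse decisions affecting a protected group
print(meets_benchmark(ai_adverse_rate=0.06, human_adverse_rate=0.10))  # True (40% reduction)
print(meets_benchmark(ai_adverse_rate=0.09, human_adverse_rate=0.10))  # False (10% reduction)
```

Writing the standard down this explicitly forces the hard conversation the authors describe: the organization must decide, in advance, what margin over the status quo counts as acceptable.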

Implementing artificial intelligence standards at your organization will take time, but the risk reduction they provide will be well worth the effort. Does your organization have the right knowledge and skills necessary to build an effective AI standards roadmap? 

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across various industries, including healthcare, manufacturing, transportation, and finance among others. It’s vital to keep in mind rigorous ethical standards designed to protect the end-user when leveraging these new digital environments. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Sri. (29 March 2022). Talking Ethical AI with Fosfor’s Satyakam Mohanty. Analytics India Magazine. 

Blackman, Reid and Ammanath, Beena (21 March 2022). Ethics and AI: 3 Conversations Companies Need to Have. Harvard Business Review. 

Jovanovic, Bojan. (8 March 2022). 55 Fascinating AI Statistics and Trends for 2022. DataProt.

Likens, Scott; Shehab, Michael; Rao, Anand. AI Predictions 2021. PwC Research.

Organizations are increasingly adopting artificial intelligence (AI) standards to mitigate risks associated with the technology, such as its propensity for bias. While developing AI standards is necessary, they also need to be upheld in order to be effective. To do so, organizations can consider establishing a body of experts charged with overseeing AI standards and ethics. 

In general, institutional review boards (IRBs) ensure organizations are upholding their basic ethical principles by authorizing, rejecting, and recommending changes to research projects and products. In the United States, these governing bodies have proven effective at reducing ethical risks in the field of medicine. IRBs can provide similar oversight for organizations involved in artificial intelligence. 

When establishing an IRB for your organization, there are three main issues to consider, according to Harvard Business Review.

Who Should Sit on the Board?

Your IRB should include a diverse group of experts capable of systematically pinpointing and reducing ethical risks in your AI applications. It should include: 

  • engineers and product designers who can explain the technology and its potential impact on users;
  • lawyers and security officers who are knowledgeable about current laws, regulations, and privacy standards;
  • experts who specialize in ethics;
  • subject matter experts from various backgrounds who specialize in the application at hand (for example, a doctor’s oversight could be helpful for AI applications used in hospitals);
  • and at least one expert who is not affiliated with your organization in order to bring a sense of objectivity to the committee’s decision making.

What Jurisdiction Should the IRB Hold?

When it comes to artificial intelligence applications, try to consult institutional review boards as early as possible, preferably even before research or product development begins. After all, it’s a lot easier and cheaper to make alterations to a project before you start working on it. You wouldn’t want to invest time and money on a project that turns out to be a major ethical risk. 

You also need to determine how much authority your IRB will possess. In the medical field, IRBs are given ultimate authority—once an IRB rejects a proposal, it won’t be reconsidered, and if the IRB proposes changes, the revisions must be made. You’ll need to decide if your IRB has this much power, or if, for example, you want to put an appeals process in place. However, you should keep in mind that the more authority your IRB has, the more effective it is likely to be at reducing risk. 

What are the Values That Will Guide Your IRB?

Developing a core set of values for your IRB will be relatively easy. The more difficult aspect is instituting mechanisms that prevent those values from being twisted or too broadly interpreted.

In the medical field, more than just principles guide decisions. For example, medical IRBs typically compare cases to ones decided upon in the past, which allows IRBs to stay consistent in how they apply principles. 

Similarly, institutional review boards charged with AI oversight can look to previous cases to apply their principles consistently. Let’s say, for example, that your IRB declined to approve a contract with a particular country due to ethical risks related to how that government functions. It could apply the reasoning behind that decision to similar cases in the future. Additionally, if a certain case is unprecedented, an IRB can apply fictionalized scenarios to help it understand how it should apply its principles. 

Setting up an IRB in your organization will help you create a ground-up approach to AI oversight. Additionally, it will build trust among your employees and customers, and make your organization more competitive in an environment where concern over AI is higher than ever. 

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across various industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it's vital to uphold rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Blackman, Reid. (1 April 2021). If Your Company Uses AI, It Needs an Institutional Review Board. Harvard Business Review. 

When it comes to designing ethical artificial intelligence (AI) systems, developers usually have the best intentions. Problems often occur, however, when developers fail to follow through on those intentions, a phenomenon dubbed the “intention-action gap.”

To avoid this, a new report from the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University, titled “Responsible Use of Technology: The Microsoft Case Study,” recommends developers follow the lessons listed below.

AI Standards Lessons

  1. Before you can innovate responsibly, you must transform your organization’s culture:
    To innovate ethically, you need a company culture that encourages introspection and learning from mistakes. Microsoft, for example, adopted what it calls a “hub-and-spoke” model across the various departments that influence product development, helping ensure that security, privacy, and accessibility are embedded into all of its products. The “hub” consists of three internal groups: the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee; the Office of Responsible AI (ORA); and the Responsible AI Strategy in Engineering (RAISE) group. Additionally, Microsoft launched the Responsible AI Standard, a series of steps internal teams must follow to support the creation of responsible AI systems.
  2. Use tools and methods that make ethics implementation simple:
    With the right technical tools, it becomes easier to integrate your ethics model into the many facets of your organization. Microsoft uses several such tools (Fairlearn, InterpretML, and Error Analysis) to implement ethics. Fairlearn, for example, allows data scientists to assess and improve the fairness of machine learning models, and each tool offers dashboards that make it easier for workers to visualize model performance. Through checklists, role-playing exercises, and stakeholder engagement, these tools also help teams understand the possible consequences of their products and foster greater empathy for how underrepresented stakeholders might be affected.
  3. Create employee accountability by measuring impact:
    Make sure your employees are aligned with your company’s ethical values by evaluating their performance against your ethics principles. At Microsoft, for example, team members meet with managers for twice-yearly performance evaluations and goal-setting sessions to establish personal goals in line with those of the company.
  4. Inclusive products are superior products:
    By innovating responsibly throughout the lifecycle of a product, companies will make products that are better and more inclusive. They can do this by creating principles for AI toolkits that set expectations from the outset of product development.
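As a concrete illustration of the disaggregated analysis that tools like Fairlearn automate (lesson 2 above), the sketch below computes a model's accuracy per group of a sensitive attribute in plain Python. The data is invented for this example; Fairlearn's actual `MetricFrame` API generalizes the same idea to arbitrary metrics and dashboards:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, sensitive):
    """Accuracy disaggregated by a sensitive attribute, the basic building
    block of the group-fairness reports described above."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, sensitive):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

scores = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    sensitive=["A", "A", "A", "B", "B", "B"],
)

# The gap between the best- and worst-served groups is one simple
# fairness signal a team can track over time.
gap = max(scores.values()) - min(scores.values())
```

A large gap does not by itself prove the model is unfair, but it tells the team where to look, which is exactly the kind of visibility the lessons above call for.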

New Healthcare Industry AI Standard Considers Three Areas of Trust

The Consumer Technology Association (CTA) recently convened a working group of 64 organizations to create a new standard identifying the baseline requirements for trustworthy AI solutions. Healthcare organizations involved in the project include AdvaMed, America’s Health Insurance Plans, Ginger, Philips, 98point6, and ResMed.

The standard, released in February 2021 and accredited by the American National Standards Institute, considers three ways to create trustworthy and sustainable AI healthcare solutions:

  • Human trust: Consider the way humans interact and how they will interpret the AI solution.
  • Technical trust: Address data use, such as data access, privacy, quality, integrity, and issues around bias. Additionally, technical trust considers the technical execution and training of an AI design to provide predictable results.
  • Regulatory trust: Ensure compliance with regulatory agencies, federal and state laws, accreditation boards, and global standardization frameworks.

Developing standards for AI applications is difficult, but necessary. By having a plan that integrates ethics throughout your organization, you can better ensure your AI systems are reliable and safe.


Resources

Green, Brian and Lim, Daniel. (25 February 2021). 4 lessons on designing responsible, ethical tech: Microsoft case study. World Economic Forum.

Landi, Heather. (18 February 2021). AHIP, tech companies create new healthcare AI standard as industry aims to provide more guardrails. Fierce Healthcare.


Artificial intelligence (AI) is evolving rapidly. According to multinational professional services company Accenture, businesses spent $306 billion on AI applications over the past three years. Despite this advancement, there are currently no specific ethical regulations around the technology, though some governments, including the European Union, are working to establish them. Meanwhile, many organizations are beginning to develop AI standards to ensure their applications are trustworthy and safe for customers. For example, IBM has taken major steps to build trust in its AI applications, including creating an AI ethics board and AI policies such as the company’s Principles for Trust and Transparency.

How Can Organizations Establish AI Standards?

When you receive a meal at a restaurant, you know the food is likely safe to eat. That’s because a level of trust exists across the professionalized fields involved: the farmers, suppliers, ingredient manufacturers, and restaurant staff who worked to create the meal. When it comes to the stakeholders developing AI applications, however, far fewer professionalized roles exist, and those that do are not well known among the public. Much like food industry stakeholders, who all must follow specific standards, AI developers should establish standards that ensure trust.

Successful AI developers establish “tactics of professionalization” across their organizations. According to Fernando Lucini, Global Lead Data Science & ML Engineering — Applied Intelligence at Accenture, developers should set up committed multidisciplinary teams, train their employees, and clearly define who within the organization is accountable for the consequences of their AI systems. To achieve this level of professionalization, he recommends the following steps:

  1. Set up definable AI roles within your organization: In professionalized industries like food and agriculture, the roles of teams and individuals responsible for the final product are clearly established and understood. The same rule needs to apply to the role of your AI professionals.
  2. Train and educate your AI professionals: Companies need to understand the skills gaps in their AI workforce and provide the necessary supplemental training and education. To keep training consistent, companies should establish career levels and prerequisites for AI professionals, including training and coursework that define clear paths for moving up the ranks.
  3. Establish formal AI processes: Professionalized industries have a standard way of testing and evaluating products and services. Companies involved in the development of AI need to create similar processes for developing, deploying, and managing AI systems. For instance, they should create clear guidance for employees and teams on how to work with one another. These guidelines should also cover select technologies for the creation of AI and how to then apply those technologies.
  4. Democratize AI literacy across your organization: Organizations need to ensure all departments are educated in AI, even those that do not work directly with the technology. For example, the more your marketing team knows about the AI behind an application, the better it can communicate the application’s benefits to customers.


Resources

Rossi, Francesca. (5 November 2020). How IBM Is Working Toward a Fairer AI. Harvard Business Review.

Lucini, Fernando. (24 September 2020). Getting AI results by “going pro.” Accenture Research Report.

Artificial intelligence (AI) applications are rapidly expanding, and so are the threats they pose. From algorithms with embedded racial biases to “black box” systems that give humans no insight into an AI’s decision-making process, it’s becoming increasingly clear that AI developers need to take steps to mitigate the risks.

AI “Ethics-As-A-Service”

Google plans to roll out a new “ethics-as-a-service” initiative to customers. Much as its Google Cloud business hosts client data, this service will help clients spot and fix ethical issues in their AI systems.

The service, which may launch as soon as the end of 2020, is expected to begin with training courses that teach clients how to detect ethical issues in AI systems and develop guidelines around AI ethics. The company may eventually add consulting services such as audits and reviews; for example, it might inspect a financial client’s lending algorithm to determine whether it is biased against minority groups.
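To make the lending example concrete: one common first-pass screen such an audit might run is the “four-fifths rule,” a heuristic borrowed from US employment-discrimination guidance that flags a model when any group’s approval rate falls below 80% of the highest group’s rate. The sketch below uses invented data and is not a description of Google’s actual methodology:

```python
def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 approval outcomes."""
    return {g: sum(v) / len(v) for g, v in decisions.items()}

def passes_four_fifths(decisions, threshold=0.8):
    """Four-fifths rule: every group's approval rate should be at least
    80% of the highest group's rate; a common first-pass bias screen."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(r >= threshold * best for r in rates.values()), rates

ok, rates = passes_four_fifths({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],   # 3 of 8 approved
})
# Here group_b's rate (0.375) is half of group_a's (0.75), well below the
# 80% threshold, so the model would be flagged for deeper review.
```

Failing this screen does not prove unlawful bias, and passing it does not rule bias out; it simply triggers the kind of deeper audit the article describes.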

However, it’s not yet determined whether some of Google’s ethics services will be offered for free. According to Brian Green, Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University, charging for these services may pose ethical challenges of its own. “They’re legally compelled to make money and while ethics can be compatible with that, it might also cause some decisions not to go in the most ethical direction,” Green told Wired.

Another challenge will be knowing where to draw the line in determining what’s ethical, acknowledges Tracy Frey, an expert on AI strategy in Google’s cloud division. “It is very important to us that we don’t sound like the moral police,” she told Wired.

“Embedded Ethics”

For AI systems and AI-enabled devices to be truly ethical, some experts argue, ethics must be considered from the very beginning of the design process. This approach, known as “embedded ethics,” requires AI developers to involve ethicists in creating “ethical awareness” throughout all stages of a design, according to a recent paper by researchers at the Technical University of Munich (TUM), reported in Science Daily.

Meanwhile, R2 Data Labs, the data innovation arm of Rolls-Royce, has created an AI ethics framework with an “ethics-from-the-ground-up” approach. The company says other organizations will be able to adopt the framework, which aims to build public trust in AI systems. Its soon-to-be-published findings include:

  • An ethical decision-making process: This method helps developers make sure ethics is integrated into their AI decision making.
  • A five-layer check system that ensures AI algorithms are trustworthy: This step-by-step process prevents bias from developing in AI systems and provides continuous monitoring of results.
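Rolls-Royce has not published the internals of its five-layer check, but the “continuous monitoring of results” it describes can be as simple as comparing a model’s live output distribution against a baseline approved at deployment. A minimal sketch, with all names, data, and thresholds invented for illustration:

```python
def monitor_outputs(baseline_rate, recent_outcomes, tolerance=0.1):
    """Flag the model for human review when the live positive-outcome rate
    drifts more than `tolerance` from the rate approved at deployment."""
    live_rate = sum(recent_outcomes) / len(recent_outcomes)
    drifted = abs(live_rate - baseline_rate) > tolerance
    return live_rate, drifted

# Baseline approved at deployment: 30% positive outcomes.
# Recent production window shows 6 of 10 positive, a large drift.
live, needs_review = monitor_outputs(
    baseline_rate=0.30,
    recent_outcomes=[1, 0, 0, 1, 1, 0, 1, 1, 0, 1],
)
```

In practice such checks run continuously and alert a human reviewer rather than acting on their own, which keeps the monitoring layer itself from becoming an unaccountable decision-maker.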

The findings, which Rolls-Royce plans to publish sometime this year, are based on the engineering titan’s own experience with AI applications. The company says the findings have been peer reviewed by experts across a variety of sectors, including a number of large technology companies as well as government, the pharmaceutical and automotive industries, and academia.

Understanding AI and Ethics

As AI continues to grow and integrate with various aspects of business, there’s never been a greater need for practical artificial intelligence and ethics training. IEEE offers continuing education that provides professionals with the knowledge needed to integrate AI within their products and operations. Designed to help organizations apply the theory of ethics to the design and business of AI systems, Artificial Intelligence and Ethics in Design is a two-part online course program. It also serves as useful supplemental material in academic settings.


Resources

Johnson, Robin. (3 September 2020). Rolls-Royce claims breakthroughs in artificial intelligence ethics and trustworthiness. BusinessLive.

Technical University of Munich (TUM). (1 September 2020). An embedded ethics approach for AI development. Science Daily.

Simonite, Tom. (28 August 2020). Google Offers to Help Others With the Tricky Ethics of AI. Wired.