Develop an Artificial Intelligence Program

Big data is creating exciting new opportunities for artificial intelligence (AI). According to Arvind Krishna, Chairman and CEO of IBM, 2.5 quintillion bytes of data are produced each day. To analyze, distribute, and make use of this data, many organizations are combining AI with hybrid cloud technology.

“The economic opportunity behind these technologies is enormous, given that business is only about 10 percent of the way to realizing A.I.’s full potential,” writes Krishna on Inc.com. “Fortunately, we are making steady progress, with the number of organizations poised to integrate A.I. into their business processes and workflows growing rapidly. A recent IBM study showed that more than a third of the companies surveyed were using some form of A.I. to save time and streamline operations.”

However, for artificial intelligence programs to work effectively, organizations need to successfully manage their data. According to Andrew P. Ayres, a Senior Specialist with HPE’s Enterprise Services practice in the United Kingdom, writing in CIO, you can achieve this by:

  • making “data-centric AI” and “AI-centric data” part of your data management strategy, and metadata and “data fabric” the foundational elements of this strategy
  • establishing policy requirements that include minimum AI data quality to prevent “bias, mislabeling, or irrelevance” (a minimal quality-gate sketch follows this list)
  • determining the right “formats, tools, and metrics for AI-centric data” early on so you don’t have to develop new techniques as your AI evolves
  • ensuring that the data, algorithms, and people within your AI supply chain are diverse in an effort to stay in line with your ethical values
  • appointing or hiring the right internal and external experts to oversee data management; they should be capable of developing effective processes and deployments for your AI
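
One way to operationalize a minimum-data-quality policy like the one above is an automated gate in the data pipeline. The sketch below is purely illustrative; the "label" field, the thresholds, and the acceptance policy are assumptions for demonstration, not part of Ayres’s recommendations.

```python
# Minimal data-quality gate for AI training data (illustrative sketch).
# The "label" field and the 1%/0.5% thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class QualityReport:
    total: int
    missing_labels: int
    duplicate_rows: int

    @property
    def passes(self) -> bool:
        # Hypothetical policy: under 1% missing labels, under 0.5% duplicates.
        return (self.missing_labels / self.total < 0.01
                and self.duplicate_rows / self.total < 0.005)

def audit(records: list[dict]) -> QualityReport:
    """Check a batch of training records against minimum quality rules."""
    seen, duplicates, missing = set(), 0, 0
    for record in records:
        if not record.get("label"):      # missing or empty label
            missing += 1
        key = tuple(sorted(record.items()))
        if key in seen:                  # exact duplicate row
            duplicates += 1
        seen.add(key)
    return QualityReport(len(records), missing, duplicates)

batch = [{"text": "great product", "label": "positive"},
         {"text": "slow support", "label": ""},
         {"text": "great product", "label": "positive"}]
report = audit(batch)
print(report, "passes:", report.passes)
```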

How to Choose an AI Program That Works Best For Your Employees

As you develop your AI program, keep in mind that while AI can augment your organization in terms of speed and efficiency, it is not necessarily a substitute for human intelligence. 

While AI is good at analyzing data and recognizing patterns, it still has a tendency to miss important context that humans easily spot. This can have potentially devastating consequences if, for example, an AI makes a critical error when analyzing medical documentation. As such, you need to consider how to make your AI work with your human employees in the most effective way possible. 

According to experts from Boston Consulting Group, writing in Fortune, organizations can do this by following these principles:

  • Know your options in terms of how you can combine humans with AI: Depending on your organization’s unique needs, do you need your AI to act as an illuminator, recommender, decider, or automator? Knowing the difference can help you choose the best AI system for your organization, whether it’s an AI that can make predictions or one that can help you automate operations remotely. 
  • Create a decision tree: A decision tree lays out, in sequence, the questions you will ask to clarify your objectives (goals), context (resources in terms of data), and outcomes (results in terms of deploying AI vs. employees). This will help you determine what type of AI system (illuminator, recommender, decider, or automator) you need (see the sketch after this list).
  • Continuously assess and revise your human-AI combinations: Your needs for an AI program may evolve over time and, as such, so will its relationship to your employees. For this reason, it’s important to return to the decision tree occasionally to determine whether you need to revise your model.
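
As a rough illustration of how such a decision tree might be encoded, here is a short sketch. The four roles come from the BCG framework above; the questions and the mapping are simplified assumptions, not BCG’s actual methodology.

```python
# Illustrative decision tree for choosing a human-AI combination.
# The four roles (illuminator, recommender, decider, automator) come from
# the BCG framework cited above; the questions are simplified assumptions.

def choose_ai_role(ai_recommends: bool, ai_decides: bool,
                   ai_executes: bool) -> str:
    """Map answers about decision authority onto one of the four roles."""
    if ai_decides and ai_executes:
        return "automator"    # AI decides and acts end to end
    if ai_decides:
        return "decider"      # AI decides; humans carry out the action
    if ai_recommends:
        return "recommender"  # AI proposes options; humans decide
    return "illuminator"      # AI only surfaces patterns and insight

# Hypothetical example: fraud screening where AI ranks cases for analysts.
print(choose_ai_role(ai_recommends=True, ai_decides=False, ai_executes=False))
# -> recommender
```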

Knowing how to manage your organization’s data and determining the right AI program are important steps. However, you also need to ensure that your employees are equipped to work with this increasingly complex technology. 

Bringing Ethics to the Forefront at Your Organization

An online five-course program, AI Standards: Roadmap for Ethical and Responsible Digital Environments, provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems. 

Contact an IEEE Content Specialist to learn more about how this program can help your organization create responsible artificial intelligence systems.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Arvind. (18 May 2022). Why Artificial Intelligence Creates an Unprecedented Era of Opportunity in the Near Future. Inc. 

Candelon, Francois; Ding, Bowen; Gombeaud, Matthieu. (6 May 2022). Getting the balance right: 3 keys to perfecting the human-A.I. combination for your business. Fortune.

Ayres, Andrew P. (29 April 2022). Don’t Fear Artificial Intelligence; Embrace it Through Data Governance. CIO.

Ethical AI Standards Framework

A 2019 survey from Gartner found that 37% of businesses and organizations employ artificial intelligence (AI), DataProt reported. However, few organizations are taking steps to mitigate the risks associated with AI systems, such as their propensity for bias and privacy infringement. A 2021 PwC research report found that just 20% of enterprises had instituted an AI ethics framework, while only 35% intended to enhance their AI governance and processes. With governments increasingly moving towards passing AI regulations, the timeframe for organizations to develop ethical AI standards is getting shorter.

During an interview with Analytics India Magazine, Satyakam Mohanty, Chief Product Officer at Fosfor by L&T Infotech, a global technology consulting and digital solutions company, said responsible AI is the only way for organizations to reduce potential risks associated with the technology.

“The great AI debate opens various facets of ethics, but without a common agreement and agreed standard, its impact and repercussions on the way organizations operate is not quantifiable,” Mohanty told the magazine. “Fairness and explainability can be managed and scaled by introducing data bias mitigation practices and algorithmic bias mitigation processes and ensuring higher standard explainability frameworks into the implementations and decision-making process. By utilizing ethics as a key decision-making tool, AI-driven companies save time and money in the long run by building robust and innovative solutions from the start.”

How to Develop an AI Standards Framework

How can your organization begin building a successful AI standards framework? Writing in Harvard Business Review, AI ethics experts Reid Blackman and Beena Ammanath recommend that organizations start by putting together a team of senior-level experts that encompasses, at minimum, technologists, legal/compliance experts, ethicists, and business leaders who understand what the organization needs to achieve in terms of ethical AI.

Once you have a team in place, they recommend taking these steps:

  1. Identify your organization’s AI ethical standard:
    What is the minimum ethical standard your organization is willing to meet in terms of AI? If your AI system is discriminatory towards a certain group but is still far less discriminatory than traditional human-run systems, will your organization consider that an acceptable benchmark? This is similar to the dilemma autonomous vehicle manufacturers face. For example, if autonomous vehicles occasionally kill passengers and pedestrians but at a lower rate than traditional vehicles, should those vehicles be considered safe? Although these are difficult questions to grapple with, asking them will help your organization set the right frameworks and guidelines to ensure ethical product development (see the sketch after this list for one way to quantify such a benchmark).
  2. Determine the “gaps” between where your organization currently stands and where your standards require it to be:
    While there may be plenty of technical solutions to your AI ethics dilemma, none alone is likely to reduce the risks substantially enough to safeguard your organization. As such, your AI ethics team will need to ask: What are its skill and knowledge limitations? What risks is it trying to reduce? Where can software and quantitative analysis help, and where can’t they? What qualitative assessments are needed? And how mature does the technology need to be to meet ethics expectations?
  3. Gain insight into what’s behind the bias in your AI and then strategize solutions:
    While it’s generally true that biased AI systems reflect biased training data and/or societal bias, the real problem is more complex. You need to trace the sources of discriminatory outputs and the potential biases behind them; knowing where bias originates will help you choose the best strategy for reducing it.
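
To make the benchmark question in step 1 concrete, here is a minimal sketch that compares an AI system’s selection-rate gap across groups against a human-run baseline. All figures, the group names, and the halve-the-disparity policy are hypothetical assumptions, not recommendations from Blackman and Ammanath.

```python
# Illustrative sketch: quantifying the "acceptable benchmark" question by
# comparing group selection-rate gaps (a simple demographic parity check).
# All figures and the acceptance policy below are hypothetical.

def selection_rate_gap(outcomes: dict[str, tuple[int, int]]) -> float:
    """outcomes maps group -> (approved, total). Returns the difference
    between the highest and lowest approval rates across groups."""
    rates = [approved / total for approved, total in outcomes.values()]
    return max(rates) - min(rates)

human_baseline = {"group_a": (420, 1000), "group_b": (310, 1000)}
ai_system = {"group_a": (415, 1000), "group_b": (370, 1000)}

human_gap = selection_rate_gap(human_baseline)  # 0.110
ai_gap = selection_rate_gap(ai_system)          # 0.045

# Hypothetical standard: the AI must at least halve the human disparity.
print(f"human gap={human_gap:.3f}, AI gap={ai_gap:.3f}, "
      f"meets standard: {ai_gap <= human_gap / 2}")
```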

Implementing artificial intelligence standards at your organization will take time, but the risk reduction they provide will be well worth the effort. Does your organization have the right knowledge and skills necessary to build an effective AI standards roadmap? 

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across various industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to keep in mind rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Krishna, Sri. (29 March 2022). Talking Ethical AI with Fosfor’s Satyakam Mohanty. Analytics India Magazine. 

Blackman, Reid; Ammanath, Beena. (21 March 2022). Ethics and AI: 3 Conversations Companies Need to Have. Harvard Business Review.

Jovanovic, Bojan. (8 March 2022). 55 Fascinating AI Statistics and Trends for 2022. DataProt.

Likens, Scott; Shehab, Michael; Rao, Anand. AI Predictions 2021. PwC Research.

Organizations are increasingly adopting artificial intelligence (AI) standards to mitigate risks associated with the technology, such as its propensity for bias. While developing AI standards is necessary, they also need to be upheld in order to be effective. To do so, organizations can consider establishing a body of experts charged with overseeing AI standards and ethics. 

In general, institutional review boards (IRBs) ensure organizations are upholding their basic ethical principles by authorizing, rejecting, and recommending changes to research projects and products. In the United States, these governing bodies have proven effective at reducing ethical risks in the field of medicine. IRBs can provide similar oversight for organizations involved in artificial intelligence. 

When establishing an IRB for your organization, there are three main issues to consider, according to Harvard Business Review.

Who Should Sit on the Board?

Your IRB should include a diverse group of experts capable of systematically pinpointing and reducing ethical risks in your AI applications. It should include: 

  • engineers and product designers who can explain the technology and its potential impact on users;
  • lawyers and security officers who are knowledgeable about current laws, regulations, and privacy standards;
  • experts who specialize in ethics;
  • subject matter experts from various backgrounds who specialize in the application at hand (for example, a doctor’s oversight could be helpful for AI applications used in hospitals);
  • and at least one expert who is not affiliated with your organization in order to bring a sense of objectivity to the committee’s decision making.

What Jurisdiction Should the IRB Hold?

When it comes to artificial intelligence applications, try to consult institutional review boards as early as possible, preferably even before research or product development begins. After all, it’s a lot easier and cheaper to make alterations to a project before you start working on it. You wouldn’t want to invest time and money in a project that turns out to be a major ethical risk.

You also need to determine how much authority your IRB will possess. In the medical field, IRBs are given ultimate authority—once an IRB rejects a proposal, it won’t be reconsidered, and if the IRB proposes changes, the revisions must be made. You’ll need to decide if your IRB has this much power, or if, for example, you want to put an appeals process in place. However, you should keep in mind that the more authority your IRB has, the more effective it is likely to be at reducing risk. 

What Are the Values That Will Guide Your IRB?

Developing a core set of values for your IRB will be relatively easy. The more difficult task is instituting mechanisms that prevent these values from being twisted or interpreted too loosely.

In the medical field, more than just principles guide decisions. For example, medical IRBs typically compare cases to ones decided upon in the past, which allows IRBs to stay consistent in how they apply principles. 

Similarly, institutional review boards charged with AI oversight can look to previous cases to apply their principles consistently. Let’s say, for example, that your IRB declined to approve a contract with a particular country due to ethical risks related to how that government functions. It could apply the reasoning behind that decision to similar cases in the future. Additionally, if a case is unprecedented, an IRB can use fictionalized scenarios to help it determine how its principles should apply.

Setting up an IRB in your organization will help you create a ground-up approach to AI oversight. Additionally, it will build trust among your employees and customers, and make your organization more competitive in an environment where concern over AI is higher than ever. 

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across various industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to keep in mind rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Blackman, Reid. (1 April 2021). If Your Company Uses AI, It Needs an Institutional Review Board. Harvard Business Review. 

Artificial Intelligence Standards

As artificial intelligence (AI) systems are built on larger and larger quantities of data, the potential threat they can pose to the public also grows. For example, automated systems could be equipped with biased algorithms that discriminate against women and minority groups. Additionally, AI-based software, such as facial recognition technology, can jeopardize the privacy of millions of people. To get ahead of the problem, governments are beginning to regulate rapidly advancing AI, and organizations that develop these systems will eventually be required to comply.

In Europe, a law known as the General Data Protection Regulation (GDPR) oversees data privacy for EU citizens and residents. While the United States has yet to pass specific AI regulations, the country is expected to begin rolling out state and federal regulations in the coming years. One example is the Algorithmic Accountability Act, which would mandate that organizations examine and repair potentially harmful flaws in computer algorithms. Similar to the GDPR, the Algorithmic Accountability Act would make impact assessments mandatory for high-risk automated decision and information systems.

Additionally, many businesses are taking steps to establish their own AI standards to ensure their systems are ethical and safe, as well as to protect them from potential liability. 

Here are four expert-recommended principles organizations can consider for their AI standards.

What Should AI Standards Include?

Transparency: Huma Abidi, senior director of AI software products at Intel, recommends that AI developers define and create clear, quantifiable standards and processes that can be measured in terms of quality and robustness. She told VentureBeat that ethical AI systems should be “fair, transparent, [and] explainable.”

One example is the paper “Datasheets for Datasets,” which proposes a standardized process for documenting machine learning datasets. It states that “every dataset [should] be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on.” Another example is a machine learning documentation project called “Model Cards for Model Reporting.” That paper explains: “Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.”
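
As a rough illustration of the kind of documentation these projects describe, here is a minimal model-card sketch. The fields loosely follow the sections discussed in “Model Cards for Model Reporting”; the example model and all values are hypothetical.

```python
# Minimal model-card sketch in the spirit of "Model Cards for Model
# Reporting." Fields loosely follow the paper's sections; the example
# model and all values are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    intended_use: str             # context the model is meant for
    out_of_scope_uses: list[str]  # uses the developers advise against
    evaluation_procedure: str     # how performance was measured
    metrics: dict[str, float]     # headline evaluation results
    ethical_considerations: str

card = ModelCard(
    name="sentiment-classifier-v2",
    intended_use="Routing customer-support tickets by sentiment.",
    out_of_scope_uses=["employment screening", "credit decisions"],
    evaluation_procedure="Accuracy on a held-out test set, per language.",
    metrics={"accuracy_en": 0.91, "accuracy_es": 0.84},
    ethical_considerations="Accuracy varies by language; monitor the gap.",
)
print(card)
```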

According to Abidi, these “basic principles” should be built into workflows.

“My point is that like any other software product, you want to make sure it’s robust and all that, but for AI, you especially—besides having standards and processes—you need to add these additional things,” she told VentureBeat.

A cautious, iterative approach to AI development: According to Rashida Hodge, VP of North America Go-to-Market, Global Markets, at IBM, businesses should develop cautious, iterative approaches to AI development. The process should include a lifecycle that brings organizations back to their models regularly as the data evolves, so they can tailor their AI models to any changes as necessary.

“Just like how we as humans process information and process nuance, as we read more information, as we go visit a different place, we have different perspectives. And we bring nuance to how we make decisions; we should look at AI applications in the exact same way,” Hodge told VentureBeat.
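
One simple way to support that kind of lifecycle is an automated check that flags when live data has drifted away from the training data, prompting a review. The sketch below uses a basic mean-shift test; the threshold and all figures are arbitrary assumptions, not IBM’s methodology.

```python
# Illustrative drift check for an iterative AI lifecycle: flag when a live
# feature's distribution drifts from the training baseline, signaling that
# the model may need revisiting. The 0.25 threshold is an assumption.
import statistics

def needs_review(baseline: list[float], live: list[float],
                 threshold: float = 0.25) -> bool:
    """Flag review when the live mean drifts more than `threshold`
    baseline standard deviations away from the training mean."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    shift = abs(statistics.mean(live) - base_mean) / base_std
    return shift > threshold

training_scores = [0.62, 0.58, 0.65, 0.60, 0.63, 0.59, 0.61]
recent_scores = [0.71, 0.74, 0.69, 0.73, 0.72, 0.70, 0.75]

print("revisit model:", needs_review(training_scores, recent_scores))
```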

Oversight: Organizations should avoid siloing their analytics teams, which can inadvertently lead to “analytic city states” that make streamlining technology and ideas challenging. Scott Zoldi, Chief Analytics Officer at FICO, recommends that organizations appoint a single chief analytics officer responsible for creating and enforcing organizational standards.

“You can safely build more houses when you don’t have to draft a new building code for every house. Likewise, you shouldn’t have to worry about rolling the dice as to which artist will be building your model,” Zoldi wrote in Enterprise AI News.

Professionalization: It’s important that everyone in an organization is aware of how their job impacts AI development, even if their role is small. As discussed in a previous post, “tactics of professionalization” is one way organizations can standardize AI development broadly across their enterprises. According to these principles, AI developers should set up committed multidisciplinary teams, train all their employees, and clearly define who within the organization is accountable for the consequences of their AI systems.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across various industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to keep in mind rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Colaner, Seth. (3 January 2020). Evolve: Operationalizing diversity, equity, and inclusion in your AI projects. VentureBeat.

Brumfield, Cynthia. (8 December 2020). New AI privacy, security regulations likely coming with pending federal, state bills. CSO.

Lucini, Fernando. (24 September 2020). Getting AI results by “going pro.” Accenture Research Report.

Zoldi, Scott. (6 November 2020). It’s Time To Set Industry Standards for AI. Enterprise AI.