Four AI Standards Lessons To Consider

When it comes to designing ethical artificial intelligence (AI) systems, developers usually have the best intentions. Problems often arise, however, when developers fail to follow through on those intentions, a phenomenon dubbed the “intention-action gap.”

To avoid this, a new report from the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University, titled “Responsible Use of Technology: The Microsoft Case Study,” recommends developers follow the lessons listed below.

AI Standards Lessons

  1. Before you can innovate responsibly, you must transform your organization’s culture:
    To innovate ethically, you need a company culture that encourages introspection and learning from mistakes. Microsoft, for example, adopted what it calls a “hub-and-spoke” model to embed security, privacy, and accessibility into all of its products. The “hub” consists of three internal governance groups: the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee; the Office of Responsible AI (ORA); and the Responsible AI Strategy in Engineering (RAISE) group. The “spokes” carry that guidance out to the various departments that influence product development. Additionally, Microsoft launched the Responsible AI Standard, a series of steps that internal teams must follow to support the creation of responsible AI systems.
  2. Use tools and methods that make ethics implementation simple:
    With the right technical tools, it becomes easier to integrate your ethics model into the many facets of your organization. Microsoft uses several technical tools, such as Fairlearn, InterpretML, and Error Analysis, to implement ethics. Fairlearn, for example, lets data scientists assess and improve the fairness of machine learning models (a minimal sketch follows this list). Each tool offers dashboards that make it easier for workers to visualize model behavior. Alongside these tools, methods such as checklists, role-playing exercises, and stakeholder engagement help teams understand the possible consequences of their products and build empathy for how underrepresented stakeholders might be affected.
  3. Create employee accountability by measuring impact:
    Make sure your employees are aligned with your company’s ethical values by evaluating their performance against your ethics principles. To do this, Microsoft team members meet with managers for twice-yearly performance evaluations and goal-setting sessions to establish personal goals in line with those of the company.
  4. Inclusive products are superior products:
    By innovating responsibly throughout the lifecycle of a product, companies will build better, more inclusive products. They can do this by creating principles for AI toolkits that set expectations from the outset of product development.
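
To make lesson two concrete, below is a minimal sketch of the kind of group-level fairness check Fairlearn supports. The Fairlearn calls (MetricFrame and demographic_parity_difference) are real library APIs; the synthetic dataset, logistic regression model, and group labels are illustrative assumptions, not drawn from the report.

```python
# A minimal sketch of a Fairlearn fairness check. Assumes the
# fairlearn and scikit-learn packages are installed; the data,
# model, and group labels below are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Hypothetical data: features, labels, and a sensitive attribute.
X, y = make_classification(n_samples=1000, random_state=0)
sensitive = np.random.default_rng(0).choice(["groupA", "groupB"], size=1000)

model = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = model.predict(X)

# Break a metric down by group to surface performance gaps.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(frame.by_group)      # accuracy for each group
print(frame.difference())  # largest accuracy gap between groups

# A common group-fairness measure: demographic parity difference.
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```

In practice, a team would run a check like this on a held-out test set and review the per-group results, alongside Fairlearn’s dashboard views, as part of the product review process.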

New Healthcare Industry AI Standard Considers Three Areas of Trust

A working group of 64 organizations, convened by the Consumer Technology Association (CTA), recently created a new standard that identifies the basic requirements for establishing reliable AI solutions in healthcare. Healthcare organizations involved in the project include AdvaMed, America’s Health Insurance Plans, Ginger, Philips, 98point6, and ResMed.

The standard, released in February 2021 and accredited by the American National Standards Institute, considers three ways to create trustworthy and sustainable AI healthcare solutions:

  • Human trust: Consider the way humans interact and how they will interpret the AI solution.
  • Technical trust: Address data use, such as data access, privacy, quality, integrity, and issues around bias. Additionally, technical trust considers the technical execution and training of an AI design to provide predictable results.
  • Regulatory trust: Ensure compliance with regulatory agencies, federal and state laws, accreditation boards, and global standardization frameworks.

Developing standards for AI applications is difficult, but necessary. By having a plan that integrates ethics throughout your organization, you can better ensure your AI systems are reliable and safe.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to apply rigorous ethical standards designed to protect end users. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instruction in a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!

Resources

Green, Brian and Lim, Daniel. (25 February 2021). 4 lessons on designing responsible, ethical tech: Microsoft case study. World Economic Forum.

Landi, Heather. (18 February 2021). AHIP, tech companies create new healthcare AI standard as industry aims to provide more guardrails. Fierce Healthcare.
