European Union Considering “De Facto Global Standard” for AI


The European Union (EU) has proposed the first comprehensive legislative package to regulate artificial intelligence (AI). As The National Law Review reports, the Artificial Intelligence Act (AIA) will “establish a risk-based framework for regulating use of AI anywhere within the EU, including by organizations based outside the EU.” If passed, the law “establishes a de facto global standard for AI” and would:

  • ban a limited number of “unacceptable AI use cases”;
  • subject “high-risk use cases” to “prior conformity assessment and wide-ranging new compliance obligations”;
  • subject medium-risk functions to “enhanced transparency rules”;
  • allow low-risk use cases to “largely be pursued without any new obligations under the AIA.”

The AIA would join the EU’s General Data Protection Regulation (GDPR) in placing stricter regulations on the tech industry. GDPR dictates how organizations must protect their customers’ privacy and hands greater control of personal data over to individuals. However, whereas GDPR pertains specifically to the regulation of data, the AIA addresses reducing the potential harms of AI.

“The EU Commission is proposing a draconian penalty regime in case of non-compliance with the AI Act, with fines potentially reaching 6% of global revenues for the most serious violations of the new AI regime,” states the National Law Review. “This eye-watering number should catch the attention of global boards and other business stakeholders in Europe and across the world.”

Ten Guiding Principles for Good Machine Learning Practice in Medical Device Development

Among the many uses of AI, its deployment in medical devices represents a particular concern, since these devices impact human health. To help regulate AI’s use in medical devices, the U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) teamed up to develop ten guiding principles for good machine learning practice in medical device development.

“These guiding principles will help promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML),” the FDA announced in October.

A summary of the guiding principles:

  1. Monitor deployed models for performance, and manage re-training risks.
  2. Leverage multi-disciplinary expertise throughout the total product life cycle.
  3. Implement good software engineering and security practices.
  4. Ensure clinical study participants and data sets are representative of the intended patient population.
  5. Keep training data sets independent of test sets.
  6. Base selected reference datasets on the best available methods.
  7. Tailor model design to the available data and reflect the intended use of the device.
  8. Focus on the performance of the human-AI team.
  9. Demonstrate device performance during clinically relevant conditions through testing.
  10. Provide users with clear, essential information.

As governments race to establish AI regulations and guidelines, it’s more important than ever for organizations to develop AI standards that align with these principles. Those who fail to do so may soon find themselves struggling to keep pace with increasingly strict requirements.

Establishing AI Standards for Your Organization

Artificial intelligence continues to spread across industries such as healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it’s vital to uphold rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides a comprehensive approach to creating ethical and responsible digital ecosystems.

Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.

Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!


References

Yu, Eileen. (31 October 2021). China’s Personal Data Protection Law Kicks in Today. ZDNet.

U.S. Food and Drug Administration. (27 October 2021). Good Machine Learning Practice for Medical Device Development: Guiding Principles.

Squire Patton Boggs (US) LLP. (22 October 2021). EU Proposed Regulatory Regime for Artificial Intelligence (AI) Could Set Global Standard. National Law Review, Volume XI, Number 295.


