Product development · 2024/12/03

Artificial Intelligence Act of the European Union (EU AI Act): Key Information and Important Points

The EU Artificial Intelligence Act (EU AI Act) is a groundbreaking regulatory framework introduced to govern the use of artificial intelligence (AI) in the European Union. As AI continues to transform industries and society, the EU recognized the need to ensure its development and use are ethical, transparent, and secure.

This article covers the essential aspects of the EU Artificial Intelligence Act, including its purpose, risk levels, obligations for high-risk systems, prohibited practices, and more. It serves as a comprehensive summary of the EU Artificial Intelligence Act for businesses, developers, and individuals looking to understand its implications.

Why is the European Union’s Artificial Intelligence Act Needed?

The EU Artificial Intelligence Act regulates how AI technologies are developed and used across Europe. AI’s influence has grown rapidly, and concerns have arisen about systems that infringe on fundamental rights or pose safety risks.

A key issue is the lack of transparency in AI decision-making. Often, it’s difficult to understand why an AI system made a particular decision, such as in a hiring process or an application for public benefits. This opacity, combined with potential bias, makes it hard to determine whether someone was treated unfairly.

While existing laws provide some protection, they aren’t sufficient to tackle the unique challenges AI systems pose. That’s why the EU AI Act introduces regulations to:
  • Address risks posed by AI and ban harmful practices;
  • Set stricter oversight and requirements for high-risk AI systems;
  • Ensure compliance through pre-market assessments and ongoing monitoring;
  • Establish governance at both EU and national levels.

The European Parliament’s Artificial Intelligence Act establishes a legal framework to manage these risks while encouraging innovation. The act aims to ensure consistency across the Union, balancing AI’s vast potential with the need for security and fairness.

EU AI Act: Risk Classification

The EU AI Act takes a risk-based approach, classifying AI systems by the potential harm they could cause. The regulation defines four risk levels so that stricter rules apply where they are most needed, particularly to high-risk systems.

Minimal-risk systems

Minimal-risk AI systems, such as basic chatbots or recommendation engines, are largely unregulated by the EU AI Act. These systems pose little to no threat and are free to operate without extensive oversight, allowing for accessible and less restrictive AI innovation.

Limited-risk systems

AI solutions in this category require lighter oversight. Although these systems don’t face the rigorous compliance duties of high-risk ones, they must remain transparent, and users should be aware they’re interacting with AI. Examples include in-house AI solutions for internal business processes and AI used for hardware performance monitoring.

High-risk systems

High-risk systems are those that significantly impact the safety and rights of individuals. They face the strictest obligations under the EU AI Act; high-risk use cases include areas such as education, employment, law enforcement, and critical infrastructure.

The EU AI Act introduces a risk assessment to help ensure that specific compliance requirements are met by AI-powered systems employed in:
  • Critical infrastructure, where failures could threaten public safety or citizens’ lives (such as transportation or energy);
  • Education and vocational training, where AI affects access to educational opportunities or career paths (e.g., AI used for exam scoring);
  • Product safety components, like AI-assisted technologies in medical procedures (robot-assisted surgeries);
  • Employment and workforce management, such as AI tools used in recruitment (e.g., CV-sorting software);
  • Essential services, both public and private (like AI-driven credit scoring that can affect loan approvals);
  • Law enforcement, where AI may influence individual rights (e.g., evaluating evidence reliability);
  • Migration and border control, with AI automating processes like visa application reviews;
  • Justice systems and democratic processes, such as AI tools used in legal research (for instance, finding court rulings).

Unacceptable-risk systems

AI systems that present an unacceptable risk are banned under the EU AI Act. These systems violate rights or compromise safety, threatening autonomy, privacy, or human dignity. The regulation classifies this as the highest risk category.
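
For engineering teams, this tiering often translates into a compliance inventory that tags each AI system with its risk level. Below is a minimal Python sketch of such an inventory; the tier names mirror the four categories above, while the example systems and the shipping check are illustrative assumptions, not terminology or tooling mandated by the act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative mapping of the EU AI Act's four risk tiers."""
    MINIMAL = "minimal"            # e.g., spam filters, recommenders
    LIMITED = "limited"            # transparency duties, e.g., chatbots
    HIGH = "high"                  # strict obligations, e.g., CV screening
    UNACCEPTABLE = "unacceptable"  # banned, e.g., social scoring

# Hypothetical inventory of AI systems, tagged for a compliance review.
inventory = {
    "product-recommender": RiskTier.MINIMAL,
    "support-chatbot": RiskTier.LIMITED,
    "cv-screening-model": RiskTier.HIGH,
}

for name, tier in inventory.items():
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{name}: prohibited practice, must not ship")
    print(f"{name}: {tier.value} risk")
```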

Obligations for High-Risk Systems

The good news is that everything below the “unacceptable risk” category can be implemented in some form. If developers and deployers put a high-risk AI system into use, they are required to follow the strict obligations set forth by the EU AI Act, including the conformity assessment discussed below.

The following requirements are in place to manage the act’s risk categories and make sure these technologies operate responsibly.

Data quality and governance

High-risk AI systems must use high-quality datasets that are representative and free from bias. The act demands that data be properly governed to ensure fairness, especially in sectors like healthcare or employment.
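
As a starting point, a team might run a simple representativeness check over its training data. The sketch below, using pandas, compares observed group proportions against expected population shares; the column name, groups, reference shares, and tolerance are assumptions made for the example.

```python
import pandas as pd

# Hypothetical training set with a protected-attribute column.
df = pd.DataFrame({"gender": ["f", "m", "m", "f", "m", "m", "m", "f"]})

# Assumed reference shares for the population the system will serve.
expected = {"f": 0.50, "m": 0.50}
tolerance = 0.10  # flag groups deviating by more than 10 points

observed = df["gender"].value_counts(normalize=True)
for group, share in expected.items():
    gap = abs(observed.get(group, 0.0) - share)
    status = "OK" if gap <= tolerance else "CHECK REPRESENTATION"
    print(f"{group}: observed={observed.get(group, 0.0):.2f} "
          f"expected={share:.2f} -> {status}")
```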

Data protection

High-risk AI solutions must operate under proactive data protection and cybersecurity measures, with their owners maintaining effective security controls and staying up to date and compliant with the latest data protection laws.

Documentation and transparency

Clear, concise documentation is required to explain how the AI system functions, the data it uses, and its intended outcomes. This documentation should facilitate audits and help demonstrate that the act’s requirements are met.
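
One lightweight way to keep such documentation auditable is to version a structured record alongside the code, in the spirit of a “model card”. The sketch below shows one possible structure; the fields and example values are assumptions, not a format prescribed by the act.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal, illustrative documentation record for an AI system."""
    name: str
    version: str
    intended_purpose: str
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="cv-screening-model",  # hypothetical high-risk system
    version="1.4.2",
    intended_purpose="Rank CVs for recruiter review, not final decisions",
    training_data_sources=["internal-hiring-records-2019-2023"],
    known_limitations=["Not validated for non-EU resume formats"],
)

# Serialize the record so it can be versioned and attached to audits.
print(json.dumps(asdict(card), indent=2))
```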

Overall explainability

Beyond formal documentation, system specifications, and technical instructions, an AI solution’s owner must be able to disclose the model’s logic, providing clear explanations of how the AI makes decisions in practice.
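
The act leaves the technique to the provider; one widely used, model-agnostic option is permutation importance, sketched below with scikit-learn. The toy dataset and model are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a high-risk model and its training data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much the score drops:
# features whose shuffling hurts most drive the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={score:.3f}")
```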

Human oversight

To prevent fully autonomous decision-making, high-risk systems must include mechanisms for human oversight. This enables human intervention in critical situations and supports ethical decision-making in sectors like law enforcement.
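
A common implementation pattern is a confidence gate that routes uncertain cases to a human reviewer instead of acting automatically. The threshold, case names, and review queue below are illustrative assumptions.

```python
AUTO_THRESHOLD = 0.90  # assumed confidence cutoff; tuned per domain

review_queue: list[str] = []  # decisions deferred to a human reviewer

def decide(case_id: str, confidence: float) -> str:
    """Auto-approve only confident cases; defer the rest to a human."""
    if confidence >= AUTO_THRESHOLD:
        return "auto-approved"
    review_queue.append(case_id)  # a person makes the final call
    return "sent to human review"

print(decide("loan-001", 0.97))  # auto-approved
print(decide("loan-002", 0.62))  # sent to human review
print("pending human review:", review_queue)
```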

Risk management

The EU AI Act mandates continuous risk assessment and mitigation planning throughout the entire lifecycle of high-risk AI systems, ensuring ongoing compliance with safety and ethical standards.

Independent approval

Any AI solution entering the EU’s software market for public use must pass a conformity assessment carried out by an independent third party, which certifies the AI software in accordance with the regulatory requirements.

Post-launch compliance

Once an AI system is on the market, the EU AI Act obliges its owners to keep monitoring the software, verifying its regulatory compliance at every point, making prompt adjustments, and reporting malfunctions and incidents.
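
A minimal sketch of what such monitoring can look like: compare the live input distribution against a training-time baseline and raise an alert when they diverge. The statistic, threshold, and synthetic data below are assumptions; production systems would typically rely on dedicated monitoring tooling and the operator’s real incident-reporting channel.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 1000)  # feature values seen in training
live = rng.normal(0.4, 1.0, 1000)      # values observed after launch

# Kolmogorov-Smirnov test: a small p-value suggests the live inputs
# have drifted away from the training distribution.
stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"ALERT: input drift detected (KS={stat:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```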

Prohibited Practices

The EU AI Act specifically bans certain harmful or manipulative AI practices to protect human rights and dignity. Regardless of the complexity of the AI you aim to develop and implement, or the regulatory category it falls under, these prohibited practices must be avoided.

Social scoring

The use of AI for social scoring, which evaluates people based on their behavior or attributes, is banned. The EU Artificial Intelligence Act prohibits such systems because they enable surveillance-driven control over individuals.

Manipulative AI

AI systems designed to manipulate behavior in ways that cause harm or distress—particularly to vulnerable populations, such as those affected by natural disasters or conflicts—are strictly forbidden by the EU AI Act.

Real-time biometric identification

Real-time biometric identification in public spaces, such as facial recognition, is heavily restricted by the EU Artificial Intelligence Act. Exceptions are allowed only in extreme cases, such as national security operations (e.g., identifying suspects of serious crimes).

Biometric categorization

The EU AI Act also prohibits systems that use biometric data to categorize individuals by political views, ethnicity, religious beliefs, or other sensitive attributes. This practice poses serious privacy risks and is deemed unethical.

Exploiting vulnerabilities

AI systems that exploit the vulnerabilities of specific groups, such as children or economically disadvantaged individuals, are banned under the EU Artificial Intelligence Act to prevent significant harm.

Untargeted scraping

Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases is prohibited under the EU AI Act.

Emotion recognition

AI systems designed to detect emotional states in environments like workplaces or schools are banned under the EU AI Act due to concerns over their reliability and potential misuse.

Conformity Assessment: Achieving AI Compliance

To ensure compliance with the latest regulations, developers of high-risk AI systems must undergo a conformity assessment before deployment, proving their systems meet the EU Artificial Intelligence Act standards. The following efforts can be employed to handle this process efficiently.

Ethical AI guidelines

Developers must follow ethical AI guidelines aligned with the EU AI Act’s requirements, embedding fairness, transparency, and accountability in their AI solutions.

Self-assessment

Internal audits are required to evaluate the system’s compliance with the EU AI Act’s requirements for its risk level, covering safety and transparency.

Third-party audits

For particularly sensitive applications, such as healthcare or law enforcement, independent third-party auditors must verify the system’s compliance with the EU AI Act.

Ongoing monitoring

Continuous monitoring is required to make sure the AI system maintains compliance with EU AI Act standards throughout its lifecycle.

Regulatory Sandboxes

To simplify the process, the EU AI Act encourages innovation by establishing regulatory sandboxes. These provide AI developers with a controlled environment where they can test their AI systems under regulatory supervision without the risk of penalties. The sandboxes are designed to promote the safe development of novel AI-based solutions.

All in all, they can be used to handle two major tasks:
  • Test new technologies for compliance
    Developers can use sandboxes to make sure new AI systems meet the EU Artificial Intelligence Act’s risk standards before widespread deployment.
  • Collaborate with regulators to maximize compliance
    By collaborating with regulators in sandbox environments, developers can refine their innovations while staying compliant with the European Parliament’s Artificial Intelligence Act.

Understanding the Implications of the EU AI Act

The EU Artificial Intelligence Act is here, and we must all take note, because it reshapes how AI can be used. With its strict risk levels and obligations for high-risk systems, the EU AI Act is meant to enforce truly responsible AI use.

But its main point is not to limit or stifle today’s AI companies. It’s about laying reliable ground for the development of unbiased, fair AI models focused on real user value. Once this ethics-driven approach is established and maintained, developers will gain far more freedom and confidence to create innovative, market-defining AI.

Andrew Demchenko
Head of Customer Success
