An overview of the new AI Act
What is the EU AI Act?
The EU AI Act is a regulation of the European Union that creates a uniform legal framework for artificial intelligence.
The EU AI Act (Regulation (EU) 2024/1689), adopted in 2024, is the world's first comprehensive AI regulation. Artificial intelligence plays a central role in more and more large companies and is increasingly becoming standard. It is therefore high time for a regulation that promotes trustworthy AI applications and at the same time protects companies and citizens from risky AI systems.
Do you need support in implementing the AI Act? Discover your solution:
Focus on innovation and fundamental rights
Insights into the new regulation
The EU AI Act establishes binding rules for AI developers and for companies that use AI in the EU. These rules follow a risk-based approach: AI systems are assigned to different categories depending on their hazard potential, from minimal risk to unacceptable risk. The regulation is structured accordingly: some AI applications may continue to be used freely, high-risk AI systems are subject to strict conditions, and unacceptable AI practices are prohibited.
The aim of the regulation is to ensure a trustworthy AI landscape. It should enable innovation while also ensuring a high level of safety, health protection and respect for fundamental rights. The regulation applies in all EU member states and takes effect gradually after a transitional period.
Objectives of the AI Act:
- Protect fundamental rights and democracy: Ensure that AI does not violate European values
- Promoting innovation: A framework that supports technological development
- Safety and transparency: Creating a trustworthy environment for the use and development of AI
- European leadership: Positioning Europe as a global leader in the regulation of AI
Timetable:
- August 2024: Entry into force
- February 2025: Phased implementation begins; prohibitions on unacceptable-risk AI practices apply
- August 2026: Full adoption and implementation at national level; Member States must designate their own authorities and adapt national legislation
A new path for the EU
Main elements of the AI Act
Categorization of AI systems according to risk:
The AI Act divides AI systems into four risk classes.
- Unacceptable risk: This includes applications such as social scoring, behavioral manipulation or emotional analysis in the workplace, as they endanger fundamental rights and the free will of people.
- High risk: Subject to strict regulations, especially in sensitive areas such as critical infrastructure, healthcare or law enforcement.
- Limited risk: This concerns chatbots, for example, which must be recognizably labeled as AI systems for users.
- Minimal risk: Not subject to regulatory requirements. This includes, for example, spam filters or other everyday applications with little impact on user rights and security.
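The four risk classes above can be sketched as a simple lookup. This is an illustrative sketch only: the use-case-to-class assignments below are simplified examples drawn from this article, and real classification requires legal analysis of the regulation's annexes, not a keyword lookup.

```python
from enum import Enum

class RiskClass(Enum):
    """The AI Act's four risk tiers, paired with their regulatory consequence."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical example mapping of use cases to risk classes,
# based on the examples named in the text above.
EXAMPLE_USE_CASES = {
    "social scoring": RiskClass.UNACCEPTABLE,
    "emotion recognition at the workplace": RiskClass.UNACCEPTABLE,
    "law enforcement": RiskClass.HIGH,
    "critical infrastructure": RiskClass.HIGH,
    "customer service chatbot": RiskClass.LIMITED,
    "spam filter": RiskClass.MINIMAL,
}

def classify(use_case: str) -> RiskClass:
    """Return the illustrative risk class for a use case.

    Unknown use cases default to MINIMAL here purely for the sketch;
    in practice an unclassified system would need a legal assessment.
    """
    return EXAMPLE_USE_CASES.get(use_case, RiskClass.MINIMAL)
```

For example, `classify("social scoring")` yields `RiskClass.UNACCEPTABLE`, i.e. a prohibited practice, while `classify("spam filter")` falls into the minimal-risk tier with no specific obligations.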
Prohibited applications:
The AI Act prohibits certain AI applications that jeopardize fundamental rights. These include applications that pose an unacceptable risk, such as the manipulation of behavior, social scoring based on personal characteristics, and emotion recognition in sensitive areas such as education or the workplace. Untargeted facial recognition used to create large databases without consent is also prohibited. In this way, the personal rights of the population are to be safeguarded and an ethical approach to AI ensured.
Corporate Duties and Responsibilities
How to comply with the EU AI Act?
Violations of the AI Act are punishable by fines of up to 35 million euros or 7% of a company’s global annual turnover, whichever is higher. These high penalties are intended to ensure that the requirements are taken seriously and consistently complied with. Accordingly, AI compliance is a high priority for companies to avoid legal and financial risks.
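The upper bound of this penalty for the most serious violations can be computed directly, as the following sketch shows. This is an illustration of the arithmetic only, not legal advice; actual fines are set by the competent authorities and depend on the type of violation.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for the most serious AI Act violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher.
    Illustrative sketch only."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For a company with 1 billion euros in global annual turnover, the turnover-based cap (70 million euros) exceeds the fixed 35 million euro amount, so the higher figure applies.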
Companies that develop, or even merely use, AI systems must therefore ensure early on that they meet the requirements of the risk class to which each of their AI systems belongs.
Obligations for high-risk AI systems:
High-risk AI systems are subject to strict transparency and security requirements. Companies must implement risk management processes, conduct conformity assessments, and provide comprehensive technical documentation. Furthermore, continuous monitoring is required throughout the entire lifecycle — from development to deployment — to detect and minimize risks at an early stage.
AI with a general purpose (general purpose AI, GPAI):
The AI Act provides special requirements for general purpose AI models, especially if they pose a systemic risk. These include foundation models such as those behind ChatGPT, which can be used in a wide variety of ways. Such models are subject to additional transparency obligations and technical security standards to ensure their safe use.
Governance and enforcement:
A central EU authority, the AI Office, will be established to monitor compliance with the AI Act. It will oversee the implementation of the regulations and ensure that AI systems meet all legal requirements.
Additionally, an AI Board will serve as an advisory body and facilitate coordination between the European Union and its member states.
The AI Act also applies to providers based outside the EU if their AI systems are used within the EU. Outsourcing development to non-European countries does not exempt companies from compliance.
To manage these obligations effectively, companies should appoint internal AI compliance officers responsible for coordinating and overseeing implementation.
Safeguarding fundamental rights, building trust
Why does the EU AI Act exist?
Anyone who has ever discovered a fake profile of themselves on social media and then had to realize that their entire network was being harassed with spam and phishing messages in their name knows how important privacy and personal rights are.
Discrimination or invasions of privacy and human dignity can become easier with the help of AI systems. These are key risks and challenges of AI use to which the EU responded with the AI Act to protect fundamental values. AI applications must therefore be in line with the EU’s fundamental rights and ethical values.
Threats to fundamental rights:
Fundamental rights can be infringed in various ways — one prominent example is social scoring. In this practice, the behavior and characteristics of individuals or organizations are evaluated based on multiple criteria and converted into a score. Depending on the outcome, individuals may face advantages or disadvantages, such as loan denials or restricted access to services.
Social scoring becomes particularly problematic when used for systematic surveillance, leading to social or economic consequences for individuals. This violates core rights such as equal treatment and informational self-determination — which is why such AI applications are explicitly prohibited.
Creating trust:
When decisions are made with the help of AI systems, they must be comprehensible. The EU AI Act therefore aims to ensure that users must always know when they are interacting with an AI. If a decision is made based on this interaction that is not comprehensible to the user, they have the right to information and a right of appeal under the new regulations.
Standardization:
The EU AI Act as a uniform solution for Europe was long overdue, as many individual regulations already existed in different countries. This patchwork of national laws caused confusion and uncertainty in the internal market. The EU AI Act creates a level playing field across the EU. It also creates an incentive for companies to develop trustworthy AI themselves.
Future orientation:
Europe aims to become a pioneer in the development and use of ethically justifiable AI and to control risky applications through regulations.
Strict requirements vs. freedom to innovate
Challenges in the creation of the AI Act
The development of the EU AI Act was challenging and characterized by political, economic and technical hurdles. While consumer and civil rights organizations called for strict rules to protect fundamental rights, industry representatives warned of obstacles to innovation due to overly strict requirements.
The result is a compromise, as the former German Minister for Digital Affairs Volker Wissing notes: “I would have liked to see more innovation-friendly regulation. But in the end, it has to be a compromise, which is better than no regulation.” (Source: BMDV)
The challenges and their solutions:
- Balance between innovation & regulation: strict AI rules to protect rights without stifling innovation.
- Economic challenge: New obligations mean costs, up to several thousand euros per high-risk AI system.
- Relief for companies: Exemptions for small providers, AI regulatory sandboxes, standardization to reduce bureaucracy.
- Uniform legal basis: Regulation instead of directive to ensure clarity and avoid double regulation.
- Technical challenge: The dynamic nature of AI requires clear but flexible definitions.
- Definition of high-risk AI: Strict requirements for systems with a significant impact on security & fundamental rights.
- Adaptation to technical developments: List of high-risk applications can be updated by legal act.
- Enforcement & supervision: National AI authorities, coordinated by a European AI committee.
Innovation between regulation and competition
Take the next step with us
The EU AI Act sets new benchmarks for the responsible use of artificial intelligence, introducing clear rules for both companies and users. While it imposes new compliance obligations, it also presents an opportunity to gain a competitive edge and build trust through early adoption.
Non-compliance can lead to significant fines and reputational damage — making it essential to address the requirements promptly. Some provisions will already come into effect before 2026, and implementing the necessary internal processes takes time.
The AI Act paves the way for safe, transparent, and human-centric AI. To minimize regulatory risks and foster sustainable innovation, companies should embed risk analysis, transparency, and quality assurance into their AI initiatives. Responsible AI is no longer optional — it is a legal requirement and serves the best interest of all.
Prepare your company for the EU AI Act at an early stage! Our AI Compliance Solution supports you with risk analysis, documentation and the implementation of legal requirements. Arrange a non-binding consultation now and secure your competitive advantage.
Whitepaper
Implementation of the EU AI Act in Practice
Read our white paper on the EU AI Act to learn how companies can successfully implement the new requirements. Find out about the risks involved and the obligations that apply to high-risk AI, and discover how digital compliance solutions can pave the way to legal certainty.
Contact
Get in touch with us
Do you have questions, need more information, or are you interested in our compliance software solutions? Please use our contact form; we look forward to your inquiry.