SER Blog  Information Governance

AI Act: Regulatory and legal frameworks for artificial intelligence

Artificial intelligence has seamlessly woven itself into the fabric of our daily lives, subtly enhancing everything from routine chores to significant decisions. Recognizing the need for oversight, the European Union has introduced the world’s first legislation tailored specifically to AI—the AI Act. This move aims to neatly categorize AI applications by risk, ensuring that tech giants and startups alike play by the rules designed to safeguard public interest.

In this article, we'll break down the essentials of the AI Act, clarifying what businesses need to know and the potential opportunities it unfolds.

What is the EU’s AI Act?

With the AI Act, the European Union has created a set of EU-wide harmonized rules for artificial intelligence. The aim is to strengthen the internal market and standardize the regulatory and legal aspects of how we deal with AI: specifically, how we develop, market, deploy and use AI systems.

The AI Act describes artificial intelligence in Recital 4 as “a fast-evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities.”

Background to the AI Act

The AI Act is fundamentally designed to bolster trust in AI technologies over the long term. Anchored by the values enshrined in the EU Charter, the intent of the legislation is not to ban or restrict artificial intelligence but to foster the deployment of trustworthy AI systems across all sectors.

Consider industries like food safety, infrastructure management, public services, transport, and logistics—all are leveraging artificial intelligence to enhance societal outcomes. The AI Act aims to safeguard the rights of individuals and companies, as well as protect democracy, the rule of law, and the environment, while capitalizing on the advantages of cutting-edge technology. After all, AI has such incredible potential, capable of outcomes like:

  • Better data analysis and predictions.
  • Reduced employee workloads.
  • Optimized processes.
  • Personalized digital solutions.
  • Efficient resource allocation.
  • Improved competitiveness.
  • Stronger general and professional training.

Despite the regulatory framework introduced by the AI Act, integrating artificial intelligence into your business processes remains highly advantageous. The Act specifically targets minimizing health and safety risks associated with AI and fortifying human rights protections, not limiting potential. Now is still very much the time to get on board.

Effective date and transitional rules

The AI Act is relevant to your business in the following ways:

  • On May 21, 2024, the Council of the 27 EU member states officially adopted the AI Act.
  • On July 12, 2024, the EU published the AI Act in the Official Journal of the European Union.
  • The AI Act came into force on August 1, 2024.

Transitional rules apply for the next two years, giving businesses and EU member states time to prepare for full application of the AI Act. Certain high-risk AI systems, however, benefit from an extended transitional period of 36 months.

For businesses, this timeline means that after the transitional periods end, any AI systems that don't comply with the AI Act cannot be marketed or used. Yet, these periods also provide ample time to adapt and ensure that your AI systems meet the new legal requirements.


Who is affected by the AI Act?

If your business develops, uses or sells artificial intelligence (AI), you need to follow the rules of the AI Act. This EU law sorts AI systems into four risk levels, and the level of risk determines the specific rules you must follow. Essentially, the higher the risk, the more regulations you need to comply with.

1. AI systems with minimal risk

If you utilize AI systems that pose minimal risk, you can rest easy—there's no need for any action on your part. This category includes systems like AI-based spellcheckers, search algorithms, AI in video games or spam filters.

2. AI systems with limited risk

The AI Act categorizes AI systems like digital voice assistants (e.g., Doxi), chatbots, or deepfakes as having limited risk. If your business employs these technologies, you're obligated to disclose their use to users. This means informing people when they are interacting with an AI system. Additionally, it's important to be aware of the specific labeling requirements that apply to AI-generated media, such as audio recordings.

3. AI systems with high risk

AI systems that could adversely affect fundamental rights are classified as high risk. This category includes biometric facial recognition, but also artificial intelligence built into HR management or credit-checking tools. The AI Act therefore sees a high risk, for example, when AI decides whether a person is a suitable candidate for a vacant position or whether to grant a loan. AI systems in the fields of healthcare, education or critical infrastructure also fall into the high-risk category.

In this case, using artificial intelligence is a compliance issue. The law sets out the following requirements:

  • Assess the extent to which the AI system could violate fundamental rights.
  • Train staff in the use of AI models.
  • Store logs automatically created by the AI system in a legally compliant manner.

4. AI systems with unacceptable risk

AI applications and practices that pose unacceptable risks are prohibited. Article 5 of the AI Act provides an overview. Prohibited AI practices include:

  • AI systems that use manipulative and deceptive techniques to alter people's behavior in order to influence their decision-making.
  • AI systems that predict whether a person might commit a crime based solely on profiling.
  • AI systems that extend facial recognition databases by scanning facial images from the internet or from surveillance footage.
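The four tiers above can be summarized as a simple lookup. The Python sketch below is purely illustrative: the tier assignments mirror the examples in this article, not an official classification, and real systems always require a case-by-case legal assessment.

```python
# Illustrative mapping of example AI use cases to the AI Act's four risk
# tiers, based on the examples discussed above. Not legal advice: real
# classification requires a case-by-case assessment of each system.
RISK_TIERS = {
    "minimal": ["spellchecker", "spam filter", "video game AI", "search algorithm"],
    "limited": ["chatbot", "voice assistant", "deepfake"],
    "high": ["biometric facial recognition", "credit scoring", "HR screening"],
    "unacceptable": [
        "manipulative behavior alteration",
        "crime prediction by profiling",
        "facial image scraping",
    ],
}

def classify(use_case: str) -> str:
    """Return the illustrative risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    # Anything not on the example list needs an individual assessment.
    return "unknown - assess individually"

print(classify("chatbot"))         # limited
print(classify("credit scoring"))  # high
```

The higher the returned tier, the more obligations apply, from none (minimal) to disclosure duties (limited), conformity assessments (high) and an outright ban (unacceptable).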

What businesses have to do today

The AI Act affects every business. Even if you are not currently using AI, AI models will become an integral part of the workplace in the future. For businesses that are already using AI, it’s doubly worthwhile to get a head start on dealing with the AI Act.

You can do this today as part of your quality management program:

Assess AI already in use

As a first step, look at the AI models that you’re already using. If you use high-risk AI systems according to Annex III, you are required to carry out the conformity assessment procedure. The AI Act provides two ways to assess quality management systems and technical documentation:

  1. Internal controls
  2. External assessment by notified bodies

Assessing AI systems based on their risk level is beneficial even if a formal conformity assessment isn't mandatory for your systems. It's always good to be proactive about these things, as it allows you to mitigate or even eliminate risks associated with AI technologies.

Develop AI policies

In the second step, you'll need to define internal company guidelines for using AI based on the AI Act. AI policies regulate how you intend to use AI tools. Which tools are allowed? Which use cases can the tool cover? How do you deal with the issue of data protection? The AI policy answers such questions clearly and transparently. We at SER have a similar AI policy in place.

The benefit: on this basis, you decide today and in the future which AI systems you use, as well as how and where you intend to use them. This also protects against misuse across the enterprise by ensuring AI is only used in ways that positively influence your business processes.
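As a rough illustration of how such a policy can be made operational, the sketch below checks a requested tool and use case against an internal allowlist. The tool names and use cases here are invented for the example and are not an actual SER policy.

```python
# Hypothetical internal AI policy: which tools are approved, and for
# which use cases. All names are invented for illustration only.
AI_POLICY = {
    "Doxi": {"data extraction", "invoice processing", "document q&a"},
    "TranslateBot": {"internal translation"},
}

def is_permitted(tool: str, use_case: str) -> bool:
    """Check a tool/use-case combination against the internal AI policy."""
    # Anything not explicitly allowed is rejected by default.
    return use_case in AI_POLICY.get(tool, set())

print(is_permitted("Doxi", "invoice processing"))  # True
print(is_permitted("Doxi", "customer profiling"))  # False
```

A deny-by-default check like this makes the policy enforceable in practice: new tools and use cases must be reviewed and added explicitly before anyone can use them.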


Opportunities for AI in day-to-day business using the example of Doxi

The AI Act really shines a light on how low-risk AI systems can boost business. Take tasks that don't need full AI control, but just a bit of help—AI steps in here to take some of the load off your team. Our AI, Doxi, is a perfect example. It chips away at those everyday, repetitive tasks, making your workday a bit easier.

Hey Doxi, in what areas do you support day-to-day business?

  • Data extraction: Doxi analyzes documents, summarizes relevant information in seconds, extracts data and makes it instantly available.
  • Purchase-to-pay processes: Doxi responds immediately to bottlenecks and market changes, and automates entire workflows, such as processing inbound invoices.
  • Document insight: Doxi makes it easier for you to understand document content and presents it clearly, for example, in Q&A format.

Artificial intelligence: New law assesses risks

As we conclude, it’s clear the AI Act is trying to steer us toward a safer and more transparent use of AI across the EU. It’s a big ask for businesses to keep pace with these rules, which scale up with the risk level of the AI involved. This means companies need to be proactive: assessing their tech, steering clear of the high-risk no-go zones, and putting solid policies in place. At the very least, it pushes us to think beyond compliance and adopt a more ethical mindset, using AI in ways that genuinely benefit us all. Regardless, the journey toward responsible AI usage is complex, and while the AI Act isn’t perfect, it lays down a crucial foundation for balancing innovation with safety.

Want to continue the discussion with the experts? Our AI gurus are happy to chat. Get in touch today.

FAQs about the AI Act

What is the objective of the AI Act?
The AI Act from the EU provides a common framework for how businesses use AI systems. Rules defined in the AI Act aim to minimize risks to health, safety, and fundamental human rights.
What approach does the AI Act follow?
The EU’s AI Act takes a risk-based approach. It divides AI systems into the categories of minimal risk, limited risk, high risk, and unacceptable risk. Systems with unacceptable risk are prohibited, while all others are subject to graduated requirements.
When did the AI Act go into effect?
The AI Act entered into force on August 1, 2024. Most of its provisions become mandatory on August 2, 2026, when the main transitional period ends.

You might also be interested in

The latest digitization trends, laws and guidelines, and helpful tips straight to your inbox: Subscribe to our newsletter.
