
EU AI Act: Mandatory AI literacy from 2 February 2025

Dr. Max Häusler

Starting February 2, 2025, key provisions of the EU AI Act will take effect. Is your business ready? On top of prohibiting certain AI practices classified as posing unacceptable risk, the Act expects companies to ensure that their staff have the knowledge and skills needed to be considered AI literate. In this article, we’ll explore why AI literacy is so important for your organization and show you how to practically meet the requirements of the EU AI Act.

AI regulation: What companies need to know

The European Union’s AI Act has established a globally unique framework to foster trust in artificial intelligence, drive innovation and mitigate risks to society. Many experts see this as a pioneering step that positions the EU as a leader in the global debate on AI regulation. Adopted in June 2024, the EU AI Act introduces rules and guidelines for AI providers and deployers. Unlike EU directives, which must first be transposed into national law, the Act applies directly in all EU member states, and its first provisions take effect on February 2, 2025. You can access the full 144-page document in English here.

Mandatory AI literacy: The legal background

Many business leaders are now asking how to implement the changes coming on February 2, 2025: Is it enough to list AI literacy as a requirement in job postings? Will employees need to attend regular AI training sessions? And what exactly does the EU mean by AI literacy?

To help answer these questions, the EU AI Act defines AI literacy as “skills, knowledge and understanding that allow providers [...], taking into account their respective rights and obligations [...] to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause” (page 49).

This definition makes it clear that AI literacy requires users to have a comprehensive understanding of the tools they’re interacting with. Beyond technical skills, AI literacy demands a solid grasp of the associated risks. Simply put, AI literacy is not just about mastering AI systems but also about responsibly evaluating the risks and potential harm associated with them.

AI literacy: Who needs it?

The scope of people required to possess AI literacy from February 2, 2025 is as broad as the term itself. Article 4 of the AI Act states: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.” (page 51).

In practice, this means that not only those directly operating AI systems in day-to-day tasks must be sufficiently AI literate, but also everyone involved in the operation, development or use of AI systems. For example:

  • Managers responsible for overseeing AI deployment.
  • Developers and IT professionals who design and maintain AI systems.
  • Employees working in sensitive business areas, like automated credit assessments.
  • External service providers developing, implementing or maintaining AI systems for companies.

In short, internal employees, external partners and service providers are affected. The goal? To empower all stakeholders — from technical staff to management — to use AI systems safely and responsibly.

Risks of ignoring AI literacy

Hopping in your car for the weekly grocery run without a driver’s license has clear consequences, but what about companies operating AI systems after February 2, 2025 without ensuring that their staff possesses adequate AI literacy?

Well, it’s not quite as severe. If companies fail to implement measures to ensure AI literacy among their employees and external partners, the EU AI Act doesn’t impose direct fines or penalties. However, decision-makers should not take this lightly, because significant risks remain: in the future, any damage caused by AI systems could be deemed a violation of the employer’s legal responsibilities. Companies should be prepared to face legal accountability if such damage could have been prevented through appropriate training and measures.

Sally Mewies, a Partner in Technology & Digital at Walker Morris, points out that while the EU AI Act doesn't directly impose fines for lack of AI literacy, companies are required to ensure adequate training to mitigate legal risks associated with preventable damages by AI systems — risks that no organization can afford to overlook.

Four steps to achieve AI compliance

While no direct penalties are currently outlined, investing in AI literacy for your employees is a wise move for any company. Instead of viewing mandatory training as yet another compliance box to tick, leaders should see it as an opportunity to enhance their organization’s innovation and competitiveness.

To make it a bit easier, we’ve put together four steps you can take to sustainably establish AI compliance within your organization:

1. Assessing needs: Who requires AI training?

Start by identifying which individuals and departments work directly or indirectly with AI systems and the specific knowledge they need. This will give you a good idea of how to prioritize training needs and avoid unnecessary measures that will make people roll their eyes.
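If it helps to make this concrete, here is a minimal sketch in Python of what such a needs assessment could look like as a simple inventory. All system names, roles and training topics below are illustrative assumptions, not requirements of the EU AI Act:

```python
# A minimal sketch of an AI literacy needs assessment.
# System names, roles and training topics are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    system: str                      # AI system or use case
    roles: list[str]                 # who operates, develops or relies on it
    risk_notes: str                  # context the system is used in
    training_topics: list[str] = field(default_factory=list)

inventory = [
    AISystemEntry(
        system="Automated credit assessment",
        roles=["Credit analysts", "Compliance officers"],
        risk_notes="Decisions affect customers directly",
        training_topics=["Bias and fairness", "Escalation to human review"],
    ),
    AISystemEntry(
        system="Generative AI writing assistant",
        roles=["Marketing", "Customer service"],
        risk_notes="Risk of inaccurate or misleading output",
        training_topics=["Prompting basics", "Fact-checking AI output"],
    ),
]

# Group training needs by role to see who should be trained first.
needs_by_role: dict[str, set[str]] = {}
for entry in inventory:
    for role in entry.roles:
        needs_by_role.setdefault(role, set()).update(entry.training_topics)

for role, topics in needs_by_role.items():
    print(f"{role}: {sorted(topics)}")
```

Even a lightweight inventory like this makes it easier to prioritize training and to skip measures that add no value for a given role.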

2. Selecting the right AI training programs

Once you know who needs training the most, you can start offering training programs that cover technical, legal and ethical aspects of AI use. Whether you rely on external providers or develop in-house programs is up to you. What matters is that your AI training content aligns with the varied roles and responsibilities within your company.

3. Ensuring ongoing AI education

Plan regular refresher courses to keep your employees’ knowledge up to date, just as you might already do for other topics. AI technologies and regulatory requirements will continue to evolve, and with them the demands on AI literacy. A culture of ongoing learning is important for any business that wants to stay competitive, after all.

4. Documenting and archiving evidence

It’s generally good practice to maintain comprehensive records of any training you offer, so this is no different. Maintaining an overview of completed AI training, including covered content and participants, allows you to demonstrate compliance with legal requirements in the event of inspections or liability cases.
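As an illustration only, such a training log can be as simple as a structured record that you export and archive. The field names, participants and file name in this Python sketch are assumptions for the example, not anything prescribed by the Act:

```python
# A minimal sketch of a training evidence log: who completed which AI
# training, covering which topics, and when. All values are illustrative.
import csv
from datetime import date

training_records = [
    {"participant": "A. Example", "role": "Credit analyst",
     "course": "Responsible use of AI in credit decisions",
     "topics": "EU AI Act basics; bias; human oversight",
     "completed_on": date(2025, 1, 20).isoformat()},
    {"participant": "B. Example", "role": "IT / development",
     "course": "Building and maintaining AI systems responsibly",
     "topics": "Risk categories; logging; documentation duties",
     "completed_on": date(2025, 1, 27).isoformat()},
]

# Export the records so they can be archived and produced on request.
with open("ai_training_records.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=training_records[0].keys())
    writer.writeheader()
    writer.writerows(training_records)
```

However you store it, the point is the same: a complete, retrievable record of content and participants is what lets you demonstrate compliance later.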

These four steps should serve as a solid starting point. With a bit of tinkering to make them relevant to your organization, you should feel more prepared to meet the new requirements of the AI Act. Better yet, you’re just as likely to see increased productivity from your AI-trained staff, who can accomplish more in less time.

Best practice: Establish clear responsibilities

On top of offering training courses, it’s good practice to appoint a central role for AI compliance, similar to the data protection officer under the EU GDPR. This individual or team not only establishes external credibility but also ensures clear internal structures and efficient coordination of all AI-related activities.

Having these resources sends a strong signal to employees, customers and regulatory bodies that your enterprise recognizes the importance of AI literacy and is proactively addressing it. This helps to mitigate risks while demonstrating active responsibility for a technology that increasingly shapes our daily lives.

EU AI Act: Banned AI practices starting February 2025

AI systems open up incredible opportunities for businesses to boost productivity to unprecedented levels. However, like any powerful technology, AI can also be misused for harmful purposes. The rise of AI-generated fake news on social media is just the tip of the iceberg.

To prevent or at least curb misuse, the EU AI Act is introducing the prohibition of dangerous AI practices, set to take effect on February 2, 2025. Examples of banned practices include:

  • Manipulative technologies: AI systems designed to influence people’s behavior without their awareness, like an AI subtly influencing shopping habits to induce unplanned purchases.
  • Social scoring: Systems that evaluate and classify individuals based on their behavior or personal attributes, like scenarios rewarding or penalizing citizens for their social media posts or purchasing decisions.
  • Biometric identification in public spaces: Particularly real-time surveillance technologies. This could include scenarios like scanning crowds at public events to find people with outstanding warrants or monitoring busy city streets to detect known criminals, raising significant privacy and ethical concerns.

These new regulations primarily aim to safeguard fundamental rights. Companies using AI will need to examine whether their applications might fall under these prohibitions.

Want to learn more about the different risk categories of AI systems and their legal classification? Read our comprehensive blog post on the EU AI Act.

AI That You Can Trust

With the AI-powered Doxis Intelligent Content Automation platform, SER is redefining next-generation Enterprise Content Management.


Dr. Max Häusler

The PhD Germanist loves intelligent automation solutions – especially when they make everyday work easier. But the technology behind them is often difficult to grasp. Max wants to change that: As a Content Writer at SER Group, he makes the functionality of advanced AI systems easy to understand, even for non-technical audiences. For that, his many years of experience in the creative industry and at a tech startup are helpful – but even more so, his two children, who often ask the trickiest questions.

