Navigating the Initial Compliance Requirements of the EU AI Act – 5 February 2025

The European Union’s Artificial Intelligence Act (AI Act), which came into force on 1 August 2024, represents the world’s first comprehensive legal framework governing artificial intelligence. While the majority of its provisions will become applicable from 2 August 2026, certain critical obligations took effect earlier, on 2 February 2025. These early requirements prohibit specific AI practices deemed to pose unacceptable risks and require organizations to enhance AI literacy among their personnel.

Prohibited AI Practices:

As of 2 February 2025, the AI Act strictly prohibits the development, deployment, or use of AI systems that are considered to carry unacceptable risks to individuals’ fundamental rights and safety. These banned practices include (i) subliminal manipulation (AI systems designed to manipulate human behavior in ways that individuals cannot consciously detect, potentially leading to harm), (ii) exploitation of vulnerabilities (AI applications that exploit vulnerabilities of specific groups due to age, disability, or other factors, resulting in harm), (iii) social scoring (AI systems that evaluate or classify individuals or groups based on social behavior or personal characteristics, leading to detrimental or unjustified treatment), and (iv) real-time remote biometric identification (the use of AI for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions).

Organizations operating within the EU must ensure that none of their AI systems fall within these prohibited categories. Non-compliance can result in significant penalties, including fines of up to 7% of the company’s total worldwide annual turnover.

AI Literacy Obligations:

In addition to banning certain AI practices, the AI Act imposes obligations on organizations to promote AI literacy among their staff and stakeholders involved in the operation and use of AI systems. This entails (i) training programs (implementing comprehensive training initiatives to ensure that employees understand the capabilities, limitations, and ethical considerations of the AI systems they interact with) and (ii) awareness campaigns (raising awareness about the potential risks and benefits associated with AI applications, fostering a culture of responsible AI usage).

By enhancing AI literacy, organizations can better manage AI-related risks and ensure compliance with the evolving regulatory landscape.

Allegiance Law’s Perspective

The AI Act’s initial requirements set a clear tone: organizations must prioritize ethical AI use and proactive compliance to avoid severe penalties. For companies developing or deploying AI systems, the prohibition on unacceptable-risk practices and the push for AI literacy demand immediate action.

At Allegiance Law, we recommend the following urgent steps: (i) audit AI systems by mapping and evaluating all AI applications to ensure none violate prohibited practices, with a focus on high-risk areas like biometric identification or behavioral manipulation, (ii) build AI literacy frameworks by developing role-specific training programs to equip staff with the knowledge to navigate AI responsibly, emphasizing ethical and legal implications, (iii) strengthen governance by establishing internal policies and oversight mechanisms to monitor AI compliance and prepare for future AI Act obligations, and (iv) engage stakeholders by collaborating with vendors and partners to align on AI literacy and compliance standards, ensuring a cohesive approach across supply chains.

We are actively assisting clients in (i) conducting AI system audits to identify and mitigate risks, (ii) designing tailored AI literacy programs for diverse organizational roles, (iii) drafting compliance roadmaps for the AI Act’s phased implementation, and (iv) providing consultation feedback to refine the AI Act’s practical application.

The early obligations of the AI Act are a wake-up call for organizations to integrate compliance into their AI strategies now. By acting decisively, businesses can mitigate risks and build trust in their AI practices.