On August 1, 2024, the European Union's first continent-wide regulation on artificial intelligence, known as the "AI Act," entered into force. This landmark legislation, published in the Official Journal of the European Union in July 2024, is the result of years of negotiations among the 27 Member States. It establishes a pioneering regulatory framework, phased in progressively, that governs the use and development of AI systems with the aim of safeguarding the safety and fundamental rights of citizens.
The implementation of the AI Act will be gradual. Although the law is already in force, full compliance is not expected until August 2026. In the first phase, starting February 2, 2025, the general provisions and the prohibitions on systems posing unacceptable risk must be met. Three months later, on May 2, 2025, codes of good practice are due. By August 2, 2025, the rules for general-purpose AI models take effect, and each country must have updated its laws to impose sanctions on non-compliant companies.
For "high-risk" systems, such as the most advanced AI models, a 36-month grace period is provided to comply with all obligations. This means that these systems will not be fully regulated until August 2027, when they will need to document their operations and be transparent about their processes.
Challenges in Defining "High Risk"
One of the most complex aspects of the AI Act is defining and classifying AI systems by their level of risk. The law states that "high-risk" systems are those that can significantly impact the health, safety, and fundamental rights of individuals. This group includes systems that affect critical sectors like education, democratic processes, and security.
However, the omission of mass surveillance from the "high-risk" category has drawn controversy. While live facial recognition is prohibited, the regulation still permits biometric identification systems, such as those used for border control. Privacy advocacy groups such as EDRi have expressed concern over these gaps in the legislation.
Focus on Developers and Sanctions
The AI Act primarily targets developers of "high-risk" AI models, largely leaving out "minimal risk" systems such as spam filters and chatbots. This approach aims to hold those who create the models accountable rather than those who merely implement them. An example is the relationship between Google and phone manufacturers such as Samsung or OPPO, which adapt models like Gemini but do not develop them from scratch. A further criticism of the law is that "open source" models, including Meta's LLaMa, Stable Diffusion, and Mistral, are not exempt from compliance.
Fines for non-compliance with the AI Act can reach up to 7% of the infringing company's annual global revenue. These fines are higher than those established by the General Data Protection Regulation, which caps fines at 4%, and the Digital Services Act, which reaches 6%. The European Artificial Intelligence Office will be responsible for overseeing compliance, preparing reports, and updating requirements.
The entry into force of the AI Act marks a milestone in regulating artificial intelligence, with Europe leading the way toward a safer and more responsible use of these emerging technologies.