Artificial intelligence (AI) has become a global force affecting almost every industry. In the face of rapid growth, governments and blocs such as the European Union (EU), the US, the UK, Brazil and China are working on policies that balance innovation and oversight. The EU's AI Act, US executive measures and China's targeted rules illustrate the diversity of these efforts.

The global AI market is expected to reach 500 billion dollars by 2024, growth that has heightened the need for comprehensive legal oversight. The EU's AI Act was proposed in 2021, followed by the "AI Bill of Rights" in the US, while China has introduced its "Deep Synthesis Provisions" and Brazil is advancing Draft Law No. 2.338/2023 on AI, which has not yet been given an official name.


A look at the legislation

The EU AI Act classifies AI systems into four risk categories: minimal, limited, high and unacceptable. High-risk systems, such as those used in human resources, education and banking, are subject to strict transparency, safety and ethics requirements. The Act, which will apply in full from 2026, bans systems that manipulate behavior or enable social scoring, and its reach is expected to shape legal frameworks well beyond Europe.

The UK takes an innovation-friendly approach and relies on existing regulators to manage AI risks in their respective domains. Cross-sector principles for transparency, ethics and accountability have been established, while the government promotes sector-specific oversight that lets regulators develop flexible yet consistent guidance. The Centre for Data Ethics and Innovation is also working on algorithmic transparency standards to strengthen AI governance.

Brazil has positioned itself as a pioneer in Latin America with the national AI strategy it launched in 2021. Its pending bill, Draft Law No. 2.338/2023, establishes rights to fairness and transparency in AI-driven decisions and categorizes systems as high-risk, non-high-risk or prohibited. The bill, which is due to be voted on soon, could serve as a benchmark for the region.

China has taken a targeted approach, regulating specific practices such as algorithmic price manipulation and the generation of false content. Its recommendation-algorithm rules prohibit discriminatory treatment in algorithmic work management, while its content rules require clear labeling to distinguish authentic from synthetic material. The "Deep Synthesis Provisions" govern both the creation and distribution of AI-generated content, setting a precedent for how these challenges can be addressed globally.

In the US, the Executive Order on AI and the "AI Bill of Rights" aim to protect consumers at the federal level, while state laws add specific measures, as can be seen in California.

The rules thus vary by region: the EU aims for a comprehensive, risk-based framework, the US combines federal guidance with state-level measures, the UK applies flexible sectoral oversight, Brazil pairs a national strategy with broad draft legislation, and China regulates specific issues relating to content and manipulation.