The European Union AI Act, a comprehensive legal framework for the regulation of artificial intelligence (AI), will reshape the landscape of AI governance not only in Europe but globally. One aspect of the legislation has drawn particular attention: the promotion of voluntary codes of conduct for AI systems, especially those deemed low-risk.
First proposed by the European Commission in 2021, the AI Act classifies AI systems into tiers of risk, with high-risk systems subject to the most stringent obligations. However, the Act also recognizes the importance of guiding the responsible use of AI in all sectors, including those involving low-risk AI applications. To this end, the Act encourages the adoption of voluntary codes of conduct that serve as a framework for the ethical development and use of AI.
Voluntary codes of conduct
Voluntary codes of conduct are not legally binding, but they play a crucial role in promoting ethical AI practices. According to Article 95 of the AI Act, these codes are intended to help companies develop AI systems that are in line with European values of safety, transparency and fairness, even if the systems are not classified as high-risk.
These codes of conduct can cover a wide range of best practices, including avoiding bias, promoting inclusivity in AI development and improving the AI literacy of developers and users. By adopting these codes, companies can demonstrate their commitment to ethical AI and potentially gain a competitive advantage in the marketplace.
In addition, the codes of conduct can serve as a tool for organizations to voluntarily adopt some of the stringent obligations that apply to high-risk AI systems, even if they are not required to do so. This proactive approach can help companies build trust with consumers and stakeholders and ensure that their AI systems are not only legally compliant but also ethical.
Global influence
The EU’s push for voluntary codes of conduct is likely to influence AI governance beyond EU borders. The Act’s focus on ethical AI practices reflects a growing global consensus on the need for responsible AI development. As other countries look to the EU as a pioneer in AI regulation, the adoption of similar codes of conduct could become widespread practice.
The Act also opens the door for industry-specific codes of conduct tailored to the particular challenges and risks of different sectors. For example, the healthcare industry could develop a code focused on avoiding bias in AI-driven diagnoses, while the financial sector could prioritize mitigating systemic risks associated with AI in trading algorithms.
ISO AI certification
In addition to codes of conduct, organizations can seek ISO AI certification to further validate their AI systems' compliance with ethical standards. The recently published ISO 42001 standard sets out a comprehensive framework for AI management systems and complements the AI Act with detailed guidelines for managing AI risks and ensuring system transparency.
ISO certification, like the voluntary codes of conduct, is not mandatory but can serve as a strong signal of a company’s commitment to responsible AI. Through ISO 42001 certification, organizations can enhance their reputation and credibility both in Europe and globally.
A new era
The promotion of voluntary codes of conduct through the AI Act is an important step towards creating a more ethical and trustworthy AI landscape. It is clear that the EU is not only establishing rules for AI regulation, but also promoting a culture of responsibility and ethical business conduct. Companies that embrace these voluntary guidelines will be well positioned to play a leading role in the new era of AI, where trust and ethics are as important as innovation.