It is often said that regulation lags behind technology. There are plenty of examples that bear this out, and the order rarely reverses. The same has happened with artificial intelligence, although in this case legislators have tried to act quickly. The wave of change that AI promises has put regulators on notice. The European Union has made the biggest effort, but many other countries have also been working in this direction.

China has adopted preliminary measures to govern AI, the United States has yet to finalize a framework after Biden's executive order, and several Latin American countries have proposed legislation. Even in the EU, the AI Act has not yet fully entered into force: it will apply gradually over the next two years, with some provisions taking up to three years to take effect.

It would be an exaggeration to say that there is a legislative vacuum. As a rule, legal frameworks already cover many of the areas affected by AI, such as the protection of personal data, intellectual property and basic human rights. However, until specific regulations take effect, some aspects of artificial intelligence will remain subject to self-regulation.

This means that the responsibility for setting boundaries falls to companies and users. It applies not only to system developers such as OpenAI, Google and many others, ranging from large technology firms to individual developers. Self-regulation is also an obligation for the companies and individuals who deploy AI, that is, those who actually use it.


The role of developers and users

How that responsibility is shared is open to debate. Even with comprehensive regulation in place, the role of the companies that develop AI will always come under scrutiny. They create the algorithms, train them and direct them to specific tasks. By the time the product reaches users, it is already "finished".

Developers often face a conflict of interest. Companies are driven by the logic of getting to market quickly, increasing sales and maximizing return on investment. These goals can end up taking precedence over the strict ethical oversight needed to catch potential biases, inaccuracies or hallucinations in the algorithm.

In recent years, cases of algorithms with racist and sexist biases have come to light, as well as tools whose use can harm society. An example of the latter is deepfake photos, videos and voices, which have become an effective vehicle for misinformation. Here, too, self-regulation by the users who abuse these tools comes into play. But nothing can prevent this type of software from falling into the wrong hands, so ethical design from the outset would limit the damage.

For self-regulation to work, at least until legislation arrives, companies need concrete methods to limit the damage. Today it is unthinkable to bring an algorithm to market without a prior evaluation process; OpenAI and every other company in the industry does this. However, that evaluation can focus purely on the model's performance, or it can go a step further and include an ethical perspective.

Users, which may themselves be other companies, also play an important role as deployers of AI applications. First of all, they are responsible for assessing the use cases in which the technology can and cannot be applied. They also decide how the models are used, a factor that decisively shapes the results.


The limits of self-regulation

Self-regulation can be expected to work to a certain extent. The profit motive of companies can coexist with ethical aspirations and safety guarantees. However, it is also true that this approach has already led to significant problems related to privacy, misinformation and even user manipulation.

Self-regulation can serve as a first line of defense and mitigate certain risks, but it also has its limits. There is, of course, no binding mechanism that enforces an ethical approach. And without a commitment to transparency, its overall effectiveness depends on each company's discretion and on the information it chooses to share.

A notable example of organized self-regulation is the Frontier Model Forum. In mid-2023, four leading companies in generative artificial intelligence (Microsoft, Google, OpenAI and Anthropic) announced the platform, which aims to ensure that companies operating in the sector adopt responsible and safe practices. Amazon and Meta, two other major tech companies at the forefront of the AI market, later joined the initiative.