Carme Artigas, who has a three-decade career in the technology sector, was chosen to lead Spain's efforts in the field of artificial intelligence at the beginning of 2020 and was Secretary of State for Digitalization and Artificial Intelligence for almost four years. During this time, she headed both the National Institute for Cybersecurity and the Spanish Agency for AI Supervision.
Towards the end of this period, when Spain held the Presidency of the Council of the European Union, she was the driving force behind the AI Act, the regulation governing this technology at European level. Since October last year, she has also co-chaired the United Nations Advisory Council on Artificial Intelligence, a role she continues to perform even after leaving the government.
We had the opportunity to speak with Artigas about regulation, innovation and how Europe is trying to reconcile the two in light of developments in GenAI.
- In all your years working in the ICT industry, would you have thought that we would be at this point of development in 2024?
- Precisely because I have witnessed several waves of innovation. I experienced the beginning of the Internet in 1995-96. I experienced the beginning of big data in 2006; in fact, I founded a pioneering company in that field. But what we are experiencing now is not just a high adoption rate, but an accelerated hyper-adoption. Artificial intelligence has been in development for many years, and deep learning, around 2014, was another important milestone. What happened last year with generative AI was a genuine quantum leap, because of the speed of adoption and the transformative impact it will have on every industry and on society as a whole.
- Given the progress of generative AI, do you think the six-month pause recommended by some experts should have been taken?
- No, not at all. First of all, it's not possible, because you cannot stop innovation and development. Quite the opposite: development should not be stopped but accelerated, so that the industry can find solutions to the problems it creates.
Besides, when you talk about stopping... who stops? The good actors, the bad ones, everyone in between? Research should not be stopped. What should be stopped is the launch of commercial products that are not yet ready for commercialization. You cannot stop research and development. But you also cannot launch a beta product and claim it's a finished product.
What we are doing with this European regulation is not stopping innovation or research. But before a product comes onto the market, it has to pass quality tests. It would be like introducing a drug without clinical trials, or a car without airbags and brakes.
When ChatGPT came onto the market, it was basically a beta product that was not yet ready, but it was launched without any quality control. That's how the software industry has operated for the last 40 years, and I think that needs to change. To put a commercial product on the market, it has to go through a series of controls, because the impact is a question of risk and safety, but also of human and fundamental rights.
- As far as the AI Act is concerned, several months have already passed since it was adopted. From your current perspective and considering the progress... do you think anything has been left out? Is there anything that should be included?
- No, because I think it's a law that is perfectly balanced in terms of benefits and regulation. It does not over-regulate, and it does not regulate the technology itself; it only regulates the high-risk applications. But the most important thing is that this law is designed in a completely different way: the law itself contains mechanisms for updating it. This means that if something new comes onto the market tomorrow, for example agents, a set of mechanisms, indicators and control standards can be updated very dynamically. The law was made in the knowledge that technology changes every six months.
We are no longer in the definition phase of the law, but in the adoption phase. The timetables are being adhered to. At European level, the AI Office has been created; in Spain, the Spanish Agency for AI Supervision has been set up with a Director General; and recommendations and guidelines are being drawn up for the implementation of the law. I believe we are well on schedule. Once the law is published in the Official Journal of the European Union, there will be six months to remove products in the prohibited-uses category from the market, and 24 months for high-risk uses.
The key now is not in the design of the law, but in its application. We need to learn from past mistakes with the GDPR and focus the efforts of governments and the European Commission on helping SMEs to adopt the law in a simple and cost-effective way.
- In Europe, we have taken the lead in AI regulation, but doesn't that also hinder innovation in the region a little?
- Not at all, because innovation is not affected. R&D is exempt. Open source software is exempt. It's just that before I go to market, I have to ensure quality, which allows me to build trust. And when you build trust, you create a market. The big problem with AI is that nobody trusts it.
In the US, you can find lawsuits such as The New York Times' suit against OpenAI. They do not exist here because the AI Act requires copyright compliance. In Europe, PRISA, Le Monde and Axel Springer have licensed their content to OpenAI to comply with the law. So, in essence, I am creating a market for these content creators to monetize their data.
We are trying to create security for consumers and citizens. Citizens can rest assured that AI in Europe will not be used by governments to control citizens, because that is prohibited. Nor will AI be used to fight crime as in Minority Report, as this is prohibited. We are ensuring the impact on fundamental rights. I already have guarantees that when AI comes onto the market, it will not disadvantage me, discriminate against me or manipulate my thoughts. These are guarantees that we are already establishing in Europe. And if this is the case, we offer a much more trustworthy market. And if you have confidence, you will buy it.
However, the big problem in Europe was never the legislation, but the fragmentation of the legislation. We did not have one single law in Europe; we had 27, one for each country. What we have done in the last five years with the GDPR, the AI Act and the DMA is to ensure that there is a single regulation for the whole of Europe, unchanged down to the last comma, that comes into force everywhere on the same day. So we are already creating a 'European digital single market', which is the key to competitiveness.
What I see in the US is exactly the opposite. There is Biden's Executive Order, but it cannot be enforced at a general level, and California's AI law is different from those of Texas, Florida and New York. That could lead to legislative fragmentation. I think Europe has learned its lesson and is on the right track.
- How do you see the acceptance of GenAI by companies at a local level, in Spain?
- As always, the big companies in our country have been quick to adopt the technology, but we also need to reach SMEs. That is the key. It's not about building a European tool to compete with ChatGPT, because we will have one too. There is the French one, and we could have a Spanish one...
But that will not make the difference. What will make the difference is adoption: 51% of American companies are already using generative AI, 70% of Chinese companies, and only 15% of European companies. That's where we need to focus our efforts, on adoption, on using these language models for specific cases, in vertical industries, to find solutions to real problems.
Business will come from the innovation derived from these models in our existing and traditional industries, which will undoubtedly be impacted by this generative AI that will enable unprecedented hyper-automation and significant disruption. The key is in the vertical applications of AI, not so much in generic models.
- And what about public-private collaboration in Spain, Carme?
- I believe that we have pioneered all these initiatives at a European level, with many plans and programs, including some that subsidize SMEs in adopting AI, but a lot of knowledge is still lacking.
Everyone is starting at the same time. We need to train people in the new skills that these new opportunities will require. AI will not destroy people's jobs overnight, but people who know how to use AI can be ahead of those who do not. Therefore, it is a big challenge to teach these skills and a big challenge for the big Spanish companies to lead the adoption of this generative AI and be globally competitive so that SMEs do not fall behind.
- It is clear that generative AI is starting a revolution, but don't you also think we are in a bubble, where everything is being 'dressed up' with AI and sold as AI, and some startups are putting on that label a bit freely?
- Yes, of course. If you want investment, you need to sell yourself as well as you can, but then it will be the discernment of the investment funds that separates the good from the bad. What matters is that there is a huge opportunity and a wide range of fields to explore, with many chances to apply this to specific sectors.
I do not think it's a bubble in the sense of one where you do not know where the value lies. I think there is a great opportunity for disruption, many use cases where a lot can be contributed, in areas such as health, accessibility and education, in every field of activity. But this must also be accompanied by ethical boundaries. There are also great opportunities in the area of security.
We have a duty to ensure that no idea from a Spanish or European entrepreneur falls by the wayside because it cannot find funding. I think this is where we need to put mechanisms in place, such as the funds the European Union has already allocated, to lead this revolution. There is nothing to stop us from doing this.
- How does Carme Artigas envision our future thanks to AI in 5 years? What excites or worries you most about this technology?
- The future is not written. We decide it every day with our decisions and actions. Therefore, the idea that we have no choice, that it is an inescapable truth, that a dystopian scenario is upon us, is absolutely wrong. We should not believe in this inevitable scenario.
AI has great benefits, and the key is to know how to distribute the benefits and costs fairly in society so that not only some reap the benefits while others bear the costs. We must strive to ensure that this unprecedented technological development is ethical and does not override fundamental rights and guarantees.
Europe has a very important role to play in ensuring all this. We have principles and values that we project to the rest of the world. I always say that European AI is not a technological standard, not a legal standard, but a moral standard. We tell the world what we expect from AI and what we do not.
In my work at the United Nations, we take a global perspective and see the need to regulate this not just at a national level but through global commitments, so that human rights and international law are not circumvented. I also believe there is a great opportunity for capacity building in the global South.
The advantage we have here is that the traditional digital divide, the question of how to give people without means, without education, or older people access to this world, has already been overcome. And that is because there is no barrier to entry: everyone knows how to speak, so everyone can access these benefits. What we need to ensure is that this is not controlled by a few, that there are no black boxes, and that there are mechanisms for transparency and control.