In 2009, Facebook was a young social network, just five years old. Nokia cell phones were plentiful, and the few smartphones on the streets were primitive. Neither Instagram nor TikTok existed. That year, when privacy was still a niche concern, the Spanish Professional Association for Privacy (APEP) was founded. Its 53 founding members were legal professionals with distinctive profiles and some connection to the technology sector.
Among them was Marcos Judel, partner at the law firm Audens and president of APEP since 2019. Data protection is closely tied to the field of technology law, and specialized data protection profiles have emerged over the years. These include not only lawyers but also in-house specialists who handle information management at companies.
For years, the association has been addressing artificial intelligence systems, such as chatbots and predictive models that collect and use personal data. As part of the AI, Law, and Business congress organized by Lefebvre, a discussion was held with the president of APEP, which now has more than 1,300 members, on the protection of personal data in the midst of the generative AI boom.
How can privacy be guaranteed in this omnipresence of artificial intelligence?
"We have a big advantage: the General Data Protection Regulation (GDPR), which creates a framework based on the identification of risks to the rights and freedoms of individuals. The AI regulation is more product-oriented, similar to CE marking, and is aimed at developers, marketers, and users. They have to carry out a series of risk analyses and take a series of transparency measures. It's like buying a toy or an electronic device that has a CE mark indicating that it complies with European regulations. It's the same with artificial intelligence."
How can these two regulations be combined?
"When AI systems interact directly with personal data, the General Data Protection Regulation comes into play. This is where data protection experts and data protection officers need to be involved. It will be necessary to build an entire governance system around AI, known as AI governance.
"This is crucial as it provides the guidelines and keys as to who needs to be involved in each process, system, and model. There will be moments when AI professionals need to be present and other moments when cybersecurity experts are more important."
And for data protection specialists, too?
"There will also be key moments for data protection officers or data protection experts. However, these experts will not always need to be present. For example, if the AI only controls the hydraulic systems of an airplane's landing gear, no personal data will be involved. However, if the AI analyzes a person's characteristics or CV for a selection process and thus decides whether or not to hire them, data protection experts must be involved from the outset."
At what point in the development of an AI system should the data protection officer be involved?
"One of the most important features of the GDPR in conjunction with the AI Regulation is 'privacy by design.' If a system or AI model involves personal data, the data protection officer must be informed from the outset in order to create the necessary channels to ensure that the system complies with data protection and protects the rights and freedoms of individuals."
Will every company need its own governance system?
"That's a question they need to address. The first thing a company needs to do when looking at these governance systems is to understand what they have and analyze what types of artificial intelligence they are using. Often companies use AI systems that are linked to personal data without realizing it."
Can you give an example?
"A simple chatbot on a website is an AI that could learn from what a person says to it. Imagine a city council and a user asking which bus they should take to hospital X. The chatbot might answer with one option but suggest another for the emergency room. It collects this data along with the user's IP address. The system learns, which could have a significant impact. So, you need to see how you organize the data, understand what you have, and determine if the system is necessary for the specific purpose. This is an important aspect of the Data Protection Regulation. The processing must be legitimate and proportionate to a genuine need. Otherwise, things can get out of hand."
Must this legitimization also apply to the collection of personal data?
"It is becoming easier and easier to collect personal data. However, it depends on how you want to do this, whether you have a legitimate basis, and whether you have informed people that their data will be used for a specific activity. As part of governance, it is important that you know what you have and why. It is also important to consider the lifecycle of this data. These aspects need to be analyzed from the beginning. If an AI system is developed without considering data protection, it could fail."
Could this mean that the AI system cannot be used?
"Yes, it could be banned. The data protection authority could say that data processing cannot be carried out with this system. Other relevant authorities, such as the Medicines Agency (AEMPS), if the AI concerns medical matters, could also intervene."
Does the GDPR embody the data protection aspirations of informed users and associations like yours?
"The GDPR has become a global benchmark. It is not a prescriptive law that mandates certain actions but rather prescribes the protection of individual rights and freedoms. Data protection and privacy are fundamental rights. Therefore, when processing data, the risks to the individual must be assessed. These risks vary greatly depending on how the task in question is approached."
What does this mean for AI systems?
"One company might have a low-risk personnel selection process, while another handles this process in a much more uncontrolled way and includes high-risk factors, such as asking about applicants' union affiliation or sexual orientation. These effects need to be analyzed, and appropriate measures taken to mitigate these risks. Otherwise, penalties could be imposed. While the fines under the Data Protection Regulation seemed high, the penalties under the AI Regulation are even higher."
Can the two types of penalties be combined?
"They can complement each other. You could be penalized for breaches of both data protection rules and the AI Regulation. However, this will be more complex, as the fines under the AI Regulation are usually aimed at the producer (the developer and marketer), while the data processing fines are aimed at the user of the AI system. If an activity is prohibited under the AI Regulation, such as subliminally influencing people or certain categorizations, that will be another issue."