Marisa Cruzado has been working in organizational gender equality policy for more than two decades. Since last year, she has been leading IA+IGUAL, a project that examines the ethics of algorithms used in the labor market.
The initiative is based on UNESCO's recommendations on the ethics of artificial intelligence and aims to achieve the Sustainable Development Goals (SDGs) in order to create a more equal framework in employment policy.
Cruzado talks to Neosmart about bias and how it is more entrenched in the world of HR than many professionals in the field realize.
- How did IA+IGUAL come to be?
- The origins date back to 2019, when neither artificial intelligence nor bias was discussed in the way they are today. There was a strong initiative by a group of companies to conduct a study on how the lack of work-life balance affects the economy and gender inequality. I attended a mobility event and met the Director General for Equality of the Community of Madrid. The Directorate General for Equality, with whom I was working, asked me to design a project on technology and gender equality. That's where the idea came from.
I was looking at emerging trends and came across an article by a British journalist who had written a book about AI bias and how AI perpetuates biased models because no one was focusing on that concept or on how it affected the development of the tools. I thought about how we would oversee the way bias gets built into AI and realized that a review would be necessary.
I proposed to the Madrid community to develop a model to review algorithms to detect bias in a specific niche. We wanted to focus on a critical area such as human resources, where access to the labor market and gender equality issues are priorities for administrations to reduce gender hiring gaps, promote women's participation in STEM professions, etc. We wanted to find out how AI tools used in HR take bias into account. That was the birth of IA+IGUAL.
I developed a project and looked for two partners: a technology partner, IN2, a Barcelona-based technology developer, and ORH, an HR content platform. My company, CVA, takes care of the awareness and communication aspect. The three companies formed a consortium and submitted the project to a call for proposals from the Community of Madrid's Social Innovation Area, which was financed with Next Generation funds. From there, we received the budget for the implementation.
The tender includes a pilot project to analyze 10 AI algorithms used in human resources to determine how bias arises during the training process and how this gap can be closed.
We then plan to translate all of this into a white paper with recommendations to later develop a UNE standard, an ISO standard, or whatever is appropriate. For this part, we have involved the University of Navarra in the project, specifically DATAI, its AI research center. They will translate everything we do in the audit into the white paper from an academic and scientific perspective.
- How much of IA+IGUAL's funding is public and private?
- It's a grant, so it's virtually 100% subsidized. 90% is funded by Next Generation funds. However, there are many additional things we need to do that were not included in the RFP, such as studies, events, webinars, etc.
- You mentioned 10 algorithms. Are these for different functions like selection process, performance, training, dismissals...? Does each of them have a specific function?
- In the world of AI applied to HR, what we call generative AI is relatively new and has been introduced mainly in the field of training. Training platforms that provide services to HR departments are using this technology to innovate processes, make online training less boring, and to collect and analyze data to understand how much time a person devotes to a course, what they learn, etc.
Beyond training, AI has entered general business processes rather than originating in HR itself. HR often outsources these solutions in order to be efficient across business processes, especially the critical ones.
Selection, a critical process, is changing in terms of talent acquisition. The focus has shifted from resumes to technical skills, requiring tools to find profiles in niches they haven't traditionally explored. AI is of great use here as it can analyze much more information faster and provide more accurate solutions.
AI can also be useful in processes such as talent retention, finding out why key profiles might leave a company and putting measures in place to retain them. For example, if someone lives far from the office, offering more remote working could help to retain them. Talent retention is the first algorithm we have developed.
We have also looked at diversity management as part of the corporate culture. AI tools are used to define career paths within companies and determine who should move up in management or take an alternative path in production.
Many solutions on the market offer these tools for greater efficiency. However, these are often developed by North American companies. Therefore, it is important to understand where they source their data from, as the American market differs from the European market in terms of regulations and social diversity. This is part of the auditing process.
- Have you already developed the first algorithm or are you working on several?
- We have not developed them. We work with companies that commercialize these algorithms and offer to audit their models based on their use cases.
We review how these models were created, where the data came from, how they were trained, and identify any biases. Once we have identified them, we point them out so that they can be corrected and the correct functioning of the tools is ensured.
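One step in the kind of data review described here can be sketched as a simple representation check on historical training data. This is a hypothetical illustration, not the project's actual tooling; the field names and figures are invented:

```python
from collections import Counter

# Hypothetical audit step: measure the historical hire rate per group
# in the training data. A large gap here is a pattern the model will
# learn and reproduce. All data and field names are invented.
training_records = [
    {"gender": "F", "hired": True},
    {"gender": "M", "hired": True},
    {"gender": "M", "hired": True},
    {"gender": "M", "hired": False},
    {"gender": "F", "hired": False},
    {"gender": "M", "hired": True},
]

def group_hire_rates(records, field="gender"):
    """Return the historical hire rate for each group in the data."""
    totals, hires = Counter(), Counter()
    for r in records:
        totals[r[field]] += 1
        hires[r[field]] += r["hired"]
    return {g: hires[g] / totals[g] for g in totals}

print(group_hire_rates(training_records))  # → {'F': 0.5, 'M': 0.75}
```

A real audit looks at many more dimensions (data provenance, labels, proxies for protected attributes), but a per-group rate comparison like this is often the first signal that a model was trained on skewed history.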
We offer companies the opportunity to benefit from Next Generation funds, so the testing process is free of charge. When the project ends in summer 2025, we will have to charge for further audits.
Our project is empirical, not theoretical. We show use cases and results from practice.
- What challenges have you encountered along the way?
- The biggest challenge is dealing with diversity within the project. There are many obstacles due to the bias of the people involved.
We have data analysts (with technical knowledge but without a holistic view), a multidisciplinary advisory board with a historian, a philosopher, a humanities scholar at CSIC, an astrophysicist, a mathematician, an AI bias consultant, a natural language processing algorithm developer, an expert in new training models, a privacy advocate, and an expert in international AI regulations.
In this advisory board, each member is an expert in their field, but except for the developer, they know nothing about algorithms. The challenge is to bring together technical experts, humanities scholars, HR consultants, the University of Navarra with its scientific background, and companies. It's complex, but also fascinating, as we bring together a broad spectrum of knowledge.
- Is it difficult to implement a gender equality policy in certain sectors with historical data?
- We have focused on the concept of equity rather than equality. The problem starts with the skewed data and reality.
If you are not aware of it, you could either ignore it and perpetuate the bias or try to intuitively equalize the percentages. Neither approach solves the problem of bias. Therefore, the EU recommends multidisciplinary teams and human decision making. Regular checks ensure that the system is working correctly, and this requires human oversight.
If a selection tool is well managed and free from human bias towards certain groups, it can truly assess candidates based on their skills and talents, without taking into account disabilities, criminal records, etc. It is important to feed the model correctly to avoid unintentionally disadvantaging one group.
For example, if you require applicants to have made social security contributions in the last ten years, women who have taken maternity leave could be excluded. This unintended exclusion not only has reputational and legal implications, but also means that valuable talent is not considered. Technical experts often lack the sensitivity to understand these HR issues.
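The exclusion effect of a rule like the one above can be made concrete with a standard adverse-impact heuristic such as the four-fifths rule. This is a hypothetical sketch: the candidate pools, thresholds, and group labels are invented for illustration:

```python
# Hypothetical sketch: adverse impact of a screening rule requiring
# 10+ years of social-security contributions. Career breaks (e.g.
# maternity leave) lower contribution years in the invented data below.

def selection_rate(candidates, rule):
    """Fraction of candidates that pass the screening rule."""
    return sum(1 for c in candidates if rule(c)) / len(candidates)

def passes_four_fifths(rate_group, rate_reference):
    """Four-fifths heuristic: a selection rate below 80% of the
    reference group's rate signals possible adverse impact."""
    return rate_group / rate_reference >= 0.8

men = [{"contrib_years": y} for y in (12, 11, 15, 10, 9, 14)]
women = [{"contrib_years": y} for y in (12, 7, 8, 10, 6, 13)]

rule = lambda c: c["contrib_years"] >= 10

rate_men = selection_rate(men, rule)      # 5/6
rate_women = selection_rate(women, rule)  # 3/6

print(passes_four_fifths(rate_women, rate_men))  # → False: possible adverse impact
```

The rule never mentions gender, yet the check flags it, which is exactly the kind of unintentional disadvantage the audit is meant to surface before a tool goes into production.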
- How does your initiative fit with EU law on artificial intelligence?
- The law sets important certification standards. The Directorate General for Digital Transformation of the Spanish government is promoting a self-certification initiative. Our thesis is that self-certification is suitable for low-risk applications, but not for HR applications.
HR is considered high risk, with limitations in biometric and emotional recognition. The certification requirement ensures that AI tools used by companies on a daily basis comply with regulations and provide transparency and reliability.
If a company self-certifies and an employee has a problem with how an algorithm has determined their career path, the company faces legal consequences. An independent certifier provides legal certainty.
Our project fits into this framework by developing mechanisms to ensure compliance and certification.
- Can you name sectors or industries that are working with you?
- In the white paper, we address general findings and procedures without singling out a specific company.
All sectors are involved, including consumer goods, wineries, construction, large retailers, banks and insurance companies. We have to limit our capacity to ten simultaneous audits.
We have seen a demand for HR audits in various industries, which highlights the lack of AI skills among HR professionals.
As HR departments integrate data analytics, there is a gap between data experts and HR experts. For example, HR professionals often don't realize that when they upload resumes to ChatGPT, they are sharing confidential information with a public tool. This new landscape requires everyone to learn as they go.
The use of AI in certain areas may not make sense if the existing processes are already effective. HR managers go through this learning process, which is exciting and instructive. We develop webinars and training platforms to support this.
- You mention a European focus. Are you working with companies outside Spain, or is this a future goal?
- Currently we are working with companies that have a tax presence in Madrid, as required by the tender. However, we are working with BIAS on the development of theoretical algorithms.
Originally we wanted to focus on Europe, but we have found that Latin America is an interesting field to experiment in.
We are in talks with the Organization of Ibero-American States and are working with governments and universities to develop training projects for Latin America, particularly in the field of human resources. These organizations need to adopt AI, even though there are no regulations there.
- You recently presented a study on the use of AI in human resources, which shows that two thirds of professionals have little knowledge in this area, but half see it as an opportunity. What challenges does generative AI pose for these departments and what surprising survey results did you find?
- Generative AI should be used in HR as a co-pilot, taking over tedious tasks such as information gathering and data analysis so that HR professionals can focus on decision making and direct interaction with people.
Asking the right questions is crucial for AI to deliver nuanced answers rather than generalities.
It was surprising that despite their backgrounds in psychology and the humanities, many recruiters had not thought about how biases affect attitudes or how they influence decisions.