Alejandro Sánchez del Campo is an innovation and legal tech consultant, as well as a professor in these fields and the director of the Master’s in Digital Law, Innovation, and Emerging Technologies at the Madrid Bar Association. His multifaceted expertise at the intersection of law and new technologies is increasingly relevant today.
The anticipated impact of generative AI on the legal world places Sánchez del Campo in a prime position to explore and reflect on this new stage: he has navigated the professional changes brought about by the digital age for the past three decades.
Why are LLMs (large language models) expected to have a significant impact on the legal profession?
Because a large part of what lawyers do is language: writing a contract, filing a lawsuit, presenting arguments in court. Previously, AI had a barrier to entry: you had to be somewhat of an expert to leverage it, understanding programming logic and how AI works. Generative AI eliminates this hurdle by introducing an intermediate layer where you can speak to it in plain language and it understands you.
Lawyers are not the only ones who work with language. It seems that many professional sectors will be affected by AI...
The other day, I heard a phrase that encapsulates the issue well: this is the first revolution in which 'white collars' are more threatened than 'blue collars.' Intellectual jobs are more at risk in this revolution than manual ones. That's a radical change. Machines have always helped us, but humans handled the intellectual part. With AI, especially cognitive AI, that has shifted: machines can now also perform intellectual tasks, which significantly impacts professions that rely on language.
But not all LLMs are suitable for everything...
Another important point is that the legal world is a specific domain. It has particular rules, content, laws to know, and case law to apply. Citing a repealed law, for example, is useless. For a lawyer, it's obvious that a Supreme Court ruling carries more weight than a lower court decision, but these machines don't grasp that unless they are trained for it. Ultimately, they are probability-based language models: if an LLM sees a lower court ruling cited more frequently than a Supreme Court ruling, it will present the lower court ruling as an argument.
Are specialized models like Harvey and Leya designed for this?
Yes, they take an underlying language model and build a layer on top of it, enabling the generative AI to understand legal language better and access content more quickly. Essentially, they add a legal layer.
Who will win the AI race, the agility of small firms or the investment capacity of large firms?
In agility, small firms win, provided they have the mindset, and many lack it: they hear about the technology, try it once, see it doesn't work for them, and don't invest enough time. Overall, I believe large firms will clearly win this game. If you tell a firm like Uría Menéndez or Cuatrecasas that a tool can save 20% of an attorney's time, it's music to their ears. Attorney time is one of a firm's biggest costs; reducing it by 20% means saving millions of euros.
How will small firms adopt AI?
An external advisor makes more sense for smaller firms; they can't allocate someone to explore AI without taking time away from other work. Unless a firm is large enough to see technology as an ally and has a tech department to assist it, this remains just a dream.
How should lawyers manage their expectations when starting to use generative AI?
By understanding the tool and the technology well: know what it is capable of, what it isn't, and whether it's only useful if trained properly. Training means having millions of documents and dedicating hours to it. If time constraints keep you from understanding it well, get help from someone who does. Otherwise, you might read reports saying it will change everyone's lives and solve all problems. Expectations are managed by being informed and knowing what the tool can actually do and what it cannot, not because you've heard it, but because you've studied and tested it thoroughly.