From California to New York, U.S. bar associations are setting the standards for the ethical use of AI in the legal field.

Bar associations in several U.S. states have begun to systematically outline best practices and ethical guidelines for the use of generative artificial intelligence (AI) technologies in legal practice. These proactive efforts aim to harness the benefits of AI while protecting the integrity of legal services and client confidentiality.

As AI technology continues to evolve, more and more states, including Texas, Illinois and New Jersey, are expected to develop their own guidelines. The American Bar Association has also expressed intentions to form a group dedicated to examining how AI will impact the practice of law and the ethical issues it raises.

The State Bar of California was one of the first to act, releasing a detailed report last year entitled "Recommendations of the Committee on Professional Responsibility and Conduct on the Regulation of the Use of Generative AI by Licensees." This groundbreaking document emphasizes the importance of understanding both the risks and benefits of AI technologies used in legal services. The guidance insists that lawyers should be aware of their ethical obligations, which may vary depending on the client, the case, the practice area and the type of AI tools employed.

Following California's lead, Florida has introduced comprehensive guidelines for lawyers using generative AI. These guidelines emphasize the need to take reasonable precautions to protect client confidentiality, develop policies for reasonable oversight of AI use, ensure that fees and costs are reasonable, and comply with applicable ethics and advertising regulations.

More recently, the New York State Bar Association issued its "Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence." This document urges lawyers not to compromise the profession's ethical standards and to remain vigilant about information produced by AI tools, such as generative chatbots and automated legal research.

 

Recommendations

One of the key recommendations across all states is strict protection of client confidentiality. California's guidelines, for example, advise attorneys not to enter any confidential client information into an AI system that lacks adequate security protections. In addition, attorneys in Florida are encouraged to obtain informed consent from clients before employing AI tools in their representation.

The question of how attorneys should charge for AI-enhanced services has also been addressed. Florida's guidelines, for example, note that AI tools should increase efficiency, and that fees must remain consistent with attorneys' existing professional obligations: they must be disclosed to the client and remain reasonable.

Both New York and California recommend that educational programs be established to help legal professionals understand the potential risks, benefits and ethical implications of using generative AI. In addition, they highlight the need for continuing education of legal professionals and law students to adequately prepare them for technological advances in their field.

 

Incidents

Cases of misuse and the resulting disciplinary actions underscore the potential risks of AI in legal practice. Notably, one case in Colorado involved an attorney who was suspended for relying on AI-generated case law without proper verification. Such incidents have catalyzed the creation of these guidelines to prevent similar problems in the future.

Another such case occurred in Florida. An attorney with more than 15 years of experience violated both the court's rules and the state's rules of professional conduct by likewise submitting non-existent case law that he had obtained with AI. Opposing counsel realized that the cases cited in the filing did not exist.