Ethical Implications of Generative Artificial Intelligence on the Practice of Law

Sep 13, 2024 1:28:50 PM

  


By Jennifer L. Villier, JD

Newly available generative artificial intelligence (AI) tools have the potential to dramatically ease some of the more tedious and time-consuming aspects of practicing law. However, these tools can also cause trouble for the lawyers who use them. How will AI and the law co-evolve? It is a question that members of the legal profession are now grappling with as federal and state courts and state bars around the country (indeed, around the globe) race to formulate rules and guidelines for using AI in the practice of law.

Sam Altman, CEO of OpenAI, posits that society and technology must “co-evolve,” and that people must decide what will and will not work for them and how they want to use the technology. The Fifth Circuit Court of Appeals, a handful of judges in various federal district courts and state courts, and the state bars of California, Florida, Michigan, New Jersey, New York, and North Carolina have been among the first to issue requirements and preliminary guidance for the use of AI. As these new guidelines have emerged, they have primarily focused on the implications of AI use for a lawyer’s duties of competence, supervision, confidentiality, communication, candor, and ethical compensation. 

 

Duty of Competence 

The American Bar Association (ABA) Model Rules of Professional Conduct require lawyers to actively stay up-to-date with changes in the law and its practice, including the benefits and risks associated with relevant technology (see in particular Rules 1.1, 1.3, 2.1, 5.1, 5.2, and 5.3). As anyone working in an office with both new and more experienced lawyers can attest, lawyers can fulfill this duty by learning to use new technology themselves or by hiring qualified help to leverage it appropriately. But regardless of how each lawyer chooses to fulfill the duty of technical competence, every lawyer must understand in basic terms what the latest AI tools do and do not do. 

 

See how attorneys are (or are not) implementing generative AI tools, and read about other estate planning topics, by downloading the 2024 Industry Trends Report.

 

Reactive AI tools have been around long enough that many lawyers are familiar with their benefits and limitations—predictive text, for example, can be a valuable shortcut when typing recipients’ addresses in an email message but can also produce amusing suggestions when drafting a reply in that same email. In contrast, generative AI tools such as the now-infamous ChatGPT (as well as Gemini, Lexis+ AI, and Westlaw Precision AI-Assisted Research, among others) have only recently become widely available, and users are only beginning to understand their uses and pitfalls. Unlike reactive AI, publicly available generative AI tools draw on vast amounts of internet data to generate responses to prompts using one of their key features: the ability to generate new data. This ability allows generative AI to respond with creative stories, for example, but can be highly problematic in a fact-based setting, because generative AI may also create inaccurate or misleading output that appears to be accurate and true—a phenomenon called an AI hallucination.

A New York attorney with 30 years of experience discovered this the hard way when he used generative AI to draft a brief he filed with the United States District Court for the Southern District of New York. The opposing party and the court discovered that many of the citations in the brief were to cases that did not exist, and many of the quotations that the AI tool incorporated into the text of the brief were wholly fabricated. Upon being called into court for an explanation of what the judge called “legal gibberish,” the offending attorney repeatedly apologized, insisting that he thought ChatGPT was a “super search engine” capable of in-depth legal research and did “not comprehend that ChatGPT could fabricate cases.” The case caught national media attention and “reverberated throughout the entire legal profession,” said David Lat, a well-known legal commentator. “It is a little bit like looking at a car wreck.” 

To avoid causing your own car wreck, use AI responsibly, as many new guidelines emphasize, by taking the extra step of carefully reviewing any AI output to verify its accuracy and appropriateness. Certain generative AI tools can be beneficial in drafting, reviewing, and summarizing standard documents, such as the primary forms for wills, trusts, and other estate planning documents. They may also be used in estate and trust administration proceedings to provide notices to creditors, beneficiaries, and heirs; to assist in accountings; and even to make investment decisions. But do not be tempted to rely blindly on AI to complete these tasks on your behalf. Lawyers must take the time to review documents and exercise their professional judgment: this is not only a lawyer’s value-add to the process but also their ethical obligation.

 

Duty of Supervision 

Just as lawyers are responsible for maintaining their competence, they have a duty under ABA Model Rules 5.1, 5.2, and 5.3 to make reasonable efforts to ensure that other lawyers and nonlawyers in their firm conform to those rules. In the context of the use of generative AI, the California Bar has interpreted this obligation to mean that supervisory lawyers should establish clear policies regarding its use and adopt measures to provide reasonable assurance of compliance by the firm’s lawyers and nonlawyers. 

The Florida Bar has gone even further, asserting that the standards applicable to the supervision of human nonlawyer assistants should be used as guidance when a lawyer uses AI: the attorney must review the work product of generative AI just as they would the work product of a human assistant to ensure its accuracy and sufficiency and may not delegate functions to AI that could constitute the practice of law. Moreover, the obligation to supervise is not excused if the attorney uses generative AI that is managed and operated by a third party. Further, American College of Trust and Estate Counsel Fellow Gerry Beyer recommends that law firms add written policies to their employee handbook regarding the use of AI and train all firm employees regarding the standards that they must meet.

 

Duty of Client Confidentiality 

ABA Model Rule 1.6 requires that lawyers strictly protect their clients’ confidential information. Use of certain AI tools risks breaching that duty: as mentioned above, many generative AI tools draw on vast amounts of internet data to generate responses to a user’s prompt, and each prompt and response may then be retained and used to train the model, becoming eligible fodder for subsequent responses to other users. As a result, any confidential information entered into an AI prompt may become publicly available. 

When faced with similar risks with the advent of cloud computing and the attendant storage of data on remote servers, many state bars issued guidance in line with lawyers’ pre-existing duty to take reasonable precautions to ensure that the confidentiality of client information is maintained: They clarified that lawyers must ensure that the technology service provider “has in place, or will establish, reasonable procedures to protect the confidentiality of information to which it gains access, and . . . that it fully understands its obligations in this regard.” The ABA further instructed that lawyers are “well-advised to secure from the service provider . . . a written statement of the service provider’s assurance of confidentiality.”

Before using any AI tool, lawyers must determine how that particular tool treats the information it receives to ensure that they are not inadvertently breaching their duty of confidentiality or violating attorney-client privilege: How is the information you input stored, retrieved, or retained? Does the AI tool allow you to track and delete any privileged and confidential information? Does the AI tool use the information you input to train its algorithm and improve its engine? Depending on the answers to these inquiries, the AI tool may put client confidentiality at risk. 

In addition to confirming that the chosen AI tool is adequately secure to maintain the confidentiality of the client’s information (and obtaining written confirmation from the AI service provider), lawyers may wish to mitigate risk further by entering only sanitized, generalized information in AI prompts and avoiding anything specific to their client. Note, however, that the use of an AI tool also creates cybersecurity concerns: a cyber intrusion into an AI tool could give a hacker access not only to any data an attorney has entered but also to the attorney’s searches, allowing the hacker “access into the mind of a lawyer and the arguments they might be raising.”

The Florida Bar noted that the use of an in-house generative AI program rather than one that requires data to be stored by an outside, third-party generative AI program may mitigate confidentiality concerns. However, a proposed formal opinion issued by the North Carolina State Bar cautions that even an in-house program that seems more secure because it is maintained and run on local servers could be more vulnerable to attack if it lacks security features used by larger companies with greater cybersecurity capabilities. Consequently, an attorney who plans to use an in-house generative AI program should consult information technology and cybersecurity professionals about how best to protect client information stored on a local server.

 

Duty to Communicate and Disclose; Duty of Candor 

Preliminary guidelines issued thus far (expanding on existing Model Rules 1.2, 1.4, and 3.4) emphasize the preference for client disclosure or consent before significant AI use. They also advise lawyers to develop a policy governing AI use in their offices and to include a description of that policy in client engagement letters. Among the states that have made recommendations, guidance varies regarding whether informed consent or merely disclosure is required before an attorney can input confidential client information into a generative AI program. 

The Florida Bar recommends that lawyers obtain informed consent from a client before using a third-party generative AI program if the use would involve the disclosure of any of the client’s confidential information. However, the Florida Bar further indicates that if the use of a generative AI program does not involve disclosing a client’s confidential information to a third party, a lawyer is not required to obtain informed consent. The California Bar’s guidance is more relaxed, recommending only that attorneys “should consider disclosure to their client that they intend to use generative AI in the representation, including how the technology will be used, and the benefits and risks of such use.” 

Further, disclosures of the use of AI chatbots may be required in communications with prospective clients. For example, the Florida Bar recommends that attorneys who utilize AI chatbots on their websites for client intake ensure that the chatbot is not “overly welcoming” to prospective clients. Rather, it should immediately and clearly disclose that it is an AI chatbot, should not provide legal advice, and should include “clear and reasonably understandable disclaimers” to ensure that a lawyer-client relationship has not been established without the attorney’s knowledge. In addition to state guidance regarding client disclosures or consents related to the ethical use of generative AI, some courts now require counsel to inform the court if specific AI tools are used to prepare any documents filed with the court, and in the case of the Fifth Circuit Court of Appeals, to certify that any generated text was reviewed for accuracy and approved by a human. 

Many commentators expect this guidance to soften as the use of generative AI becomes less novel and more commonplace, similar to the evolution of expectations around lawyers’ use of email for client work and e-filing systems with the courts: “The disclosure question will become less relevant as A.I. becomes more ubiquitous. . . . It’s getting harder and harder to define technology as either an A.I. tool or a non-A.I. tool. . . . We’re getting close to the point that A.I. will somewhat influence everything.”  

 

Duty Regarding Ethical Compensation 

Another significant issue facing lawyers who use AI tools to assist with drafting and reviewing client documents is how to appropriately bill clients for that work, as reflected in Model Rule 1.5. If drafting a client’s last will and testament previously took three hours, for example, and AI tools cut that time to one hour (including the lawyer’s careful review of the AI output), the lawyer may bill only for the hour actually worked: ethical guidelines require that billing practices be “accurate, honest and not excessive,” so the client benefits from the lower fee that the increased efficiency makes possible. Increased efficiency may also benefit the lawyer by freeing time to perform work for additional clients.

It is worth considering whether continuing to bill by the hour for work performed remains the ideal compensation structure if a lawyer anticipates using generative AI tools to assist with a significant amount of client work. It may make more sense for lawyers to use a compensation structure that incorporates flat fees for specified work that may utilize generative AI in the drafting process: such a compensation structure is perfectly appropriate “provided the flat fee charged is not clearly excessive and the client consents to the billing structure.”    

 

What’s Next for AI and Attorneys

AI technology continues to develop and change at a frenetic pace. Nevertheless, ABA Model Rule of Professional Conduct 1.1, its accompanying comments, and similar state ethics rules require lawyers to continue to make reasonable efforts to keep abreast of changes in the law and its practice, including the use of AI technology in the practice of law and its related risks and benefits. As noted in preliminary guidelines issued by the New Jersey Supreme Court, although “AI does not change the fundamental duties of legal professionals, lawyers must be aware of new applications and potential challenges in the discharge of such responsibilities.” 

Attorneys should be mindful that generative AI may be ethically used “only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations.” Fortunately, it is likely that additional states will soon provide guidance to help attorneys ensure that their use of generative AI programs complies with their professional and ethical obligations. 

In the meantime, prudent attorneys will make reasonable efforts to educate themselves about the risks and benefits of any AI program they intend to use, obtain informed consent from clients before using generative AI, and exercise caution in its use, particularly taking steps to avoid the disclosure of confidential client information and verifying the accuracy of any AI output.

 


