Artificial intelligence (AI) has the capacity to revolutionize every industry, including the legal profession. But with this potential comes a significant risk of legal malpractice. Attorneys and paralegals who use AI must do so with diligence and remain mindful of their fiduciary duties. Those who don’t could be liable for malpractice. If your lawyer’s use of AI has negatively impacted your case, talk to the legal malpractice team at Stanfield Bechtel Law.
Ways in Which AI Could Result in Malpractice
The use of artificial intelligence in the legal profession is not necessarily a bad thing. Like any other form of software and computing, it can be a valuable tool that enhances an attorney’s ability to deliver quality legal services more efficiently. However, a lawyer who relies too heavily on AI may end up breaching the standard of care owed to the client or failing to adhere to the professional standards expected of attorneys. This is where AI carries a substantial risk of malpractice. Some specific examples include:
- Using AI to write briefs without reviewing them: Briefs are intended to provide written support for a party’s position in a case, but they require supporting legal authority and compelling argument. AI can be, and has been, used to generate written content for briefs, along with the cases cited as authority. However, an attorney must actually review those cases, and the arguments made concerning them, to ensure they support the client’s position.
- Using AI to write briefs that cite false or non-existent authority: A related problem is that the cases cited in support of written briefs sometimes do not even exist. AI programs have been known to include fictional cases and quotations that they “hallucinated.” This practice resulted in sanctions and fines for one New York attorney.
- Chatbots, AI, and attorney-client confidentiality issues: AI-based chatbots work by drawing on volumes of uploaded material to generate their outputs. However, that material may contain sensitive attorney-client information. For example, it might include prompts and questions an attorney has used concerning his or her own client, which in turn could reveal the lawyer’s legal strategy or even advice (which should always remain confidential).
How Attorneys Can Prevent the Misuse of AI
The above-mentioned New York attorney who was sanctioned could have used AI more responsibly. In the sanctioning order, the judge wrote, “[T]here is nothing inherently improper about using a reliable artificial intelligence tool for assistance … But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
Put simply, a lawyer cannot delegate his or her legal skill or tasks to a computer program without reviewing the product it generates. It is improper for an attorney to rely solely on AI to write briefs and other legal materials and simply hope for the best. As we’ve seen, this can lead to invalid or even fictional legal authority making its way into court, which represents a severe lapse in the lawyer’s fiduciary duties and almost certainly constitutes malpractice.
Similarly, law firms must be careful in handling their clients’ sensitive, attorney-client privileged information. Even with clients’ names and identifying details removed, feeding this information into a chatbot can inadvertently reveal a lawyer’s legal strategy to opposing counsel. This, too, may constitute legal malpractice.
Questions About Your Lawyer’s Use of AI? Reach Out to Us
Is your lawyer or law firm using artificial intelligence? Are you concerned that this use of AI might have resulted in poor legal representation or the disclosure of your attorney-client privileged information? Give Stanfield Bechtel Law a call today to discuss a possible legal malpractice case.