March 2026 by Cliff McKinney
Every significant technological change in law, including computers, email, and online research, has required practical tools for ethical implementation. Artificial intelligence is no different. This final installment of the artificial intelligence ethics series offers a “starter kit” for responsible adoption by law firms: a Model Law Firm Policy on the Responsible Use of Artificial Intelligence and a Model Training Program for Legal Professionals.
The model policy provides firms with a framework for governance. This includes how to vet tools before approving them, how to maintain client confidentiality, how to verify all artificial intelligence outputs, and how to integrate disclosure obligations into engagement letters. The training program complements the policy by equipping lawyers and staff with the skills to use approved tools competently and securely. It outlines both baseline training for general use and advanced modules for higher-risk tools, emphasizing verification, confidentiality, and supervision. These resources are not one-size-fits-all solutions, but templates to be adapted by firms of different sizes and practice areas.
In full disclosure, I created both of these resources using artificial intelligence. I employed prompt engineering techniques to develop and refine the attached final products, which were not merely generated by artificial intelligence in a few quick passes. In fact, they took hours of refinement and adjustment to create. Beyond providing model forms, I also hoped to demonstrate the work product that can result from combining artificial intelligence tools with traditional legal and writing techniques.
With this installment, the Ethics of Artificial Intelligence for Lawyers series comes full circle. We have moved from the first sanction cases to the ABA’s initial guidance, then to legislative and regulatory developments, and now to practical steps that firms can implement today. Artificial intelligence is here to stay, and lawyers must adapt. Adapting does not mean abandoning judgment to machines. Lawyers can smooth their transition to using artificial intelligence by creating policies and training that preserve professional responsibility in an artificial intelligence-driven world.
But ethics are only half the story. Competence in the age of artificial intelligence requires not just safeguards, but also mastery of the tools themselves. That next step will be the focus of the companion series, Prompt Engineering for Lawyers, which will demonstrate how attorneys can structure prompts, assign personas, refine outputs, and stress-test arguments to use artificial intelligence productively and competently. If the ethics series has shown that resistance to artificial intelligence is futile, the next series will show how lawyers can be empowered to direct, rather than be directed by, this technology.
The above is an excerpt of an article published for Arkansas Law Notes. This is the sixth installment of a ten-part series on the use of artificial intelligence in the legal profession. You may click the link below to read the full article.
A managing member of Quattlebaum, Grooms & Tull PLLC, Cliff McKinney speaks nationwide on the rapidly evolving role of AI in law practice, covering cutting-edge tools, prompt engineering, ethical obligations, risk management, and actionable strategies lawyers can implement immediately. He has presented for organizations including the American Bar Association (ABA), the American College of Real Estate Lawyers (ACREL), and the American College of Mortgage Attorneys (ACMA), and he has written extensively on AI for ACMA, USLAW, and the Arkansas Law Review. Mr. McKinney holds a Prompt Engineering Specialization certification from Vanderbilt University and is a Fellow of both the American College of Real Estate Lawyers and the American College of Mortgage Attorneys.