I would like to say a big ole Texas-sized “Thank You” to the Texas Bar Journal for publishing Legal Reasoning: Counseling Clients on Managing the Risks of Artificial Intelligence, in the May 2024 issue of the Journal. I wrote this article along with my colleague Kristen Petry, whose work was invaluable in its preparation.
As would only be appropriate for an article of this nature, I asked one of the AI tools to write a blog post about the article. Here it is, completely unedited; judge the accuracy for yourself!
Artificial Intelligence (AI) has undeniably transformed the global landscape. According to Stephen Hawking, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last unless we learn how to avoid the risks.” This assertion underscores the importance of managing the risks associated with AI – a task that falls significantly on legal practitioners advising clients on this innovative technology.
The Role of Legal Counsel in AI Risk Management
From a legal standpoint, AI presents two distinct challenges. First, lawyers must understand the technology and the risks it poses. Second, they must advise clients on the legal aspects of the technology and how to manage associated risks. One of the major hurdles in providing such advice is that the law often lags behind technological advancements. However, lawyers have always navigated uncharted territory by applying known and binding principles, persuasive guidance, and logical reasoning to predict outcomes based on the best available information.
The NIST AI Risk Management Framework
Fortunately, legal practitioners are not operating in a void. The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, has published the Artificial Intelligence Risk Management Framework (AI RMF) to assist individuals and companies in managing AI risks. Although the AI RMF is not a law or regulation, it provides robust guidance on managing AI risks and is often cited as an example of the appropriate standard of care for such issues. The AI RMF addresses risks at every stage of the AI lifecycle, from design and development to deployment and use.
Implementing the AI RMF: Key Considerations for Lawyers
The AI RMF recommends four high-level functions to manage AI risks: Govern, Map, Measure, and Manage. For lawyers, the ‘Govern’ function, which involves understanding, managing, and documenting policies, processes, procedures, and practices, is particularly relevant. Here are some key areas to focus on when advising clients:
- Understand the client: Lawyers should have a thorough understanding of the client’s business, environment, activities, information, and relationships.
- Understand the AI’s intended use: It’s essential to have a clear understanding of the client’s objectives for using AI.
- Analyze legal and regulatory requirements: Lawyers should identify any existing legal and regulatory requirements involving AI that apply to the client and ensure they are understood, managed, and documented.
- Develop transparent policies: Clients should recognize the need to develop, understand, manage, document, and effectively implement transparent policies, processes, procedures, and practices related to AI.
Addressing AI risks is an ongoing process that must be performed throughout the AI system lifecycle. With the NIST AI RMF as a guide, legal practitioners can help clients navigate the complex landscape of AI risk management and ensure the responsible development and use of AI systems.