The Judicial Council’s task force on generative artificial intelligence
The California courts strive to be at the forefront of AI legal technologies
It is undeniable that advances in technology have changed the practice of law, particularly in the last 40 years. The changes can be seen by reading Supreme Court or appellate court opinions from the early 1900s up to the 1980s: eliminating the quill pen, then the ink pen, and abandoning typewriters for word-processing systems and then computers led to much longer opinions and far more footnotes. And replacing large libraries filled with books with research on electronic platforms like Westlaw and Lexis launched us into a new realm as we moved into the millennium, making it easier for small firms, solo practitioners, and others to do far more research when drafting briefs.
It’s not surprising that recent advances in artificial intelligence (“AI”) will further change the practice of law. There is no doubt that AI can produce tremendous results, but it also raises many ethical and practical concerns. Those who refuse to learn about AI are likely losing out on many of its benefits. And those who jump in whole hog without recognizing its limitations need to learn to be careful.
In the end, what history teaches is that new technology, like AI, is not going away and we need to spend time analyzing its pros and cons so that, as lawyers, we can take advantage of the benefits and stave off the perils that come with allowing a computer to single-handedly do the hard work necessary to accurately draft briefs or opinions, run a law firm, or run a court system.
In 2024, recognizing the opportunities to be mined from AI, or more specifically generative AI (“Gen AI”), Chief Justice Patricia Guerrero moved quickly to put the California state courts out in front in analyzing this technology and developing rules to guide the courts in its use. In her March 2024 State of the Judiciary address, the Chief Justice set her sights on generative artificial intelligence as a major priority for the California judicial branch. She explained, “Society, government, and, therefore, our court system must address the many issues and questions presented by the developing field of artificial intelligence. We must do this in careful and deliberative fashion.”
Shortly thereafter, the Chief Justice tasked Administrative Presiding Justice Mary Greenwood (Sixth Appellate District) and Judge Arturo Castro (Alameda County Superior Court) to help identify foundational questions the California court system should consider regarding the appropriate uses of Gen AI.
The 2024 report on Gen AI
In their 2024 report to the Judicial Council, Justice Greenwood and Judge Castro recommended that the judicial branch use Gen AI, but with limitations and safeguards. Their report made clear that avoiding this technology might deprive the branch of significant benefits, including improving administration and management, enhancing research and analysis, and increasing access to justice for the public. As Judge Castro reported, Gen AI can “help walk self-represented litigants through the process, forms, and procedures they will encounter at the courthouse.”
But Justice Greenwood cautioned, “It’s important to note that generative AI is only a tool.” “It’s not an end, and it’s not a substitute for judicial decision making and due process.”
Following the presentation of Justice Greenwood’s and Judge Castro’s report, the Chief Justice created a new judicial-branch task force to evaluate Gen AI for its potential benefits to courts and court users while mitigating risks to safeguard the public. In doing so, she made clear that “Generative AI brings great promise, but our guiding principle should be safeguarding the integrity of the judicial process.” “That means it will be essential for the branch to assess what protections are necessary as we begin to use this technology.” The Task Force was formed to, among other things, “oversee the consideration and development of branch actions that address generative AI, such as rules of court, technology policies, educational programs, and legislative proposals;” “work with Supreme Court ethics committees to develop guidance on how judicial officers should navigate ethical issues associated with generative AI;” and “provide education for judicial officers, court professionals, and council staff that focuses on the uses, benefits, and risks of generative AI.”
The Task Force comprises Administrative Presiding Justice Brad Hill (chair) (Fifth Appellate District), Justice Carin Fujisaki (First Appellate District), Judge Kyle Brodie (San Bernardino Superior Court), Administrative Presiding Justice Mary Greenwood, Judge Arturo Castro, and David Yamasaki (CEO of Orange County Superior Court). It also includes the author of this article. I should point out that the comments and opinions expressed in this article are not those of the Task Force or other members of the Judicial Council unless expressly quoted.
Since its formation, the Task Force has met almost monthly and heard from numerous individuals on various issues relating to Gen AI, including ethical and practical concerns. These included presentations from trial court and appellate court personnel (both administrative and legal), legal scholars and other experts on Gen AI, and representatives of Westlaw and Lexis. Information on the formation and work of the Task Force can be found at https://courts.ca.gov/advisory-body/artificial-intelligence-task-force.
The 2025 template Model Use Policy
In February 2025, Justice Greenwood presented the work and a report of the Task Force to the Judicial Council. As part of the report, the Task Force proposed a Rule of Court on Gen AI model use policies and a Standard for Judicial Administration. In addition, the Task Force drafted a template Model Use Policy (“Model Policy”) for use by those courts that permit the use of Gen AI. These were distributed publicly as an Invitation to Comment, with a deadline for comments in April 2025.
As noted in the Invitation to Comment, the “model policy addresses the confidentiality, privacy, bias, safety, and security risks posed by generative AI systems and addresses supervision, accountability, transparency, and compliance when using those systems. Courts can adopt the model policy as written or add, modify, or delete provisions as needed to address specific goals or operational requirements. The model policy does not require Judicial Council approval. . . , but the task force welcomes comments on the model policy, particularly from courts. The task force asks for specific comments from courts on whether the model policy should address additional issues and whether there are additional guidance documents that would aid courts in developing or applying a generative AI use policy.”
Under proposed rule 10.430 of the California Rules of Court, if a Superior Court, Court of Appeal, or the Supreme Court permits the use of Gen AI for court-related work, that court must adopt a policy that applies to the use of Gen AI by court staff for any purpose, and by judicial officers for any task outside their adjudicative role. The proposed Standard 10.80 covers the use of Gen AI by judicial officers for tasks within their adjudicative role.
The proposed policies adopted under Rule 10.430 must:
Prohibit the entry of confidential, personal identifying, or other nonpublic information into a public generative AI system, meaning any system that is publicly available or that allows information submitted by users to be accessed by anyone other than judicial officers or court staff;
Prohibit the use of generative AI to unlawfully discriminate against or disparately impact individuals or communities based on membership in certain groups, including any classification protected by federal or state law;
Require court staff and judicial officers who generate or use generative AI material to review the material for accuracy and completeness, and for potentially erroneous, incomplete, or hallucinated output;
Require court staff and judicial officers who generate or use generative AI material to review the material for biased, offensive, or harmful output;
Require disclosure of the use or reliance on generative AI if generative AI outputs constitute a substantial portion of the content used in the final version of a written or visual work provided to the public; and
Require compliance with all applicable laws, court policies, and ethical and professional conduct rules, codes, and policies when using generative AI.
(Invitation to Comment, SP25-01, at pp. 2-3.)
The proposed Standard 10.80 covers the use of generative AI by judicial officers for tasks within their adjudicative role. Its provisions are similar to those in rule 10.430. The standard states that judicial officers:
Should not enter confidential, personal identifying, or other nonpublic information into a public generative AI system;
Should not use generative AI to unlawfully discriminate against or disparately impact individuals or communities based on membership in certain groups, including any classification protected by federal or state law;
Should review generative AI material, including any materials prepared on their behalf by others, for accuracy and completeness, and for potentially erroneous, incomplete, or hallucinated output;
Should review generative AI material, including any materials prepared on their behalf by others, for biased, offensive, or harmful output; and
Should consider whether to disclose the use of generative AI if it is used to create content provided to the public.
(Invitation to Comment, SP25-01 at p. 4.)
As of the April 17, 2025 deadline, many comments had been received from a broad spectrum of commenters, including members of the public, justices and judges, court administrative staff, and legal scholars. Those comments are under review and may result in modifications to the proposed Rule of Court and Standard, as well as to the Model Policy. The Task Force will provide a further report to the Judicial Council regarding these proposals, and ultimately the Judicial Council will vote on the matters.
It is also anticipated that the Task Force will continue to review various other issues and make recommendations for the courts. The relative infancy of Gen AI will undoubtedly require ongoing review by the Judicial Council, whether through the Task Force or another committee or working group, to ensure that the courts benefit from, but avoid the pitfalls of, this new and quite remarkable technology.
Further, the author recommends that lawyers take note: the courts recognize the benefits of this technology but are also proactively working to make certain that the potential perils associated with Gen AI do not infect the court system. The growing number of publicly disclosed instances of lawyers being sanctioned for allowing Gen AI to draft briefs or other documents without first guarding against the known presence of hallucinations, whether on public platforms like Claude and ChatGPT or on traditional legal research platforms like Westlaw and Lexis, is a cautionary tale. Former President Reagan’s well-known maxim, “trust but verify,” should echo in your head whenever you elect to utilize Gen AI.
Gretchen Nelson
Gretchen Nelson graduated from Georgetown University Law School in 1983. Having earned the respect and clout of her peers, Ms. Nelson has drawn attention to increasing diversity within the legal profession, both in law firms and on the bench, and to helping female attorneys stay active in the law while balancing family life. In 2015, Ms. Nelson and Mr. Fraenkel opened the firm of Nelson & Fraenkel LLP, where she continues to practice in the area of complex class action litigation while handling business tort claims. She also represents the victims of accidents on cruise ships and other maritime claims. https://nflawfirm.com/nelson.
Copyright © 2025 by the author.
For reprint permission, contact the publisher: Advocate Magazine
