
What you need to know: AI Disclosure Rules in Legal Filings

Courts across the United States are beginning to address attorneys’ use of generative AI in drafting court documents – here's what you need to know.
Written by
Jamie Eggertsen
Published on
February 20, 2025
Introduction

Courts across the United States are beginning to address attorneys’ use of generative AI in drafting court documents. A handful of jurisdictions have enacted or proposed rules requiring lawyers to disclose AI assistance in filings, largely spurred by high-profile incidents where AI-generated content included fake case citations. These new rules vary by location – some mandate explicit disclosure and certification of human review, while others caution against AI’s unreliability but stop short of requiring disclosure.

In this article, we survey key jurisdictions with AI disclosure requirements (enacted or proposed) and summarize compliance mandates as they currently stand. We also outline best practices law firms are adopting – from embedding standard AI-disclosure clauses in templates to using certification attachments – and how AI-forward firms are integrating these steps into their workflows to remain compliant.

Jurisdiction-Specific AI Disclosure Regulations
Texas

Federal courts in Texas have taken a leading role in requiring AI usage disclosure. In mid-2023, U.S. District Judge Brantley Starr (N.D. Texas) issued a standing order directing attorneys to certify either that no generative AI was used in preparing filings or that any AI-generated content was thoroughly verified by a human. The order warns of AI “hallucinations” (fabricated quotes or citations) and makes clear that filings missing the required certificate of compliance will be stricken. Judge Starr provided a template certification for attorneys to file, underscoring that lawyers will be held accountable under Rule 11 for all content, “regardless of whether generative artificial intelligence drafted any portion.”

Shortly after, the Eastern District of Texas implemented a local rule amendment (effective Dec. 1, 2023) to address AI in court filings. While it doesn’t outright ban AI, it cautions attorneys that if they choose to use tools like ChatGPT or Bard, they remain fully bound by Rule 11 and “must review and verify any computer-generated content” to ensure accuracy. Comments to the rule stress that technology can aid lawyers but “should never replace the lawyer’s independent judgment,” and flag the risk of AI producing inaccurate content. In practice, this means Texas lawyers using AI must double-check all AI outputs against reliable sources (e.g. traditional legal research databases) before filing. The Texas approach combines a mandatory certification (in Judge Starr’s court) with broader local-rule guidance reminding all counsel to “trust, but verify” any AI assistance.

Pennsylvania

In Pennsylvania, a prominent federal judge has implemented AI disclosure rules. Judge Michael Baylson of the Eastern District of Pennsylvania issued a standing order requiring any attorney (or pro se party) appearing before him to affirmatively disclose AI use in the preparation of any filing. The order mandates “a clear and plain factual statement” in the document if an AI tool was used in any way, and further requires the lawyer to certify that every citation to law or the record has been verified for accuracy. In practice, Pennsylvania attorneys must attach or include a notice in filings for Judge Baylson if, for example, they used ChatGPT to draft a brief or Harvey AI to summarize evidence. The certification of accuracy mirrors other courts’ emphasis that AI cannot be blindly trusted – attorneys must personally check all references before submitting work product. Failure to disclose AI assistance under this order could violate the court’s requirements and potentially Rule 11. Pennsylvania’s approach focuses on transparency (telling the court about AI involvement) combined with a human verification requirement to prevent AI-spawned inaccuracies.

New Jersey

In the District of New Jersey, Judge Evelyn Padin has proactively addressed generative AI in her courtroom procedures. In late 2023, Judge Padin amended her General Pretrial and Trial Procedures to require a mandatory disclosure whenever attorneys use generative AI for court filings. Specifically, if any portion of a filing was drafted with an AI tool (e.g. ChatGPT or Google Bard), the lawyer must identify the AI program used, identify which sections of the document were AI-generated, and certify that a human diligently reviewed the AI’s output for accuracy and relevance. This disclosure/certification must accompany the filing, ensuring the court is informed of AI involvement. Judge Padin’s rule does not forbid using AI; rather, it insists on transparency and human oversight. New Jersey attorneys appearing before her should therefore build into their drafting process a step to add a “GAI Disclosure” statement with those three elements for any AI-assisted writing. By naming the specific AI tool and vetting its work, lawyers comply with both the letter and spirit of the rule – maintaining candor with the tribunal about how the document was generated, and affirming that they have not simply pasted in AI content without review.

North Carolina

North Carolina has seen both federal court action and state-level proposals on AI in legal filings. In June 2024, the U.S. District Court for the Western District of North Carolina (Charlotte division) issued a standing order significantly tightening AI use rules. It requires every brief or memorandum to include a certification affirming two points: (1) that no generative AI was used in researching or drafting the document (allowing only standard legal research platforms like Bloomberg, Fastcase, Lexis, or Westlaw), and (2) that a human (attorney or supervised paralegal) has verified “every statement and citation for accuracy” in the filing. This effectively bars using tools such as ChatGPT to draft briefs in that court, unless perhaps with prior leave. The order was motivated by reports of AI-generated filings containing “fictitious case cites and unsupported arguments,” reflecting the court’s concern over reliability. Pro se filers are also subject to these requirements, meaning even self-represented parties must certify no unauthorized AI usage.

At the state level, North Carolina lawmakers have considered legislation on the issue. Notably, North Carolina House Bill 97 (2023–24) proposed requiring litigants to disclose any use of generative AI in court filings. While H.97 primarily dealt with education appropriations, it included an “AI Pilot” provision mandating that if AI was used in preparing evidence or filings, it must be promptly disclosed. This legislative proposal (as summarized by the NCSL) explicitly “requires litigants to disclose use of generative AI in court filings,” signaling an emerging interest in codifying AI disclosure at the state court level. (As of this report, the bill’s status is uncertain, but it reflects the broader trend of states examining AI’s role in legal procedure.)

Illinois

Illinois offers a contrasting stance on AI disclosure. While some individual federal judges in Illinois have imposed requirements (discussed below), the Illinois Supreme Court in December 2024 issued guidance that discourages mandatory AI disclosures in state courts. Acknowledging the recent “high-profile instances of misuse” of generative AI in litigation (e.g. briefs with nonexistent case law), the Illinois Supreme Court nonetheless took a “pro-AI” stance: it encourages responsible, supervised use of AI by lawyers and judges and “recommends that Illinois state court judges not require lawyers to disclose the use of AI in drafting pleadings.” The Court’s guidance emphasizes that existing ethical and procedural rules (like Rule 11 and duties of competence) are sufficient to govern AI use, without needing special disclosure mandates. In other words, as long as attorneys exercise due diligence – understanding AI’s limitations, verifying its outputs, and maintaining confidentiality – the court system shouldn’t single out AI usage for reporting. This position is rooted in viewing AI as just another tool, akin to a spell-checker or research software, that doesn’t warrant unique disclosure to the court so long as lawyers uphold their existing obligations. Illinois thus far is unique in explicitly advising against disclosure requirements, favoring integration of AI into practice under the general duty of competence.

On the federal side in Illinois, however, some judges do require disclosure. Magistrate Judge Gabriel Fuentes (N.D. Ill.) was among the first, adopting a standing order on May 31, 2023 requiring attorneys to notify the court of any AI use in filings. Similarly, Magistrate Judge Jeffrey Cole (N.D. Ill.) issued an order stating: “Any party using AI in the preparation of materials submitted to the court must disclose in the filing that an AI tool was used to conduct legal research and/or was used in any way in the preparation of the submitted document.” Judge Cole’s rule also cautions that relying on an AI tool won’t excuse a failure to make a reasonable inquiry under Rule 11. These Illinois federal orders require a straightforward disclosure within the filing (e.g. a footnote or a statement in the brief) if generative AI played any role in drafting or research. Notably, Judge Iain D. Johnston (N.D. Ill.) has a similar standing order titled “Artificial Intelligence (AI)” on his court page, indicating multiple judges in the district are aligned on this issue. In summary, Illinois state courts are currently disclosure-optional (with an inclination against requiring it), whereas Illinois federal courts (select judges) have embraced disclosure mandates to ensure transparency and accuracy in AI-assisted filings.

California

In California, at least two federal judges have addressed AI usage in filings. Magistrate Judge Peter Kang (N.D. California) instituted a requirement that any document submitted to his court which was drafted with any AI assistance must be clearly identified as such. The order gives lawyers options on how to disclose: for example, flagging AI-generated text in the document’s title or caption, including a notation in a preliminary table or statement, or filing a separate notice concurrently with the document. Moreover, counsel must maintain records of which portions of text were generated by AI, in case the court asks for more detail. This means a lawyer using AI to draft sections of a brief for Judge Kang should, at filing time, either label the brief accordingly (e.g. “Brief – Contains AI-Assisted Drafting”) or file a notice stating, for instance, “Portions of this brief were drafted with the assistance of [AI tool].” The lawyer should also keep an internal record of exactly what content was AI-generated versus human-written. This level of disclosure helps the court evaluate the submission’s reliability and ensures accountability for specific passages.

Meanwhile, Judge Rita F. Lin (N.D. California) has taken a slightly different approach. Rather than requiring an explicit disclosure, Judge Lin’s standing order (May 17, 2024) makes it clear that use of generative AI in drafting filings is “not prohibited,” but that attorneys must personally verify the accuracy of any research conducted by such means. Her guidance emphasizes that counsel “alone bears ethical responsibility for all statements made in filings,” regardless of whether an AI tool was involved. In essence, Judge Lin permits AI’s use but puts attorneys on notice that if they use it, they cannot defer blame to the AI for errors: they must double-check citations, quotations, and facts independently. This approach reinforces existing duties (competence, candor, Rule 11) without an affirmative disclosure requirement. California state courts have not yet issued specific rules on AI in filings, so practitioners primarily must heed any judge-specific orders like those of Judge Kang and Judge Lin. In summary, the trend in California’s federal courts is to allow innovative AI tools but demand strict human oversight, with some judges also insisting on clear labeling of AI-assisted documents.

Michigan (Proposed)

In late 2023, the Eastern District of Michigan joined the movement by proposing a formal local rule on AI disclosure. On December 8, 2023, the Eastern District’s judges published a Notice of Proposed Amendments that included a new Local Rule 5.1(a)(4) titled “Disclosing Use of Artificial Intelligence.” The proposed rule defines “Artificial intelligence” and “Generative AI,” and then mandates: “If generative AI is used to compose or draft any paper presented for filing, the filer must disclose its use and attest that citations of authority have been verified by a human being... and that the language in the paper has been checked for accuracy by the filer.” In other words, every filing created with the help of AI would require an accompanying statement (presumably within the document or as a certificate) disclosing that fact and confirming two key compliance steps: all legal citations were checked against actual sources, and the content was reviewed for correctness by a human. The proposal was opened for public comment through January 19, 2024. According to a Law360 report, this Michigan rule would “require lawyers to disclose any time they use AI to help them with written filings and verify its citations are real.” If adopted, it would make the Eastern District of Michigan one of the first courts to bake an AI disclosure obligation into its formal local rules, as opposed to individual judges’ standing orders. Michigan’s move highlights the institutionalization of these requirements – signaling that verifying AI outputs and notifying the court may become a routine part of filing in that jurisdiction.

Compliance Requirements and Common Themes

Though the specifics vary, these jurisdictional rules share common compliance requirements aimed at ensuring the integrity of AI-assisted filings:

  • Explicit Disclosure: Many courts demand that if AI was used, the filing must say so clearly. This could be in the document’s text or title (per Judge Kang’s order), via a separate notice or certificate (as in Judge Starr’s and Judge Padin’s requirements), or in a standardized compliance statement (per the language the Fifth Circuit proposed, though that court ultimately declined to adopt the rule). The disclosure usually must include which tool was used (e.g. ChatGPT or Eve) and sometimes even which sections it drafted. The goal is transparency with the court.
  • Human Verification & Accuracy Checks: Virtually all the rules insist that AI is not a substitute for a lawyer’s diligence. If AI contributed to a filing, attorneys must certify they have verified all citations and quotations against original sources and checked the AI text for factual/legal accuracy. For example, the Western District of North Carolina requires an attorney or paralegal to personally verify “every statement and citation” in an AI-influenced brief. These measures address the risk of AI “hallucinations” by forcing a human in the loop to confirm the truth of the content.
  • No Reliance on AI for Legal Conclusions: Some jurisdictions effectively prohibit using AI for the substantive legal argument without leave. Texas Judge Starr flatly stated that current generative AI platforms are “not [fit] for legal briefing” due to their propensity for errors. WDNC’s order bans AI usage except for approved research databases. Even where AI use isn’t banned, judges like Rita Lin and the Illinois Supreme Court remind attorneys that AI cannot shoulder ethical or legal responsibilities – the lawyer must exercise independent judgment and cannot defer to AI outputs.
  • Scope of Disclosure – Research vs Drafting: Some rules distinguish between AI-assisted research and drafting. For instance, Judge Cole (N.D. Ill.) requires disclosure if an AI tool was used “to conduct legal research and/or… in any way in the preparation” of the document. Similarly, Michigan’s proposal covers AI used to “compose or draft” a paper. Courts generally want disclosure of any AI role that contributes to the content of the filing, whether it’s writing a section of a brief or just generating case summaries that inform the writing. The safest interpretation for lawyers is to disclose if AI had a material hand in wording or researching the document’s substance.
  • Applicability to All Filers: Notably, several orders apply the rules to pro se litigants as well (e.g. WDNC’s order explicitly includes pro se filers). This underscores the courts’ interest in the accuracy of all filings, not just those by attorneys. However, enforcement and awareness may be more challenging with pro se parties.

In summary, whether by standing order, local rule, or proposed rule, the trend is that courts want: (a) to be informed when AI was involved in a filing, and (b) to be assured that a human attorney has vetted the work as if they’d done it manually. Where one jurisdiction might simply remind lawyers of their Rule 11 duties (as Illinois and the Fifth Circuit ultimately do), another might explicitly require a filed certification of compliance. For practicing attorneys, the key is to know the rules of the forum and be prepared to follow any AI-related mandates to the letter.

Law Firm Best Practices for AI Disclosure Compliance

With a patchwork of AI disclosure rules emerging, law firms are developing strategies to ensure every court filing meets the necessary requirements. Proactive firms are not waiting until an issue arises – they are updating their workflows and templates now to account for AI usage. Here are the best practices we’ve seen from AI-native law firms: 

Standard Disclosure Language in Templates

To streamline compliance, many firms have embedded AI disclosure clauses into their document templates for briefs, motions, and pleadings. This might be a section in the template that can be easily filled out or removed. For instance, a brief template could include a ready-made paragraph (perhaps in the footnotes or an appendix) reading: “Certification of AI Assistance: Counsel [did not utilize]/[utilized] a generative AI tool (specify) in the preparation of this document, and any AI-generated content has been reviewed and verified for accuracy.” Lawyers can then quickly edit this language to fit the situation and satisfy court requirements. By having disclosure text on hand, firms reduce the risk of scrambling to comply at the last minute. 

Best practice: incorporate such standard language into the boilerplate of certificates of service or in the signature block area (where some courts expect certification), so it’s not overlooked.
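For firms that manage templates programmatically, the clause can live as a single fill-in snippet that drafting software populates automatically. Below is a minimal sketch in Python of how such a snippet might be rendered; the wording mirrors the example clause above, and the helper name and structure are illustrative assumptions, not any court's prescribed form:

from string import Template

# Hypothetical fill-in version of the template clause above. The wording
# is illustrative only - always conform it to the forum's actual standing
# order or local rule before filing.
AI_DISCLOSURE = Template(
    "Certification of AI Assistance: Counsel $usage a generative AI tool"
    "$tool_clause in the preparation of this document, and any AI-generated"
    " content has been reviewed and verified for accuracy."
)

def render_disclosure(used_ai: bool, tool: str | None = None) -> str:
    """Fill in the boilerplate disclosure clause for a given filing."""
    return AI_DISCLOSURE.substitute(
        usage="utilized" if used_ai else "did not utilize",
        tool_clause=f" ({tool})" if used_ai and tool else "",
    )

print(render_disclosure(True, "Eve"))   # AI was used: names the tool
print(render_disclosure(False))         # no AI use: negative certification

Because the language lives in one place, a wording update (say, after a new standing order) propagates to every template that calls it.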

Using Standardized Certification Attachments

In courts that require a separate certification document (such as Judge Starr’s court in N.D. Texas), law firms are keeping form certificates ready to go. Many courts provide templates (Judge Starr’s standing order included a “Certificate Regarding Judge’s AI Requirements” form). Firms have saved these forms in their document management systems. Some have created internal AI Disclosure forms that cover the needed info (tool used, sections affected, verification done) which can be attached to any filing when required. By standardizing the attachment, paralegals or junior lawyers can easily include it with the filing packet. As an extra precaution, some firms attach these certificates even when not explicitly mandated, if a particular judge has informally signaled interest in AI usage.

Here is an example of a standard disclosure that some of Eve’s customers are currently using: 

"This document was generated with the assistance of Eve. I hereby  certify  under  penalty  of  perjury  that,  despite  reliance  on  an  AI  tool,  I  have  independently  reviewed  this  document  to  confirm  accuracy,  legitimacy,  and  use  of  good and applicable law, pursuant to Rule 11 of the Federal Rules of Civil Procedure." 

Best practice: attach a standard certification to every document you submit to the court that was prepared with AI assistance.

As a bonus, here are a few more processes and recommendations that can help keep your firm compliant and protected while leveraging the advantages of AI.

  • Monitoring Jurisdictional Rules: Keeping track of where AI disclosure is required is step one. Firms are using trackers and resources to stay updated on the fast-changing landscape. For example, legal tech platforms have created AI rules trackers and maps (e.g. Ropes & Gray’s color-coded map of U.S. courts, or Law360’s list of AI orders). LexisNexis has even provided a “Which Courts Make You Disclose AI Use” tool, paired with a standard certification template for compliance. Ensuring attorneys check the local rules and judge’s standing orders for any AI provisions before filing is becoming a routine part of case management. Some firms maintain internal digests or practice group memos summarizing the disclosure mandates in key jurisdictions.
  • Workflow Flags & Checklists: Compliance-oriented firms have added AI disclosure checks into their drafting workflow. For example, a litigation team may update its pre-filing checklist to include: “Did we use any generative AI in producing this document? If yes, have we included the required disclosure or certification?” This kind of reminder, perhaps in the project management software or even a pop-up in the document template, helps prevent omissions. In AI-heavy practices, some firms require attorneys to log AI usage for each deliverable – not only for client billing clarity, but to trigger a compliance step if that deliverable is a court filing. By logging “Used ____ platform to generate first draft of Section II,” the system or supervising attorney can then ensure the final document contains the necessary notice and that all AI-produced material was reviewed (see the sketch following this list). AI-native firms often form internal committees or appoint “AI champions” to oversee such processes, ensuring tech adoption doesn’t run afoul of court rules.
  • Human Review Protocols: Beyond the formal disclosure text, firms are instituting rigorous review protocols for AI-derived content to meet the verification requirements. Many now require a second attorney (or the primary attorney with extra scrutiny) to cite-check and fact-check any AI-generated sections of a brief against original sources. This mirrors what the court rules expect – e.g. verifying that every case citation actually exists and says what the AI claims. Some firms treat AI output akin to a junior associate’s draft: it must be carefully edited and validated by a senior attorney before it’s filed. In practical terms, lawyers might run AI-suggested case law through Westlaw/Lexis to confirm validity, use redlines to compare AI text against trusted treatises, and generally never “copy-paste-submit” AI output without independent verification. By making this a habit, the eventual required certification that “a human checked the AI’s work” is genuinely earned.
  • Training and Awareness: Law firms also recognize that compliance is only as good as the awareness of their attorneys. Thus, many have rolled out trainings on AI tool use and disclosure obligations. These trainings cover the ethical duties (competence, confidentiality, candor) and highlight the new court rules that demand disclosure. Attorneys are taught, for example, that if they use generative AI, they must be prepared to tell the court and the client in many instances. Firms emphasize that failing to disclose when required could lead to sanctions or at least embarrassment (no one wants a judge to discover AI involvement that wasn’t reported). In AI-centric firms, this training is often part of onboarding to new AI tools: before a lawyer gets access to the shiny new chatbot, they must agree to the firm’s policies, which include compliance with all disclosure rules and proper use guidelines.
  • Client Communication: As a parallel best practice, firms also consider the client’s perspective. Some court rules don’t directly involve the client, but ethically it’s wise to inform clients if AI was used in their case filings (especially since it will be disclosed to the court). Firms are developing client disclosure policies whereby the engagement letter or a case update will note the use of AI tools in handling the matter, the benefits they bring, and the safeguards in place. While this goes beyond what courts require, it fosters transparency and trust. The Pennsylvania Bar, for instance, has advised that lawyers should inform clients about use of AI tools and explain their capabilities and limits. Such communication dovetails with the formal court disclosures and ensures the client isn’t caught off-guard reading an AI certificate attached to their brief.
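As a sketch of the workflow flag described above, the following Python snippet shows how a pre-filing check might compare a draft's logged AI usage against its disclosure language. Everything here – the data model, the marker phrases, the court field – is a simplified assumption; a real implementation would hook into the firm's document management system and a maintained per-jurisdiction rule table:

from dataclasses import dataclass, field

@dataclass
class FilingDraft:
    court: str    # e.g. "N.D. Tex. (Judge Starr)" - hypothetical label
    text: str     # full text of the document to be filed
    ai_usage_log: list[str] = field(default_factory=list)  # e.g. "Used ____ platform for Section II"

# Phrases whose presence suggests a disclosure/certification was included.
DISCLOSURE_MARKERS = ("certification of ai", "generative ai", "ai tool")

def prefiling_warnings(draft: FilingDraft, disclosure_required: bool) -> list[str]:
    """Return human-readable warnings; an empty list means the checklist passes."""
    warnings: list[str] = []
    ai_was_used = bool(draft.ai_usage_log)
    has_disclosure = any(m in draft.text.lower() for m in DISCLOSURE_MARKERS)
    if ai_was_used and disclosure_required and not has_disclosure:
        warnings.append(f"{draft.court}: AI usage logged but no disclosure language found.")
    if ai_was_used:
        warnings.append("Verify every AI-derived citation against Westlaw/Lexis before filing.")
    return warnings

A flag like this is a safety net, not a substitute for the human review protocols described above; it simply makes the “did we disclose?” question impossible to skip.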
Conclusion

The landscape of AI disclosure in legal filings is rapidly evolving on a state-by-state (and court-by-court) basis. We’ve seen federal districts like Texas, North Carolina, New Jersey, Illinois, and others impose new requirements that attorneys declare their AI use and stand behind the results. Even where explicit disclosure isn’t mandated, the clear trend is toward expecting lawyers to exercise caution, thoroughly vet AI contributions, and never shirk responsibility for content. For practitioners, this means treating generative AI as a helpful assistant – but one whose work must be transparently disclosed and rigorously reviewed whenever required.

Law firms that embrace AI are wisely adopting compliance measures now. By building disclosure text and verification steps into their drafting process, they ensure they can reap AI’s efficiency benefits without running afoul of new rules. From using standard certification templates to training attorneys on local AI requirements, these best practices minimize risk and demonstrate professionalism. An attorney filing a brief today must consider: Does my jurisdiction require an AI disclosure? Have I double-checked every AI-derived citation? If in doubt, the prudent course is to err on the side of disclosure and verification, since no judge will fault counsel for being too careful with accuracy.

Ultimately, the message from courts is not to ban innovation, but to channel it responsibly. As one court noted, technology can be welcomed “so long as the use is responsible.” By staying informed of jurisdiction-specific rules and integrating compliance into their workflows, lawyers can responsibly harness AI’s power – using it to enhance legal practice while upholding their duties of candor and competence. The emerging mosaic of AI disclosure regulations is becoming part of the modern lawyer’s playbook. Adhering to these rules through diligent best practices will be essential to practicing law in the age of AI, ensuring that justice is augmented – not obstructed – by artificial intelligence.
