When the law cites ghosts: AI and the crisis of fake precedent

In mid-2025, the UK High Court issued an unusual warning to the profession: lawyers who file briefs predicated on cases invented by generative AI may be held in contempt of court, and in serious cases could face allegations of perverting the course of justice. The warning followed instances in which lawyers submitted authorities that did not exist. They were entirely made up, with fabricated case names, dates, and quotations written in the style of judicial prose. Conduct of this kind damages the justice system: it corrodes the reliability and accuracy of the record on which judges and courts depend when deciding cases.

Courts operate on trusted authorities. Judges and their clerks check citations and rely on counsel to have done the same; verification is a core professional and ethical obligation of an advocate. When an advocate files a brief citing a case that does not exist, the court wastes time and money trying to trace meaning in an imagined judgment. It is not only opposing counsel who must respond: clerks must attempt to verify the citation, and the judge must correct the record. All of this costs time, money, and public trust in the authorities on which courts rest their merit-based decisions.

The problem lies in how generative AI works. Its output looks plausible because it carries a case name, language modelled on judicial prose, and convincing quotations. But a large language model generates content as a sequence of predicted words; at no point does it check whether a judgment exists in the law reports or legal databases. Generative AI systems are not research databases. They are predictive software, producing text with no determination of factual truth.
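The point can be made concrete with a toy sketch in Python. The two-word "model" below is invented purely for illustration, but the shape of the loop is the essential thing: each step samples the next token from a probability distribution, and nothing ever consults a database of real judgments.

```python
import random

# Toy next-token model: maps a two-word context to plausible continuations
# with probabilities. A real LLM learns billions of such statistics from text.
TOY_MODEL = {
    ("Smith", "v"): [("Jones", 0.6), ("Brown", 0.4)],
    ("v", "Jones"): [("[2019]", 0.7), ("[2021]", 0.3)],
    ("v", "Brown"): [("[2018]", 1.0)],
    ("Jones", "[2019]"): [("EWCA", 1.0)],
    ("Brown", "[2018]"): [("EWHC", 1.0)],
}

def generate(prompt, steps=3):
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-2:])
        choices = TOY_MODEL.get(context)
        if not choices:
            break
        words, weights = zip(*choices)
        # The only operation here is sampling by probability.
        # Nothing checks whether the resulting citation exists.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("Smith v"))  # e.g. "Smith v Jones [2019] EWCA"
```

A real model has vastly richer statistics, but the same absence: there is no verification step, so from inside the loop a genuine citation and a fabricated one look identical.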

The failure mode is simple. Lawyers use generative AI without verification, and the rigour the profession depends on is bypassed. The machine suggests a precedent; the lawyer does not check that the precedent exists; the fiction enters a filed brief. Once it is in the record it becomes everyone else's problem: judges and opposing counsel must spend their time untangling it. Worse, if a fabricated case sits unchallenged for long enough, later filings may cite it in turn.

Courts in other jurisdictions have already taken notice. In 2023, a US court sanctioned lawyers who had cited AI-fabricated cases in a brief (the widely reported Mata v. Avianca matter). Sanctions can range from a fine to referral to a regulatory body. The UK warning follows the same line: regulators and courts will show no tolerance for careless reliance on generative tools that produce fictitious law. A lawyer's basic professional duties of candour and competence require verification.

The consequences for lawyers and law firms are immediate. The first is reputational risk: association with fabricated authority undermines client trust and standing against competitors. The second is professional risk: reliance on a fictitious citation can lead to findings of professional misconduct, fines, or referral to regulators. The third is operational risk: at a minimum, firms will need a verification process covering citations, quotations, and the legal reasoning attributed to each authority, and they must budget for the time that verification takes.

There are consequences for access to justice too. If courts must spend more time verifying citations, proceedings slow, parties wait longer for resolution, and costs rise. A litigant with fewer resources feels those effects more harshly than a well-funded opponent. Left unchecked, the practice pushes the profession towards a future in which reliable advocacy is available only to litigants with plenty of resources.

Finally, there is systemic risk. Precedent forms a web. If a non-existent authority is cited, and someone else takes the citation at face value and treats it as authority, a chain of error begins. That web is costly to unwind: courts must amend their records, and editors must sometimes revisit published reasoning. The practice undermines confidence in the body of law and leaves an unsettling uncertainty about whether future disputes can be resolved fairly.

The prescription for firms is simple. Use generative AI as a drafting tool, not as a source of authority. Establish mandatory verification practices: any case suggested by AI must be checked against primary sources before filing. AI-assisted drafting should be supervised or reviewed by more senior lawyers. Firms should maintain checklists and audit trails that demonstrate how every authority, statutory or case law, was verified.
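A minimal sketch of what such an audit trail might record, assuming a hypothetical in-house index of verified authorities (any real workflow would query an actual law report or neutral citation service; the names here are invented for illustration):

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical local index of verified authorities. In practice this would
# be a query against recognised law reports or a neutral citation service.
VERIFIED_AUTHORITIES = {
    "[2023] UKSC 42",
    "[2019] EWCA Civ 1010",
}

# Rough shape of a UK neutral citation, e.g. "[2023] UKSC 42".
NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+(UKSC|UKPC|EWCA|EWHC)\b.*")

@dataclass
class VerificationRecord:
    citation: str
    verified: bool
    checked_by: str
    checked_at: str

def verify_citations(citations, checked_by):
    """Check each cited authority and produce an audit-trail entry."""
    records = []
    for citation in citations:
        well_formed = bool(NEUTRAL_CITATION.match(citation))
        found = citation in VERIFIED_AUTHORITIES
        records.append(VerificationRecord(
            citation=citation,
            verified=well_formed and found,
            checked_by=checked_by,
            checked_at=datetime.now(timezone.utc).isoformat(),
        ))
    return records

for record in verify_citations(
        ["[2023] UKSC 42", "[2024] EWCA Civ 999"], checked_by="A. Associate"):
    status = "OK" if record.verified else "NOT FOUND - do not file"
    print(f"{record.citation}: {status}")
```

The value is less the lookup itself than the record it leaves: who checked which citation, when, and with what result, so the firm can demonstrate diligence later.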

Vendors will also need to change. Legal research products should incorporate verification layers, even where verification is not the primary AI use case. Tools should flag when a purported judgment cannot be found in recognised law reports or neutral citation services, and should link directly to the original sources that support an output. Vendors should also provide logs of the prompts and searches used to generate each output; those logs will help firms demonstrate due diligence if something goes wrong.
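On the vendor side, the same idea might look like the following sketch: unverifiable citations are flagged rather than silently passed through, and the prompt and checks are logged. The citation index and log format are invented for illustration, not any real product's API.

```python
import json
from datetime import datetime, timezone

# Invented citation index for illustration; a real product would query
# recognised law reports or a neutral citation service.
CITATION_INDEX = {"[2023] UKSC 42": "https://example.org/uksc/2023/42"}

def answer_with_verification(prompt, draft_text, cited, log_path="ai_audit.log"):
    """Flag citations that cannot be found, link the rest, and log everything."""
    flags = []
    links = {}
    for citation in cited:
        url = CITATION_INDEX.get(citation)
        if url is None:
            flags.append(f"UNVERIFIED: {citation} not found in recognised reports")
        else:
            links[citation] = url
    # Append-only log of the interaction, usable as a due-diligence record.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "citations_checked": cited,
        "flags": flags,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return draft_text, flags, links
```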

Regulators and courts will need to update their guidance to lawyers. The existing duties of competence and candour already apply, but they need restating for a world in which AI can generate convincing fabrications. Regulators should clarify when a failure to verify before advising or filing is negligent or reckless. Courts should take the opportunity to state plainly that knowing or reckless reliance on fabricated authorities or statutes will attract significant penalties.

Finally, training must be part of the future orientation of law schools. Verification must be taught as a core skill in a world of AI-generated output: practical exercises could require students to confirm citations against official sources and reports. Verification, alongside basic AI hygiene, should be treated as baseline competence in pupillage and traineeship. Bar and professional qualification examinations should measure candidates' research skills, including their ability to verify machine-assisted output.

Clients need to be informed and protected. Engagement letters should expressly disclose where AI will be used in drafting documents, and engagement terms should address liability and indemnity for any consequences that flow from AI-assisted drafting. Insurers, in turn, will need to amend their professional indemnity policies to cover AI-related risks.

Court procedures can also reduce risk. Requiring neutral citations or links to published official reports in court documents would make fake cases much easier to identify, and would sit comfortably with existing rules on proper citation. Requiring counsel to certify that all cited authorities have been checked against primary sources would do the same. Courts could also impose proportionate sanctions on a party that fails to check its authorities against the law.

These steps will require cultural change as well. The profession values speed and polished documents, and AI enhances both. Lawyers will need the discipline to resist drafting too quickly and researching too little, especially when an AI tool produces the first draft. Wilful blindness to AI sloppiness will not be tolerated. Checking authorities moves from a quiet back-office habit to a front-line professional duty.

International practice adds complexity. A firm with lawyers in multiple jurisdictions must ensure its AI workflow meets the standard of each place in which it acts. A fake case in one jurisdiction can cause reputational fallout in another. Cross-jurisdictional work also demands that every lawyer on a matter use the same verification system.

Ultimately, this is about trust. The authority of the law and the legal record rests on correctness, accuracy, and truthfulness, and once lost, trust is very difficult to recover. Employed judiciously, AI can be a powerful aide and can even enhance due diligence; employed without verification checkpoints, it merely makes risk more efficient. The profession must determine how AI tools fit into compliant practice, or risk regulatory and court intervention.

Firms, vendors, regulators, and educators must decide together; verification cannot be left to chance. The integrity and accuracy of the legal record depend on it.
