
Managing AI Risk in Irish Litigation

The Irish Court of Appeal (Court) has, for the first time, issued general guidance on the use of Artificial Intelligence (AI) in the litigation process.

The judgment in Guerin v O’Doherty [2025] IECA 48 will be of particular interest to those engaged in litigation in Ireland, as it outlines the responsibilities of parties, whether legally represented or self‑represented, when using AI tools to prepare submissions.

Background

The guidance was issued in the context of an appeal where the defendant, a lay litigant, used an AI tool to draft her written submissions. Those submissions included references to non-existent case law, so‑called AI “hallucinations”.

The Court noted that hallucinations are a well‑known and inherent risk associated with AI-generated legal submissions. Crucially, the defendant had not verified whether the cited authorities were genuine, nor had she informed the opposing party or the Court that she had used AI to generate her submissions.

Counsel for the plaintiff confirmed that this caused unnecessary costs and delay, as the plaintiff's legal team was required to attempt to locate the non‑existent authorities.

Key Observations from the Court

The Court reminded parties, whether represented or appearing in person, of the obligation not to mislead the Court, even inadvertently. That includes an obligation not to advance submissions supported by “fake” or non‑existent authorities.

The Court also recognised that the inappropriate use of AI to generate legal submissions can place an unfair burden on the opposing party and undermine the efficient administration of justice.

General Principles Issued by the Court

To support responsible use of AI in litigation, the Court set out five principles of general application:

  1. AI may be used as a research aid, provided it is used responsibly and does not mislead the Court, whether intentionally or inadvertently.
  2. Parties must expressly inform the other side and the Court if they have used AI to assist in preparing submissions.
  3. Self‑represented parties are held to the same standard as legally represented parties in relation to the accuracy and integrity of their submissions.
  4. Any party using AI must independently verify all outputs, including legal propositions and cited authorities.
  5. No authority should be cited without the party first verifying that the judgment exists and that it supports the proposition for which it is relied upon.

Potential Sanctions

Importantly, the Court highlighted that it possesses a “variety of sanctions” to address improper use of AI that could mislead the Court. While the Court accepted that the defendant in this instance did not intend to mislead, parties should be aware that future breaches may attract consequences, particularly now that guidance exists.

Concluding Remarks

This is the first time an Irish court has formally addressed the use of AI in litigation. While AI is not prohibited in Irish litigation, its use must be transparent. The guidance is timely, given the increasing reliance on AI tools within legal practice and across industries. It is important to remember that AI can support and assist certain tasks, but responsibility for accuracy remains firmly with the human user.

The Court’s intervention reflects a wider judicial concern about the misuse of AI in litigation. Most recently, in Von Geitz v Kelly [2026] IECA 29, the Court was critical of unsupported AI-generated propositions that caused unnecessary work and wasted court time, reiterating that parties must not present misleading content in submissions. The Court’s message is clear: it will not tolerate submissions that rely on unverified AI outputs.

For further information or guidance on managing AI risk in litigation, please get in touch with a member of our Litigation & Investigations team.