A barrister’s use of artificial intelligence to generate fake legal authorities has prompted a self-report to the regulator and renewed urgent concerns about verifying AI-generated material in legal practice, highlighting the challenge of balancing access to justice with the integrity of court proceedings.
A barrister who acted as a lay advocate and later represented herself in a family welfare dispute has reported herself to the Bar Standards Board after including in a skeleton argument four cases that the court found did not exist and which she says were generated by an artificial intelligence tool. According to reporting by Legal Futures, Layla Parsons, an unregistered barrister who had been offering paid legal services to members of the public, withdrew the applications that relied on the spurious authorities.
Recorder Howard, sitting at Bournemouth Family Court, decided to name Ms Parsons in his ruling despite her having self-referred to the regulator and her objection that publicity would expose her to harassment. The judge said her self-reporting was “the responsible” course of action but concluded there remained a public interest in identifying her because of the risk she might again offer legal services.
The ruling records the judge’s concern that, notwithstanding her legal qualification, Ms Parsons “still does not really acknowledge or accept that her actions in not checking the citations and propositions she included in her skeleton argument were serious.” The judge treated her as a litigant in person for procedural purposes but emphasised that those who represent themselves are bound by the same duty not to mislead the court.
The decision also notes evidence that Ms Parsons had been available to purchasers of legal document packages from an unnamed website, reinforcing the judge’s assessment that there was “a real and not fanciful possibility that Ms Parsons will in the future offer legal services to members of the public”. That consideration, combined with what he described as her failure to grasp the seriousness of including unverified authorities, amounted in his view to “a strong and overwhelming factor in favour of naming Ms Parsons”.
Beyond the individual case, legal regulators and tribunals have issued warnings about the risk that AI tools will produce fabricated authorities if outputs are not checked against reliable legal databases. The Upper Tribunal (Immigration and Asylum Chamber) has previously reprimanded a barrister after a fictitious judgment generated by ChatGPT was relied on in submissions, and it has urged practitioners to verify every citation to avoid regulatory referral or worse. Industry training guidance increasingly stresses competence in verifying AI outputs and documenting AI-assisted work.
Recorder Howard said he had tried to limit publication of personal details to what was strictly necessary and rejected Ms Parsons’s argument that criticism of AI use risked discouraging disabled litigants in person from using assistive technologies. The case illustrates the tension courts face between protecting access to justice and ensuring the integrity of proceedings when litigants rely on novel tools. Local family law practitioners said such matters underline the importance of careful case management and verification.
The judge concluded that naming Ms Parsons was necessary and proportionate, finding that the public interest outweighed the potential privacy risks. The episode adds to mounting judicial and regulatory guidance that legal professionals and lay representatives must exercise due diligence when using AI, and that failure to verify authorities can carry professional and regulatory consequences.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The article was published on 30 March 2026, which is within the past seven days, indicating high freshness. No evidence of recycled or republished content was found.
Quotes check
Score: 8
Notes: The article includes direct quotes from Recorder Howard’s ruling. While the exact wording matches the source, the quotes cannot be independently verified due to the lack of access to the full court ruling. This raises concerns about the accuracy and authenticity of the quotes.
Source reliability
Score: 9
Notes: The article originates from Legal Futures, a reputable UK-based legal news outlet. However, the reliance on a single source for the quotes and details about Recorder Howard’s ruling introduces potential bias and limits the scope of verification.
Plausibility check
Score: 7
Notes: The narrative aligns with known issues regarding AI-generated ‘hallucinations’ in legal contexts, as reported in other jurisdictions. However, the lack of independent verification of the quotes and specific details about the court ruling raises questions about the full accuracy of the claims.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article presents a timely and plausible account of a barrister self-reporting to the Bar Standards Board after citing fictitious cases generated by AI. However, the inability to independently verify the direct quotes and specific details about Recorder Howard’s ruling, combined with the heavy reliance on a single source, introduces significant concerns about the accuracy and reliability of the content. These issues necessitate cautious consideration before publication.
