In a twist that perfectly encapsulates the risks of artificial intelligence in the courtroom, a Stanford professor hired as an expert witness in a lawsuit over AI-generated deepfakes had his testimony thrown out on Jan. 10 after it emerged that AI itself had hallucinated citations in his court filing. The case, which challenges Minnesota’s ban on AI-generated election deepfakes, has now become a cautionary tale about the risks of overreliance on AI-generated information in legal proceedings, Reuters reports.
As I discussed in an earlier Forbes article, the legal system is grappling with the role of AI in expert testimony—but this case takes the debate to a new level. If even an AI expert fails to verify AI-generated content, how can courts trust AI-assisted evidence?
The Case: AI Errors Derail Expert Testimony
The case in question centers on a Minnesota law banning AI-generated deepfakes designed to influence elections. The state enlisted Jeffrey T. Hancock, a Stanford University professor specializing in communication and AI, to support its argument that such deepfakes pose a legitimate threat to democracy.
But Hancock admitted that he had used GPT-4o to help draft his expert filing. And that’s where things went wrong. The AI hallucinated, producing multiple entirely fabricated citations that appeared authoritative but referenced nonexistent sources—a well-documented flaw of generative AI.
Opposing attorneys representing Minnesota State Representative Mary Franson and YouTuber Christopher Kohls, who were challenging the deepfake law on First Amendment grounds, discovered the bogus citations and pointed them out to the judge. The judge struck Hancock’s testimony from the record, stating that the errors had shattered his credibility. The Minnesota Attorney General’s office asked to submit a revised filing, but the judge denied the motion.
This case exemplifies the core problem with AI-assisted legal analysis—while AI can accelerate research and streamline documentation, it cannot be blindly trusted.
The Irony: An AI Expert Taken Down by AI’s Flaws
It’s hard to overstate the irony here. A leading expert on AI, hired to warn about the dangers of AI-generated misinformation, was himself undone by AI-generated misinformation. His testimony was meant to highlight how AI deepfakes can mislead the public, yet his reliance on an AI tool without verification demonstrated how AI can mislead even trained professionals.
As I’ve written before, AI has a place in forensic and legal work, but it must be rigorously vetted. AI-assisted expert testimony carries real pitfalls: while AI can aid research, it cannot replace human judgment in verifying and validating critical evidence.
AI Hallucinations: How They Happen And Why They Matter In Court
As a digital forensic expert, I see firsthand how AI is changing the way we analyze evidence, but I also understand its limitations. AI hallucinations happen when a model like ChatGPT generates plausible but entirely fabricated information, often because:
- AI prioritizes coherence over accuracy – It generates responses that “sound right” rather than ensuring factual correctness, as the toy sketch after this list illustrates.
- Lack of real-world validation – AI models don’t “fact-check” against real legal databases unless specifically trained to do so.
- Echo chamber effect – AI sometimes reinforces existing patterns in data without distinguishing between verified facts and speculative content.
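To make that first point concrete, here is a deliberately crude toy sketch in Python—no language model involved, just pattern recombination. Every name, reporter, and number in it is invented for illustration. The point is that text assembled purely to “look right” can be well-formed and authoritative in appearance while referring to nothing at all, a rough analogy for what a hallucinating model does at far greater sophistication.

```python
import random

# Toy illustration only: no real language model, just pattern recombination.
# The party names and reporters below are placeholders invented for this sketch.
PARTIES = ["Smith", "Jones", "Doe", "Roe", "Acme Corp."]
REPORTERS = ["U.S.", "F.3d", "N.W.2d"]

def fabricate_citation(rng: random.Random) -> str:
    """Assemble a citation-shaped string that refers to no real case."""
    plaintiff, defendant = rng.sample(PARTIES, 2)
    volume = rng.randint(100, 999)
    page = rng.randint(1, 999)
    year = rng.randint(1990, 2024)
    return f"{plaintiff} v. {defendant}, {volume} {rng.choice(REPORTERS)} {page} ({year})"

if __name__ == "__main__":
    rng = random.Random(7)
    for _ in range(3):
        # Each line looks like a legitimate citation but is pure fabrication.
        print(fabricate_citation(rng))
```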
In legal cases, AI hallucinations can have serious consequences:
- False citations: As seen in this case, AI-generated legal citations may reference cases or statutes that don’t exist, misleading attorneys and courts (a minimal verification sketch follows this list).
- Fabricated forensic evidence: If forensic experts rely on AI-generated reports without validation, they risk introducing inaccurate evidence into courtrooms.
- Erosion of credibility: Once AI-generated errors are exposed, all expert testimony—even the verified portions—becomes suspect.
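The first failure mode, false citations, is also the easiest to guard against mechanically. The sketch below shows, in hypothetical form, the kind of check a filer could run before submission: it pulls citation-shaped strings out of a draft and flags any that have not been matched to a real source. The regular expression and the hand-maintained “verified sources” set are simplifying assumptions; a real workflow would use a proper citation parser and an authoritative legal database.

```python
import re

# Rough pattern for reporter-style citations such as "123 F.3d 456".
# A real filing would need a proper citation parser, not this heuristic.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.\s]{0,25}?\s+\d{1,4}\b")

def flag_unverified_citations(filing_text: str, verified_sources: set[str]) -> list[str]:
    """Return citation-like strings that cannot be matched to a verified source."""
    candidates = {c.strip() for c in CITATION_PATTERN.findall(filing_text)}
    return sorted(c for c in candidates if c not in verified_sources)

if __name__ == "__main__":
    draft = "See Smith v. Jones, 123 F.3d 456 (1999), and Doe v. Roe, 987 U.S. 654 (2021)."
    # Sources a human has actually retrieved and read before filing.
    verified = {"123 F.3d 456"}
    for citation in flag_unverified_citations(draft, verified):
        print(f"UNVERIFIED: {citation} -- confirm before filing")
```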
The Dangers: AI And Expert Testimony
The Minnesota case is not an isolated incident. Courts are seeing more AI-generated documents, expert reports, and filings that contain hallucinated legal references or misleading data analysis. In forensic science, AI is being used to analyze crime scene evidence, authenticate digital images, and even predict criminal behavior—but if experts fail to double-check AI’s findings, they could corrupt the entire legal process.
As a forensic expert, I stress this in every case: AI should assist, not replace, human judgment. We must:
- Verify every AI-generated claim – Every citation, forensic analysis or data point must be independently confirmed before submission to the court.
- Educate legal professionals on AI’s limitations – Judges, attorneys and forensic experts need AI literacy training to recognize potential errors.
- Demand transparency in AI-assisted evidence – Courts should consider requiring clear disclosure whenever AI is used in legal arguments or expert testimony, and in what capacity; one possible form such a disclosure could take is sketched below.
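On the transparency point, even a simple, structured disclosure would give courts something concrete to review. The sketch below is one hypothetical shape such a record could take; the field names and the completeness rule are my own assumptions, not an existing court rule or standard form.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical disclosure record for AI use in a filing or expert report.
# Field names are illustrative assumptions, not an established standard.
@dataclass
class AIUseDisclosure:
    filer: str                    # attorney or expert submitting the document
    tool: str                     # model or product name and version
    purpose: str                  # drafting, summarization, citation search, etc.
    content_affected: str         # which sections or exhibits the tool touched
    human_verified: bool          # were the outputs independently checked?
    verification_notes: str = ""  # how the checking was done, and by whom
    disclosed_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        """A disclosure is only meaningful if verification is documented."""
        return self.human_verified and bool(self.verification_notes.strip())

if __name__ == "__main__":
    disclosure = AIUseDisclosure(
        filer="Expert witness",
        tool="Large language model (drafting assistant)",
        purpose="Locating and summarizing supporting literature",
        content_affected="Citations in the expert declaration",
        human_verified=True,
        verification_notes="Each cited source retrieved and read before filing",
    )
    print("Disclosure complete:", disclosure.is_complete())
```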
Courts Must Be Wary of AI “Assistance”
This case should be a wake-up call for attorneys, expert witnesses and judges alike. AI can be a valuable tool, but it cannot be blindly trusted in legal proceedings. The risks of unverified AI-generated information are too great, and the consequences—whether wrongful convictions or thrown-out testimony—are too severe.
As AI continues to permeate forensic investigations and courtroom proceedings, legal professionals must take responsibility for ensuring accuracy. This means rigorous fact-checking, verification, and transparency—principles that are fundamental to both forensic science and the rule of law. If even an AI expert can be fooled by AI hallucinations, courts must tread carefully when integrating artificial intelligence into the justice system.