It started like any other legal filing, until it didn't. When attorney Steven Feldman submitted documents to a New York federal court, something was conspicuously off. The prose was florid, packed with "out-of-left-field" references to ancient libraries and Ray Bradbury's Fahrenheit 451. But it was the citations that would ultimately doom his case. They were fake. All of them.

"The extraordinary sanctions were warranted after the attorney kept responding to requests to correct his filings with documents containing fake citations." — District Judge Katherine Polk Failla

A Filing Worthy of Termination

In a rare move on Thursday, District Judge Katherine Polk Failla issued an order terminating the case entirely. The decision came after Feldman repeatedly failed to correct his AI-generated filings, instead submitting new documents that contained the same fundamental problems: fabricated citations and conspicuously artificial prose.

One filing was particularly striking. Where Feldman's previous submissions contained the grammatical errors and run-on sentences typical of rushed legal work, this document was glaringly different: stylistically overwrought and unmistakably machine-generated.

The Ray Bradbury references should have been a red flag. Legal briefs rarely quote science fiction authors unless the work is directly relevant to the case at hand. The inclusion of Fahrenheit 451 passages, seemingly chosen for their poetic weight rather than their legal relevance, suggested something other than human creativity at work.

The Growing Problem of AI Hallucinations in Court

This case joins a growing list of AI-related sanctions in the legal profession. Over the past two years, attorneys across multiple jurisdictions have faced penalties for submitting AI-generated briefs containing hallucinated citations: references to court cases that simply don't exist.

The pattern is consistent: a lawyer, pressed for time or seeking efficiency, turns to AI for help drafting. The AI produces plausible-sounding legal arguments complete with citations. The lawyer fails to verify those citations. The court discovers the fabrication. Sanctions follow.

What makes this case notable is the severity of the response. While most AI-related sanctions have involved fines or mandatory continuing legal education, Judge Failla's decision to terminate the case entirely signals a potential shift in how courts view AI misuse: a zero-tolerance approach that treats fabricated citations as fundamentally incompatible with the practice of law.

"We've moved past the phase where AI misuse in legal filings is treated as a learning experience. Courts are now recognizing it as a fundamental breach of professional responsibility." — a legal ethics scholar

What This Means for Legal Tech

The legal technology industry has been racing to integrate AI into every aspect of practice, from document review to contract drafting to legal research. Companies promise efficiency gains of 50% or more. Venture capital has poured billions into legal AI startups.

But cases like this expose a critical tension. AI tools are powerful but unreliable. They can generate plausible text at scale, but they cannot distinguish real precedents from convincing fabrications. The burden of verification remains squarely on human attorneys, a burden that some are clearly failing to meet.
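Part of that verification burden can be automated before anyone signs a brief. The sketch below is a minimal illustration under stated assumptions, not any firm's actual workflow: citation extraction uses the open-source eyecite library from the Free Law Project, while audit_brief, the sample draft, and the verified_index set are hypothetical. In practice the index would be replaced by a query against an authoritative research database such as Westlaw, Lexis, or CourtListener.

```python
# Minimal sketch of a pre-filing citation audit (illustrative only).
# Assumes the open-source `eyecite` library (pip install eyecite) for
# extraction. `verified_index` is a hypothetical stand-in for an
# authoritative research database, which is where real verification
# must happen.
from eyecite import get_citations
from eyecite.models import FullCaseCitation


def audit_brief(text: str, verified_index: set[str]) -> list[str]:
    """Return the full case citations in `text` that are absent from
    the verified index and so need human review before filing."""
    unverified = []
    for cite in get_citations(text):
        if not isinstance(cite, FullCaseCitation):
            continue  # skip short-form, supra, and "id." citations
        span = cite.matched_text()  # e.g. "347 U.S. 483"
        if span not in verified_index:
            unverified.append(span)
    return unverified


if __name__ == "__main__":
    draft = (
        "See Brown v. Board of Education, 347 U.S. 483 (1954); "
        "accord Doe v. Roe, 999 F.3d 1234 (2d Cir. 2021)."
    )
    # Hypothetical index: only the first citation is known to exist.
    for c in audit_brief(draft, verified_index={"347 U.S. 483"}):
        print("UNVERIFIED - do not file without checking:", c)
```

The design point is the gate, not the tooling: any citation that cannot be resolved against a trusted source gets held back for human review before the document is filed.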
Professional responsibility rules have not yet caught up with AI capabilities. Most state bars are still debating whether specific AI disclosure requirements are necessary. This case suggests that courts may not wait for regulatory clarity before imposing their own standards through sanctions.

For law firms investing heavily in AI tools, the message is clear: verification is not optional. The time saved by using AI can be quickly erased, and then some, by sanctions, reputational damage, and lost cases.

The Road Ahead

Judge Failla's order will likely be cited in future AI-related sanctions cases. It establishes that repeated AI misuse, particularly misuse involving fake citations, can rise to the level of case-terminating misconduct.

The legal profession now faces a choice: develop robust verification protocols for AI-generated content, or risk similar outcomes. Some firms are already implementing mandatory citation-checking workflows for all AI-assisted work. Others are restricting AI use to tasks where hallucinations are less consequential.

What remains unchanged is the fundamental expectation that attorneys stand behind their filings. AI may be a tool, but the professional responsibility remains human.

This article was reported by the ArtificialDaily editorial team. For more information, visit Ars Technica.