Menace Of Citing AI-Generated Fake Judgments Rampant Not Just In India, Across World : Supreme Court
The Supreme Court has recently flagged that the practice of citing AI‑generated, non‑existent judgments is a serious “menace” now rampant not only in Indian courts but across the world. The remark came in a special‑leave petition before a bench of Justice Rajesh Bindal and Justice Vijay Bishnoi, where the top court was asked to expunge observations made by the Bombay High Court against a director who had cited a fake judgment allegedly produced by artificial intelligence.
What the Court observed
- The Court noted that the tendency to rely on fabricated or hallucinated AI‑generated case law is “rampant in all Courts now, not only in India rather throughout the world” and warned that everyone—judges, advocates, and litigants—must remain extremely careful.
- While granting indulgence by allowing the Bombay High Court’s adverse remarks to be expunged in that particular case, the bench made it clear that it is “already seized of” the larger issue of AI‑created fake precedents on the judicial side, indicating that the Court intends a more structured review.
Broader context: AI‑generated fake judgments
- In an earlier connected proceeding (an Andhra Pradesh‑origin case), the Supreme Court had already described a trial court’s use of AI‑generated non‑existent judgments as an “institutional concern” affecting the integrity of the adjudicatory process, and warned that reliance on such fake authorities would amount to judicial misconduct, not a mere error.
- The Court emphasized that AI‑produced information must be verified through traditional, authentic sources and cannot replace human‑judgment‑based scrutiny of precedents, lest it erode public trust and waste judicial time.
In practical terms, this line of jurisprudence signals that:
- Advocates must authenticate every judgment cited (especially those surfaced via AI tools) through official reporters or authenticated databases.
- Judges and law firms are expected to frame internal protocols for using AI in legal research, including mandatory cross‑checking of citations against trusted sources.
