Constitutional Law
Supreme Court Flags the Risks of Using AI for Drafting Petitions
18-Feb-2026
Source: The Hindu
Introduction
Recently, a Bench comprising Chief Justice of India Surya Kant, Justice B.V. Nagarathna, and Justice Joymalya Bagchi flagged serious concerns about the indiscriminate use of Generative Artificial Intelligence (GenAI) in legal work. The court's observations came while hearing a petition filed by Kartikeya Rawal, which highlighted the dangers of AI hallucinations, instances where AI systems generate fabricated or entirely fictitious information, leading lawyers to cite non-existent cases and judicial precedents in their petitions.
What Prompted the Court's Concerns?
- The Supreme Court's intervention follows a growing pattern of petitions citing fabricated legal authorities.
- In December 2025, the Chief Justice had already noted the court's awareness of risks stemming from the indiscriminate use of GenAI in legal work, making clear that the judiciary did not wish for artificial intelligence to overshadow or compromise the justice administration process.
- The immediate trigger was the Bench's encounter with petitions that quoted from non-existent portions of actual judgments. A particularly striking instance recalled by Justice Nagarathna involved a lawyer who, while arguing before her Bench, cited a completely non-existent case titled Mercy v. Mankind.
- Additionally, the Bench referenced a separate case before an apex court Bench headed by Justice Dipankar Datta, where non-existent judicial precedents were similarly cited.
What Did the Court Say?
- The Bench was unambiguous in its disapproval, describing the trend as "alarming" and declaring that the use of AI for legal drafting was "absolutely uncalled for."
- The court drew a clear distinction between convenience and accuracy, pointing out that the ease of conducting legal research through AI tools must never come at the cost of factual and legal precision.
- The court also reiterated that the judiciary had no desire for artificial intelligence to overpower the justice administration process, a concern that underscores the institutional stakes when technology intersects with the rule of law.
What is AI Hallucination and Why Does it Matter in Law?
- AI hallucination refers to the phenomenon where large language models and generative AI systems produce outputs that are plausible-sounding but factually incorrect, fabricated, or entirely invented.
- In ordinary contexts, such errors may be inconvenient. In legal practice, however, the consequences are far graver.
- When a lawyer submits a petition citing a non-existent judgment, it misleads the court, wastes judicial time, and potentially prejudices the outcome of a case.
- It also raises questions of professional misconduct and ethical accountability. The legal profession's foundational duty of candour toward the tribunal is directly imperilled when AI-generated hallucinations find their way into court filings unchecked; a rudimentary safeguard is sketched below.
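To make that safeguard concrete, here is a minimal, hypothetical sketch in Python of the kind of pre-filing check a chamber could run: extract case-name patterns from a draft and flag any that cannot be confirmed against a verified source. The VERIFIED_CITATIONS set, the flag_unverified_citations function, and the crude regex are all illustrative assumptions, not a description of any real court database or filing tool; a genuine check would query an authoritative law reporter and would still end with human review.

```python
import re

# Illustrative stand-in for an authoritative citation database; a real check
# would query a recognised law reporter or the court's own records.
VERIFIED_CITATIONS = {
    "Maneka Gandhi v. Union of India",
    "Kesavananda Bharati v. State of Kerala",
}

# Crude "Party v. Party" pattern; real Indian citation formats are far more varied.
CASE_NAME_PATTERN = re.compile(
    r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\s+v\.\s+[A-Z][a-z]+(?:\s(?:of\s)?[A-Z][a-z]+)*"
)

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return case names in the draft that cannot be confirmed against the database.

    A flagged name is not proof of fabrication, and an unflagged draft is not
    proof of accuracy: the final verification duty stays with the lawyer.
    """
    candidates = {m.group().strip() for m in CASE_NAME_PATTERN.finditer(draft_text)}
    return sorted(name for name in candidates if name not in VERIFIED_CITATIONS)

if __name__ == "__main__":
    draft = (
        "As held in Maneka Gandhi v. Union of India and affirmed in "
        "Mercy v. Mankind, the petitioner submits that the order is void."
    )
    for name in flag_unverified_citations(draft):
        print(f"UNVERIFIED CITATION: {name}")  # flags the fabricated 'Mercy v. Mankind'
```

Even a filter this crude would have caught Mercy v. Mankind, the fabricated citation Justice Nagarathna recalled. The point is not the tooling but the workflow: nothing AI-drafted should reach a court without an independent check against an authoritative source.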
What are the Broader Implications?
- The Supreme Court's observations signal a critical moment for the regulation of AI in professional legal practice in India. Three concerns stand out.
- On the question of professional responsibility, bar councils and legal professional bodies may need to issue clearer guidelines mandating verification of AI-generated research before submission to courts.
- On the question of institutional integrity, the credibility of the judicial process depends on the accuracy of the legal materials placed before courts — fabricated citations erode this foundation.
- On the question of AI governance, this development adds to a broader global conversation about the need for domain-specific guardrails when deploying AI in high-stakes fields such as law, medicine, and public administration.
Conclusion
The Supreme Court's stern caution against the unreflective use of AI in legal drafting is both timely and necessary. While AI tools offer genuine utility in legal research, case management, and document review, their deployment without adequate human oversight poses serious risks to judicial integrity. The court's intervention underscores that technological convenience cannot be a substitute for professional diligence. Lawyers bear an irreducible duty to verify the accuracy of every legal authority they place before a court — a duty that no AI tool can discharge on their behalf.
