Introduction
The Indian Judiciary is undergoing a profound structural transformation as it embraces a technologically advanced, AI-driven infrastructure. As AI tools increasingly assist in functions ranging from real-time transcription to comprehensive case summarization, their propensity for “hallucinations” and the opacity of algorithmic decision-making raise troubling concerns about accountability. This article argues that auditing AI is an urgent necessity: an essential safeguard protecting the citizens of India from the perils of digital inaccuracy. Drawing on examples such as the landmark case of Washington vs. Puloka and the alarming fake-precedent scandal, it examines how India’s judicial framework is adopting AI in its pursuit of expedited trials, and underscores the imperative of thoroughly auditing AI-assisted evidence produced in court, ensuring a robust system of checks and balances.
Artificial Intelligence was introduced to make human life simpler, yet few anticipated that human affairs would one day depend on AI’s decisions. AI refers to systems capable of performing functions that ordinarily require human intelligence, such as reasoning, language comprehension, and decision-making. Given a prompt, its algorithms generate a well-structured response; as the underlying technology advances, AI is widely regarded as a tool of considerable potential.
Today, even the courts find themselves on the edge of a new technological era. The features that make AI dependable and cogent, chiefly its speed and general accuracy, also introduce risks that must be managed carefully. AI assists in legal research, drafting, and the preparation of reports and summaries, freeing the judicial system for more complex deliberations. AI-enabled platforms have enhanced transcription, translation, and document management, improving courtroom efficiency. As a result, these developments have facilitated a structural transition from paper-based archives to a data-centric, technology-enabled judicial infrastructure.
Let us understand with an example what happens when we depend entirely on AI-driven decisions.
Consider Rumi, a 65-year-old woman who lives in a small village and relies on a government Ration Card to get wheat and rice every month. One month, the ration dealer turned her away, saying her card had been deactivated and she was no longer eligible. Rumi, who had visited the ration shop every month, later discovered that the government, with the help of AI, had purged outdated databases and fake ration cards, and that the system had flagged her genuine card as well. With no human review of the decision, she was left without recourse.
Need for auditability in AI decisions
The implementation of AI in judicial systems across various jurisdictions, for purposes ranging from case management to risk assessment, has provided clear benefits. However, persistent risks remain: concerns over confidentiality and partiality threaten the integrity of the system. This raises the question of why auditability is a non-negotiable requirement for AI in the legal system:
(a) Excessive dependence on AI output can diminish human judgment
Suggestions and outputs generated by AI may be hallucinated or simply incorrect. Tools like generative AI (GenAI) are built on Large Language Models (LLMs), which can produce output that seems correct but is “hallucinated” content. This generally happens because LLMs do not understand law semantically; they predict the next word based on correlations in vast training data (massive datasets drawn from trillions of words across books, websites, and articles). GenAI also operates opaquely: the user enters an input and receives an output, with no easy way to see how the answer was generated.
This scenario can be easily understood with an example. Steven Schwartz, a lawyer, was representing a client who was injured on a flight. To save time, he used ChatGPT to find precedents that would help him in the case. ChatGPT confidently provided six such cases, complete with names like “ABC vs UV Airlines”. The lawyer submitted these cases to the judge, but when the opposing lawyers searched law books and several websites for the same cases, they found none: the cases did not exist. The judge became furious and fined him $5,000.
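The Schwartz episode points to a simple audit step: before anything is filed, every AI-suggested citation should be checked against an authoritative database, and anything unverifiable routed to a human. The sketch below illustrates the idea; the lookup set and case names are hypothetical stand-ins for a real legal database.

```python
# Hypothetical audit step: verify AI-suggested citations against a
# trusted source of real cases before they reach a court filing.
KNOWN_CASES = {  # stand-in for a query to an actual legal database
    "Mata v. Avianca, Inc.",
    "Donoghue v. Stevenson",
}

def audit_citations(ai_citations):
    """Split AI-suggested citations into verified and unverifiable."""
    verified = [c for c in ai_citations if c in KNOWN_CASES]
    flagged = [c for c in ai_citations if c not in KNOWN_CASES]
    return verified, flagged

verified, flagged = audit_citations(
    ["Donoghue v. Stevenson", "ABC vs UV Airlines"]  # second one is fake
)
print("Verified:", verified)
print("Needs human review:", flagged)  # the hallucinated citation is caught
```

The point is not the lookup itself but the workflow: the AI output never reaches the court record until a deterministic check, and then a human, has signed off on it.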
(b) Infringement of Intellectual Property
The rise of AI is also making creators insecure about their work. In 2023, The New York Times sued OpenAI (the maker of ChatGPT) and Microsoft, alleging that the companies had copied vast numbers of articles protected by copyright law, without permission, and used them to train their AI systems, in effect teaching the AI to write like a real human. The case raises questions about who counts as a “creator”, what is an “original work”, and “how to protect intellectual property”. Intellectual property is protected precisely because of its uniqueness, granting the creator the benefit of using their own creative work for commercial purposes.
This situation can be understood well through a recent trend of converting a simple photo from one’s gallery into “Ghibli-style” art. AI does this in minutes, while the art form it imitates took the studio’s artists years to gain recognition through anime and films. The only reason the AI can do so is that it was trained on thousands of frames from films like “Grave of the Fireflies” and “The Wind Rises”, whose characters are drawn in the Ghibli style. The alleged infringement lies in the fact that Studio Ghibli never gave the AI makers permission to copy its unique hand-painted art style. Now, a company that wants a “Ghibli-style” poster might simply use AI for 2,000 rupees rather than hiring a real artist for 20,000 rupees.
(c) Tampering of Evidence and Deepfakes
Photos and videos are considered “contemporary electronic evidence” in court. While electronic evidence is valuable, it comes with certain risks, one of which involves AI-modified images and videos, commonly known as deepfakes. Deepfakes refer to digitally manipulated media content, such as videos, images, and audio clips, that replace authentic visuals and sounds with synthetic ones created by AI.
A notable example of the rise in digitally altered content can be seen in Instagram memes, where AI is used to create false footage of prime ministers and presidents in provocative and inappropriate situations.
Courts traditionally authenticate evidence by requiring parties to prove its genuineness through extrinsic sources, but as deepfakes grow more sophisticated, even judges may mistakenly accept fabricated evidence.
In the case of Washington vs. Puloka, a man named Joshua Puloka was accused in a shooting. The entire incident was captured on a low-quality video, and an AI tool called “Topaz Labs” was used to enhance the video’s quality. The court, however, refused to admit the AI-enhanced video as evidence: the AI had not merely zoomed in and sharpened the footage but had made assumptions about what the shooter looked like, effectively generating a new image rather than restoring the original.
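A standard safeguard against undetected alteration, not from the Puloka case itself but a common digital-forensics practice, is to record a cryptographic hash of evidence at the moment of collection. Any later edit to the file, including an AI “enhancement”, changes the hash and is immediately detectable in an audit. A minimal sketch:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hash of an evidence file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hash recorded in the chain-of-custody log at collection time.
original = b"low-quality CCTV footage bytes"
recorded_hash = fingerprint(original)

# Later, an AI tool "enhances" the footage, altering its bytes.
enhanced = original + b" plus AI-interpolated pixels"

# The audit check: any modification breaks the hash match.
print(fingerprint(original) == recorded_hash)   # untouched file matches
print(fingerprint(enhanced) == recorded_hash)   # altered file does not
```

A hash proves only that the bytes changed, not how; but it forces any party presenting enhanced footage to disclose that enhancement occurred, which is exactly the transparency the Puloka court demanded.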
Use of AI in the Indian Judicial System
The Indian judicial system has an institutional history of operating cases manually, on pen and paper. This model had several limitations, such as delays arising from the physical movement of files and the deterioration of records. As pendency and backlog grew, the traditional workflow proved difficult to scale with the caseload. In response, the Ministry of Law and Justice, the e-Committee, and the Supreme Court initiated a long-term reform, developed in phases. In Phase I (2007–2015), software platforms were introduced and court complexes across the country were equipped with computers, LAN connectivity, and server rooms. Phase II (2015–2023) was characterized by rapid innovation: the Case Information System (CIS 3.0) enabled standardized data entry and established a uniform software structure that automated case tracking and information flow among judicial actors. Phase III (2023–present) adopts the principle of “maximum ease of justice” through an integrated technological platform, emphasizing the development of AI tools tailored specifically to the institutional needs of the Indian Judiciary. Such tools are:
1. SUPACE
Short for Supreme Court Portal for Assistance in Court Efficiency, this tool is designed to analyze vast case records, identify legally relevant material, generate case summaries, and organize documents in an accessible manner.
2. SUVAS
Stands for Supreme Court Vidhik Anuvaad Software, an AI-assisted translation tool that addresses the linguistic deficit by institutionalizing multilingual access to Supreme Court judgements. In 2023, SUVAS enabled the translation of approximately 36,000 Supreme Court judgements.
3. AI – based transcription
This tool is used in the Supreme Court to automatically capture and convert the oral arguments into real-time text displayed on screens.
4. E-Filing and AI
AI is used to scrutinize petitions, detect defects in filings, and flag incorrect formatting.
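Much of this scrutiny can be expressed as auditable, rule-based checks. The sketch below illustrates the idea; the field names and rules are hypothetical and do not reflect the actual e-Courts system.

```python
# Illustrative rule-based defect scanner for an e-filing.
# The fields and rules here are hypothetical, not the real e-Courts checks.
def scan_filing(filing: dict) -> list[str]:
    """Return a list of human-readable defects found in a filing."""
    defects = []
    if not filing.get("case_number"):
        defects.append("missing case number")
    if filing.get("format") != "PDF":
        defects.append("file must be a PDF")
    if filing.get("pages", 0) > 0 and not filing.get("signed"):
        defects.append("petition is unsigned")
    return defects

print(scan_filing({"case_number": "", "format": "DOCX", "pages": 12}))
# ['missing case number', 'file must be a PDF', 'petition is unsigned']
```

Because each rule is explicit, every rejection can be traced to a named check, which is precisely the kind of audit trail this article argues opaque AI systems lack.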
AI with Human Oversight in Courtrooms
The digital transformation of the Indian judiciary has been driven by tools like SUPACE and e-filing, which have automated much of the administrative work traditionally performed manually. However, even as AI systems develop self-auditing capabilities, the potential redundancy of legal professionals remains a matter of serious debate. The reliability of AI output is limited by the data used for training, making human oversight a necessity. While AI is an efficient tool for evidence gathering, the “human in the loop” remains vital because the art of oral advocacy and legal reasoning is beyond the reach of current technology. Within the Indian judiciary, “human in the loop” means that final responsibility and accountability for actions and outcomes in the judicial process rest with judges and lawyers. Because AI tools learn from pre-existing data, they may hallucinate judgments, citations, or quotes, refer to legislation that does not exist, and make factual errors.
Conclusion
Some experts speculate that AI will surpass humans in various fields, but does this include legal professionals? While AI is a powerful tool for managing cases and increasing access to justice, it cannot fully replace the unique capabilities of the human mind. The potential for AI to produce altered evidence presents a risk in the courtroom unless robust audit trails exist to verify its accuracy. Since machines lack empathy and ethical judgment, the legal system in India will always prioritize human deliberation and intervention. Although digital transformation offers significant potential for efficiency, AI must be deployed responsibly, with careful consideration of individuals’ fundamental rights.
- By Srishty Sen