Picture this: a defendant standing before a judge, their fate shaped not just by legal arguments but by an algorithm crunching numbers in the background. Sounds like science fiction? It’s already happening in courtrooms around the world, including India.
India’s Justice Crisis
India’s justice system is drowning in cases. As of late 2025, over 5.3 crore (53 million) cases are pending across all court levels, with more than 85% stuck in district courts alone. Earlier NITI Aayog estimates suggested that, at prevailing disposal rates, it could take over 300 years to clear such a backlog without major reforms.
Enter artificial intelligence, the supposed savior. The government has rolled out e-Courts Project Phase III with a massive Rs. 7,210 crore budget, deploying AI tools for intelligent case scheduling and predictive analytics. AI Saransh, developed by NIC for the Supreme Court, generates concise summaries of pleadings and other legal documents. In a historic first, the Court used AI software from Bangalore-based startup TERES to transcribe live proceedings during a Constitution Bench hearing.
But here’s where it gets interesting (and a bit uncomfortable): AI isn’t just helping with paperwork anymore.
When Machines Predict Freedom
Globally, courts are experimenting with something far more consequential: AI tools that predict who should get bail and who might reoffend. In the United States, algorithms are increasingly used to set bail and determine sentences. The most notorious example? COMPAS, a risk assessment tool used in Eric Loomis’s case in 2013, where he was sentenced to six years based partly on an algorithmic prediction that he would commit more crimes.
The problem? A ProPublica investigation found that COMPAS falsely flagged Black defendants as future criminals at almost twice the rate of white defendants. Follow-up research showed that this commercial software was no more accurate or fair than predictions made by people with little criminal justice expertise.
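To make that finding concrete, here is a minimal sketch of the fairness check at the heart of ProPublica’s analysis: comparing false positive rates across groups, i.e., among people who did not go on to reoffend, how often did the tool still label them high risk? The toy data and column names below are invented for illustration and do not reflect the real COMPAS dataset.

```python
import pandas as pd

# Toy data, invented for illustration -- not the real COMPAS records.
# Each row: a defendant's group, the tool's risk flag, and the outcome.
df = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "high_risk":  [1, 1, 0, 0, 1, 0],   # tool flagged as likely to reoffend
    "reoffended": [0, 0, 1, 0, 0, 1],   # what actually happened afterwards
})

# False positive rate per group: among people who did NOT reoffend,
# what share did the tool still flag as high risk?
did_not_reoffend = df[df["reoffended"] == 0]
fpr = did_not_reoffend.groupby("race")["high_risk"].mean()
print(fpr)
# race
# Black    1.0
# White    0.5
# A persistent gap of this kind is the disparity ProPublica reported.
```

The same few lines scale to any audit where you have group labels, predictions, and outcomes. The hard part is deciding which error rates must be equal, because when base rates differ across groups, equalising all of them at once is mathematically impossible.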
Can Justice Be Algorithmic?
This raises a fundamental question: can justice ever truly be data-driven? The appeal is obvious. Humans are inconsistent, biased, and overwhelmed. A study of 758,027 New York City bail decisions by Kleinberg and co-authors found that judges released about 48.5% of the defendants their algorithm ranked in the riskiest 1%, and those released went on to fail to appear at around a 56% rate. An algorithm might catch what tired judges miss.
But algorithms are only as good as the data they learn from. If that data reflects years of biased policing and prosecution, the AI simply automates discrimination. A 2024 Tulane University study on AI-assisted sentencing found that using AI recommendations reduced jail time for low-risk offenders but that Black defendants were still less likely to receive alternative punishments and often received longer jail terms than similar white defendants.
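A tiny, fully synthetic simulation shows how this happens. Suppose two groups offend at exactly the same true rate, but one is policed twice as heavily, so its offences are twice as likely to end up on record; a model trained on those records learns the policing pattern, not the behaviour. Every number below is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with IDENTICAL true offending rates (30% each).
group = rng.integers(0, 2, n)          # 0 = over-policed group, 1 = other
offended = rng.random(n) < 0.30

# Unequal enforcement: group 0's offences are recorded twice as often.
detection = np.where(group == 0, 0.8, 0.4)
arrest_record = offended & (rng.random(n) < detection)  # the biased label

# Train on arrest records (the only "ground truth" the data offers).
model = LogisticRegression().fit(group.reshape(-1, 1), arrest_record)
risk = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group 0: {risk[0]:.2f}  group 1: {risk[1]:.2f}")
# ~0.24 vs ~0.12: equal behaviour, but the model scores one group
# as twice as risky -- the historical bias, now automated.
```

Nothing in this pipeline is malicious; the model is faithfully reproducing its labels. That is precisely why audits like the one sketched earlier matter before such scores touch bail or sentencing.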
India’s Measured Approach
To its credit, India seems to be taking a more careful path. The focus so far has been on translation, transcription, and case management rather than bail predictions or sentencing. SUVAS (Supreme Court Vidhik Anuvaad Software), an AI-based translation tool developed with MeitY and NIC, translates English judicial documents, orders, and judgments into multiple Indian languages and vice versa. Adalat AI has been integrated into over 2,000 courts across eight states, with some courts reporting 30-50% reductions in case timelines.
Researchers have begun building experimental AI-based bail prediction systems, but these remain decision-support or research tools rather than standard practice in Indian courts.
The Takeaway
AI in the courtroom isn’t inherently good or bad. It’s a tool, and like any tool, it can be used well or poorly. The real question isn’t whether algorithms can help judges, but whether we’re building systems that enhance fairness rather than just speeding up injustice.
India has a unique opportunity here. With the world’s largest case backlog and a tech-savvy population, it can pioneer AI systems that genuinely serve justice. But only if we remain vigilant about bias, insist on transparency, and remember that behind every case number is a human life.
Because justice delayed might be justice denied, but justice by biased algorithm? That’s just injustice with better spreadsheets.