The Madras High Court recently reviewed a demonstration of Superlaw Courts, an AI-based assistance tool, and approved its use in an ongoing arbitration case. The court issued a common order directing the circulation of a notice explaining how the tool's algorithm works. The tool aims to help legal professionals “locate, organise, and understand” case-related information, but, as per the order, it is not supposed to make any judicial decisions.
In its order, the court noted that it is “satisfied with the working method of the algorithm.” As part of this pilot phase, the court asked the parties in the case to use the AI tool to identify the specific issues. Chennai Metro Rail Limited, Bank of Baroda, ICICI Bank, and various other parties to this arbitration case agreed to submit their feedback after working with the tool for a week.
To better understand the order, MediaNama spoke to legal professionals, who said the court is not seeking explainability within the AI algorithm. Instead, they explained, the court wants a transparent system that maintains a logbook of all interactions, so users can see the extent to which “assistance from the algorithm was sought”.
What the AI tool can and cannot do:
Superlaw Courts is a computer-assisted system that the Madras High Court described as a record-management assistant, limited to the documents filed in a case, unlike general-purpose chatbots such as ChatGPT or Gemini. After the demonstration, the HC noted what the tool can do and what it must not do. (A schematic code sketch of this record-bounded pattern, with invented names, follows the two lists below.)
What the tool can do or is supposed to do:
- Create a sealed digital workspace for each matter.
- Convert scanned files into searchable text using OCR (Optical Character Recognition).
- Organise records by grouping related material and flagging duplicates.
- Break documents into meaningful sections and build an index of names, dates and subjects.
- Run targeted searches and return relevant, traceable excerpts.
- Summarise retrieved material in plain language.
- Prepare a draft factual order covering pleadings, evidence, arguments and tribunal findings.
- Provide an audit trail of all interactions with the algorithm.
- Say when an answer is not supported by the record.
What the tool can’t and shouldn’t do:
- Shouldn’t use external sources, internet material or general knowledge.
- Shouldn’t perform legal reasoning or substitute for judicial judgment.
- Shouldn’t draw inferences, assess credibility or offer legal views.
- Shouldn’t generate content beyond what the record contains.
- Shouldn’t be relied on without independent verification by counsel and the court.
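The order does not disclose how Superlaw Courts is built, so the following is only a minimal sketch, in Python, of the record-bounded pattern the two lists describe; every class, field, and identifier here (Excerpt, CaseWorkspace, the exhibit name) is hypothetical. The point it illustrates: searches stay inside the filed record, every result stays traceable to a source document, every interaction is logged, and an unsupported query is refused rather than answered from general knowledge.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Excerpt:
    document_id: str  # identifier of a filed document, e.g. "Exhibit P-4" (hypothetical)
    page: int
    text: str

@dataclass
class CaseWorkspace:
    """A sealed per-matter workspace: searches never leave the filed record."""
    matter_id: str
    record: list[Excerpt] = field(default_factory=list)
    audit_log: list[dict] = field(default_factory=list)

    def search(self, user: str, query: str) -> list[Excerpt]:
        # Consult only the filed record: no internet, no general knowledge.
        hits = [ex for ex in self.record if query.lower() in ex.text.lower()]
        # Log every interaction, per the order's audit-trail requirement.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "query": query,
            "sources": [(ex.document_id, ex.page) for ex in hits],
        })
        if not hits:
            # "Say when an answer is not supported by the record."
            raise LookupError("No support for this query in the filed record.")
        return hits
```

Refusing to answer, instead of falling back on outside material, is what separates this pattern from a general-purpose chatbot; the actual tool's internals remain undisclosed.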
Ability to Audit Interactions with the AI Algorithm
While clarifying that “It is not intended to replace legal reasoning, judicial determination, or counsel’s professional judgement,” the court outlined how the AI tool will work, or rather, how it is supposed to work. The court also noted that, to increase transparency, whenever there is an interaction with the algorithm, both sides of the case will receive a separate link that can help “ascertain the level of interaction that has taken place with the algorithm.” This raises the question of whether the court is seeking AI explainability, a point clarified below.
“In order to bring more transparency, whatever interactions take place with the algorithm on the side of the counsel appearing on either side, as well as the Court, a separate link will be provided, and anyone who wants to ascertain the level of interaction that has taken place with the algorithm can click the link and verify the same. This step will provide more transparency and will also create a comfort level for any person who reads the order to understand the extent to which assistance from the algorithm was sought while deciding the case.” – Madras High Court Order
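The order describes what the link is for but not how it works. Purely as an illustration, with an invented URL scheme and helper names, a read-only view over an append-only interaction log might be shared like this:

```python
import hashlib
import json

def log_digest(audit_log: list[dict]) -> str:
    # Deterministic fingerprint of the log, so a reader can detect
    # whether any recorded interaction was later altered.
    canonical = json.dumps(audit_log, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def read_only_link(matter_id: str, audit_log: list[dict]) -> str:
    # Anyone holding the link can view, but not alter, the log of
    # interactions, matching the order's "click the link and verify" language.
    return f"https://example.invalid/audit/{matter_id}?digest={log_digest(audit_log)}"
```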
Is the court asking for the explainability of AI-based algorithms?
In short, no. To begin with, explainability is the extent to which an AI company can explain, in human terms, how its model makes decisions. This matters because AI systems increasingly make decisions that directly affect people.
During one of MediaNama’s discussions, for instance, speakers explained why explainability is important. A few of the reasons they cited:
- Users cannot see what is “going on” inside the model, so they cannot tell where a particular decision comes from.
- As a result, bias in decisions can go unchecked.
- Accountability becomes difficult because companies may not be able to justify their outcomes to regulators or courts.
- Consumers facing power imbalances may have no choice but to accept opaque decisions.
Explainability matters not just to consumers but also for regulatory and legal scrutiny. If someone challenges an AI-driven decision, institutions must explain the model, its output, and its impact, and support that explanation with a clear audit trail. This is difficult with LLM-based AI systems, especially when the datasets or algorithms are not open.
In the legal space, too, this argument for logical traceability, for knowing where a decision comes from, matters. But, “They’re not going to ask it to decide anything; it is going to use it to find data or relevant information,” said Rahul Narayan, arbitration specialist and partner at Chandhiok & Mahajan, referring to the court’s order.
“No, this is not explainability. The order says that both parties will receive a separate link, and anyone who wants to ascertain the level of interaction that has taken place with the algorithm, not within the algorithm, can click to verify it,” clarified Dhruv Garg, Partner at the Indian Governance and Policy Project (IGAP).
How Can This AI Tool Be Used in Reality?
“From a perusal of the order, it appears the judge may ask a question such as, ‘Where do I find this finding in the trial court order, or where do I find this clause in the agreement?’ That is the level of assistance this AI indicates. Therefore, any question asked will be shown to both parties, so they can be assured that none of the decisions have been made by AI. The judges are still the ones making the decision, and they know exactly what the AI has been used for,” Narayan further explained.
Agreeing on the merits of the tool, Garg said, “What I like about it is that they are using a tool which stops just before the judicial mind, per se, has to be applied on the merits of the case.”
Garg also questioned whether the ability to cross-check interactions with AI algorithms could be temporary. “Whether they keep doing it in the future or not is not clear from the order,” he said.
What’s Next?
- The court will hear these cases for the final hearing on February 12, 2026, at 2:15 p.m.
- The petitioner’s arguments will finish by February 13, 2026.
- The court will list the matter for final arguments on February 17 and 18, 2026, at 2:15 p.m.