AI in the justice system must promote legitimacy, not just productivity


Legal processes were designed not to be fast, but to be fair and thorough. Those now deploying new technology must understand and respect that, according to Cathal McCarthy of Kore.ai.

Artificial intelligence is entering courtrooms around the world. The technology promises faster documentation, quicker legal research and more efficient case management. But when AI arrives in the justice system, the question should not simply be how much productivity it can deliver, but whether it strengthens the legitimacy of the system itself.

Unlike most enterprise environments, courts operate in a domain where the cost of error is not operational or financial; it is human. Legal decisions determine liberty, reputation and livelihoods. That makes justice fundamentally different from the typical AI deployment scenario.

Understanding that difference is becoming increasingly important as governments begin introducing AI into judicial systems.

In the UK, the justice system is under mounting pressure. England and Wales currently face a Crown Court backlog exceeding 79,000 cases, leaving many people waiting years for resolution.

To address this strain, the government has begun introducing AI across courts and tribunals. These tools are already assisting with transcription, case summarisation and administrative case progression in magistrates’ courts. AI-assisted transcription alone has reportedly recorded more than 150,000 probation meetings, saving around 25,000 staff hours.

The logic behind these deployments is straightforward: if AI can reduce administrative burdens, courts can move cases through the system faster.

But treating justice primarily as a productivity challenge risks missing the deeper issue.

Courts were never designed primarily for speed or efficiency. They were designed to provide legitimate resolution of conflict. People accept unfavourable verdicts not because they agree with them, but because they believe the process was fair. That belief is the foundation of the entire system.

The productivity paradox
AI has the potential to make legal professionals significantly more productive. Research can be completed faster, document review costs can fall dramatically, and preparing legal filings becomes easier.

At first glance, this seems like an obvious improvement.

However, economic history suggests that efficiency gains do not always produce the outcomes we expect. Transport economists have long observed that building faster roads rarely eliminates congestion. Instead, it creates induced demand – more people begin driving because the process has become easier.

The same dynamic could emerge in legal systems if we’re not careful.

If AI dramatically reduces the cost and complexity of bringing a case, the courts may not simply clear existing backlogs. They may also see a significant increase in litigation.

Some of this would represent genuine progress.

Lower costs could allow legitimate claims to proceed that previously went unheard because pursuing them was too expensive. But reduced friction could also shift disputes into the legal system that might previously have been resolved through negotiation or social mediation.

In that scenario, courts risk becoming the default venue for everyday conflicts, rather than the appropriate venue for serious legal disputes.

This is why AI governance matters. Not every increase in system activity necessarily improves justice.

The legal system has always contained a degree of friction: cost, complexity and time. While often criticised, that friction also acts as an informal filter. It discourages disputes that are better resolved through negotiation, mediation or social compromise.

When AI removes that friction entirely, the system may not simply become more accessible; it may also become the easiest place to send conflicts that once resolved themselves elsewhere.

But there are examples of AI being deployed in judicial systems with a different objective.

India’s courts face a far larger challenge than the UK, with tens of millions of pending cases. Yet one of the country’s biggest barriers to justice has historically been accessibility, rather than throughput.

India has 22 official languages and hundreds of dialects and, for many citizens, courtroom proceedings have taken place in languages they cannot fully understand.

To address this, India’s National Informatics Centre worked with tech firm Nvidia to build AI-powered transcription and translation tools capable of operating across multiple Indic languages. These systems allow participants to follow court proceedings in real time in their own language.

At the same time, judges can use AI-assisted research tools that quickly surface relevant precedents from decades of case law. Here, AI is not primarily increasing speed; it is removing barriers that previously prevented people from participating in the justice system at all.

This highlights an important distinction. AI can expand corrective access, restoring justice for people who were structurally excluded. But it can also create induced access, increasing litigation volume without necessarily improving fairness or legitimacy.

Responsible deployment requires us to recognise the difference.

The apex domain
Most AI deployments are evaluated through familiar metrics: efficiency, cost reduction and productivity.

That framework works well in many industries. For example, errors in customer service automation or product recommendations can be corrected through iteration.

But justice systems operate under a very different risk profile.

Mistakes in legal processes can have profound consequences. A flawed interpretation of precedent, an inaccurate case summary, or excessive reliance on automated analysis could undermine decisions that affect people’s lives.

Because of this, courts represent what could be considered an apex domain for AI governance – an environment where technology must augment human judgement rather than quietly substitute for it.

Technology has always entered institutions eventually. Courts have adapted to everything from the printing press to digital record systems. AI will inevitably become part of judicial infrastructure as well.

The question is not whether AI will be used in the justice system. It is whether it will be deployed in ways that strengthen the legitimacy of the courts, rather than erode it.

For organisations adopting advanced AI, whether in government, healthcare or finance, the lesson is broader. In high-consequence domains, the objective cannot simply be automation.

It must be augmentation, with systems designed to enhance human judgement, preserve accountability and maintain trust.

Justice, after all, was never designed to be fast. It was designed to be legitimate.

Cathal McCarthy is chief strategy officer of Kore.ai
