I’ve seen some noise lately about AI “revolutionising” government and the judiciary. The promise is always the same: faster processing, consistent decisions and sentencing, and an end to human fatigue. In the face of a multi-year backlog of cases, it must seem attractive. But while the technical foundations of AI have evolved at lightning speed, the moral and structural challenges of its application remain remarkably stagnant.

I’ve been revisiting Hannah Fry’s Hello World: How to Be Human in the Age of the Machine. Even though it’s nearly a decade old, its exploration of “black box” systems in critical sectors like healthcare and law is more relevant than ever.

An algorithm isn’t a magical arbiter of truth; it is the downstream consumer of a data pipeline. If the historical data feeding that pipeline contains decades of systemic bias, the AI doesn’t solve the problem; it scales it. Hidden assumptions become system rules, automated and propagated at a rate no human ever could manage. How to sanitise bias remains an open question; there is no single mathematical model of fairness, and the competing definitions often can’t all be satisfied at once.
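To make that concrete, here is a minimal sketch with entirely hypothetical data: the same set of predictions can satisfy one common fairness criterion (demographic parity, i.e. equal rates of positive decisions across groups) while badly violating another (equal false positive rates). The group labels and outcomes below are invented purely for illustration.

```python
# Hypothetical toy data: two fairness criteria disagree on the same predictions.

def positive_rate(preds):
    """Fraction of people receiving a positive decision."""
    return sum(preds) / len(preds)

def false_positive_rate(labels, preds):
    """Fraction of true negatives that were wrongly flagged positive."""
    negatives = [p for l, p in zip(labels, preds) if l == 0]
    return sum(negatives) / len(negatives)

# Invented outcomes for two demographic groups (1 = positive decision/label).
labels_a, preds_a = [1, 1, 0, 0], [1, 1, 0, 0]
labels_b, preds_b = [1, 0, 0, 0], [0, 1, 1, 0]

# Demographic parity: both groups get positive decisions at the same rate.
print(positive_rate(preds_a), positive_rate(preds_b))   # 0.5 and 0.5 -> satisfied

# Equal false positive rates: group B is wrongly flagged far more often.
print(false_positive_rate(labels_a, preds_a))           # 0.0
print(false_positive_rate(labels_b, preds_b))           # ~0.67 -> violated
```

Which of these criteria “fairness” should mean is a policy choice, not a modelling detail, which is exactly why there is no one mathematical fix.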

The black box problem is particularly dangerous in critical industries like law and medicine. In these fields, transparency isn’t just a feature; it’s a requirement of law. If a Senior Data Architect cannot explain why a system reached a specific conclusion, that system is a liability, not an asset.

The real challenge for the next two years isn’t just building faster models. It’s building accountable, transparent architectures. We need to find the careful balance between human intuition and algorithmic efficiency.

As Fry argues, the goal shouldn’t be to replace the human, but to use the machine to highlight our own blind spots. Before we talk about a “revolution,” our focus should be on building systems that support decision-making, not replace it.