AI is changing the world from health to transport - here is why you should trust it like your own brain

Heath Robinson's machines were fantastically complicated, but at least you could just about understand how they worked. What about AI? - William Heath Robinson

DeepMind, Google’s artificial intelligence division famous for jaw-dropping but arcane achievements, like designing an algorithm that can beat the world’s best player of Go, has shown off a prototype of a machine with an equally impressive but this time real-world application - scanning eyes for complex and sometimes fatal diseases.

Why is it Important?

Two reasons. One: it is a practical breakthrough. While justly proud of its triumph in Go, DeepMind co-founder Mustafa Suleyman’s previous big example of an applied win was his company’s prowess in cooling Google’s server farms - which, while important, is less sexy than saving lives. Two: DeepMind promises that its device dishes out diagnoses every bit as good as those of world-leading eye doctors and, those doctors say, it can also explain the reasoning behind them. Which brings us on to explainability.

The key issue and its impact 

As AI advances there is increasing concern that its workings are becoming so complex that, while it may spit out the “right” answer, no one can understand or explain why. Essentially, we can see the data we feed into the AI machine, and we can see the results that come out the other end, but the actual crunching and calculation in the middle is opaque. This is known as the Black Box problem.

Black Boxes are a big hurdle to the real-world application of AI because in many fields, like justice or healthcare, explainability is critical. For example, an AI algorithm used to help judges make sentencing or parole decisions may crunch all kinds of background data to evaluate the risk of a prisoner reoffending. But a yes-or-no answer is not enough. For its judgement to be trusted, the algorithm needs to be able to explain how it has weighed that data, especially since data sets are vulnerable to all kinds of bias.
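To make that concrete, here is a deliberately toy sketch - not any real sentencing tool, with features, weights and thresholds invented purely for illustration - of the difference between an algorithm that only returns an answer and one that also shows how it weighed each input:

```python
# Purely illustrative: a toy reoffending-risk score, NOT any real sentencing tool.
# The feature names, weights and threshold below are invented for the example.

FEATURES = ["prior_offences", "age", "months_since_last_offence"]
WEIGHTS = {"prior_offences": 0.6, "age": -0.02, "months_since_last_offence": -0.05}
BIAS = 0.5
THRESHOLD = 1.0

def risk_score(person: dict) -> float:
    """Weighted sum of the input features plus a bias term."""
    return BIAS + sum(WEIGHTS[f] * person[f] for f in FEATURES)

def opaque_decision(person: dict) -> str:
    """The 'black box' view: only the final answer comes out."""
    return "high risk" if risk_score(person) > THRESHOLD else "low risk"

def explained_decision(person: dict) -> dict:
    """The explainable view: the same answer, plus how each feature contributed."""
    contributions = {f: WEIGHTS[f] * person[f] for f in FEATURES}
    return {"decision": opaque_decision(person), "contributions": contributions}

if __name__ == "__main__":
    person = {"prior_offences": 2, "age": 35, "months_since_last_offence": 18}
    print(opaque_decision(person))     # just an answer
    print(explained_decision(person))  # the answer plus the weighting behind it
```

Real AI systems are, of course, vastly more complicated than a weighted sum of a few numbers - which is exactly why producing the second kind of output is so hard.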

The Solution

AI black boxes are not related to the emergency recording devices in aircraft, but the recent Boeing Max crashes are a reminder that explainability in complex systems is not just an AI problem.

Jeff Wise has written about how, after the Second World War, some systems became so complex that an entirely new safety protocol emerged. It abandoned the old way, which was to guarantee the safety of each component in a system, in favour of evaluating and ensuring the safety of the interactions between components. This new protocol, called System-Theoretic Process Analysis (STPA), was the brainchild of an aeronautics professor, Nancy Leveson.

The question is whether AI processes are susceptible to the same evaluation or are simply far too complex. Opinion is split.

Some experts think that we are merely at a very early stage of AI and that, just as in the early days of computing, very few people understand the whole system; individuals understand only tiny parts of the process. But, they say, as computing developed our understanding improved with better tools and diagnostics, and the same will happen with AI - future diagnostic tools will be able to monitor AI algorithms just as sensors monitor your car engine today. These experts think such sensors will be able to report when the AI is not performing well, and that the whole field will move from today’s era, in which AI is dominated by theorists, to one in which it looks more like engineering.
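What might such a "sensor" look like? One hypothetical sketch - the model, window size and alert threshold here are all invented for illustration - is something that tracks a deployed algorithm's recent error rate and raises a flag when it drifts out of bounds:

```python
# A hedged sketch of a 'sensor' for a deployed model: it tracks a rolling
# error rate and warns when performance drifts. Everything here is illustrative.
from collections import deque

class ModelMonitor:
    def __init__(self, window: int = 100, max_error_rate: float = 0.1):
        self.outcomes = deque(maxlen=window)  # 1 = wrong, 0 = right
        self.max_error_rate = max_error_rate

    def record(self, prediction, actual) -> None:
        """Log whether the model's prediction matched the eventual ground truth."""
        self.outcomes.append(0 if prediction == actual else 1)

    def healthy(self) -> bool:
        """True while the recent error rate stays within the allowed bound."""
        if not self.outcomes:
            return True
        error_rate = sum(self.outcomes) / len(self.outcomes)
        return error_rate <= self.max_error_rate

# In a live system, each prediction would be checked against the ground truth
# as it becomes available, and the monitor queried periodically.
monitor = ModelMonitor(window=50, max_error_rate=0.2)
monitor.record(prediction="high risk", actual="low risk")
if not monitor.healthy():
    print("warning: model error rate above threshold, investigate")
```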

Others think that our desire to explain AI is slightly irrational. After all, we're used to dealing with "black boxes" all the time. The human brain is a perfect example. We can’t reverse engineer it to demonstrate which out of a million hypothetical inputs tipped the balance and led you to a certain decision. You may be good at doing sums in your head, but what you can’t do is explain how you know that 7x7 is 49. You know that it is. But you don’t know how you know. And if you think you do, and can point to some epiphany at school, some single experience, you are probably post-rationalising. Our brains are good at this, making decisions which we rationalise afterwards. But that’s not the same as explainability.

But a new paper in Nature suggests that this similarity between AI black boxes and the human brain should in fact provide a way to understand black boxes better. A blog post from researchers at MIT, lead authors on the paper, describes the problem:

“We interact numerous times each day with thinking machines. We may ask Siri to find the dry cleaner nearest to our home, tell Alexa to order dish soap, or get a medical diagnosis generated by an algorithm. Many such tools that make life easier are in fact “thinking” on their own, acquiring knowledge and building on it and even communicating with other thinking machines to make ever more complex judgments and decisions — and in ways that not even the programmers who wrote their code can fully explain."

Even when explainability is possible in principle, they go on, it is not always practically possible, nor does it always stay that way.

"AI agents can acquire novel behaviors as they interact with the world around them and with other agents. The behaviors learned from such interactions are virtually impossible to predict, and even when solutions can be described mathematically, they can be so lengthy and complex as to be indecipherable.”

But fear not, they say. Just because we aren't able to explain precisely why humans or other animals make certain decisions, that doesn’t mean they are closed books. Far from it. Through empirical observation and experimentation we can understand a great deal about them, their behaviour and their impact. The same goes for AI, the paper says. The solution is a new field of study called “machine behaviour”, which would study this new breed of “thinking” machines just as we have always studied thinking creatures.
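In code, the empirical spirit of machine behaviour might look something like the following hypothetical sketch: treat the model as a black box - here a stand-in function whose internals we pretend we cannot read - and run a controlled experiment, varying one input at a time and observing how its behaviour shifts:

```python
# A minimal sketch of studying a model empirically, behavioural-science style:
# probe it with controlled inputs and measure what it does, without ever
# looking inside. `black_box` is an invented stand-in for an opaque model.
import random

def black_box(income: float, age: float) -> bool:
    """Stand-in for an opaque model; imagine we cannot read this code."""
    return (0.7 * income + 0.1 * age + random.gauss(0, 5)) > 60

def approval_rate(income: float, age: float, trials: int = 1000) -> float:
    """Empirically estimate the model's behaviour at a given input point."""
    return sum(black_box(income, age) for _ in range(trials)) / trials

# Hold age fixed and sweep income: an observational study of the machine,
# not an attempt to decode its internals.
for income in (40, 60, 80, 100):
    print(income, round(approval_rate(income, age=30), 2))
```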

The Bottom Line

To establish trust among human users, it seems certain that some AI processes in critical areas like medicine, justice, autonomous vehicles and planes (to name a few) will need to be able to explain why they decided as they did. DeepMind has plenty of competition in the AI eye-scanning market. But what may set it apart is the ability to explain the diagnoses it reaches, evaluating each of the tens of millions of pixels in each scan. Explainability, in this case, moves from an arcane theoretical concept to a critical competitive advantage in a market already worth billions of dollars each year.
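How might a system attribute a diagnosis to individual pixels at all? One simple, generic idea - shown below purely as an illustration, and not a claim about how DeepMind's system actually works - is occlusion sensitivity: blank out one patch of the image at a time and measure how much the model's confidence drops.

```python
# A generic, illustrative occlusion-sensitivity sketch: regions whose removal
# hurts the model's confidence most are the ones the diagnosis depended on.
import numpy as np

def occlusion_map(model_confidence, image: np.ndarray, patch: int = 8) -> np.ndarray:
    """model_confidence: any function taking an image and returning a score."""
    baseline = model_confidence(image)
    heatmap = np.zeros(image.shape[:2])
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0  # blank out one patch
            heatmap[y:y + patch, x:x + patch] = baseline - model_confidence(occluded)
    return heatmap  # high values mark regions that mattered most

if __name__ == "__main__":
    # Toy usage with a dummy 'model' that just averages pixel intensities.
    dummy_model = lambda img: float(img.mean())
    scan = np.random.rand(32, 32)
    print(occlusion_map(dummy_model, scan).shape)
```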

In other areas, however, trust may not be so critical, and we may come to feel happy interacting with black box AI algorithms because our own opaque computing devices - the brains in our heads - observe their impact and decide that, on balance, they seem ok to us.