Bulletin 12 April 2019. Alexa, please explain “Explainability in AI”

A funny thing happened to me as I hosted a webinar last week. Grosso modo (as they say in French), we were talking about the complexities of artificial intelligence when, jokingly, I suggested that we could ask a bot to explain it. “Alexa, explain AI,” said a panellist. We chuckled; moments later, a message popped up on my iPad from a listener: “You just made my Alexa sit up!”

Of course, I was unable to resist this clear opportunity. “Alexa, play Breakfast in America by Supertramp,” I went on. “Yep! Playing out now - good choice,” came the response. Achievement unlocked — controlling someone’s playlist via live internet TV. I know, right? Behind my typically calm, sober facade, I was giggling like a school kid.

I shall try to avoid the distraction of ‘going off on one’ about the potential security risks of such functionality (“Alexa, open all the doors!” shouted through the letterbox) — though it does remind me of the time when computer hardware used to be delivered with all the doors left open, and it was up to the administrator to close them — turn off rsh, anonymous FTP and so on. No doubt these pesky devices will need the same.

Distraction mostly avoided. Back on the AI track, and indeed on explaining things: several recent discussions have turned to the need for AI to explain itself. Since some client work last year, I’ve been following the rise of ‘explainability’ in AI: witness the emergence of a new piece of jargon that feels just like a normal word, at least to people who have used it for a while (and cf ‘observability’ from last week’s bulletin).

Explainability in AI, or indeed XAI (You want more jargon? We got it!), has quickly become a mainstream topic due to repeated examples of potential bias in its results. I say ‘potential’ not through any unconscious bias of my own — though I have plenty — but because we lack the information we need to know which elements are actual bias, vs which are not.

Explainability goes much further than justifying why a particular conclusion was reached: it enables us to identify flaws in both inference and training. Inference is largely the domain of the algorithm (basically, whether we use a sharp or a blunt instrument to cut the data we have), whilst training is about the data set.
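To make the inference side concrete, here is a minimal sketch of one common explainability technique, permutation importance: shuffle each input feature in turn and see how much the model’s accuracy drops. The scikit-learn dataset and random-forest model below are illustrative assumptions, not a reference implementation of anything discussed above.

    # Sketch: permutation importance as a basic explainability check.
    # A large accuracy drop when a feature is shuffled suggests the model
    # leans heavily on that feature when making its inferences.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")  # top five features the model depends on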

So, yes, guess what? If you train up your image recognition on white faces, it will be less able to identify non-white features. But without explainability, you won’t know which failures were down to algorithmic bias in inference; which were down to a lack of suitable training data; and which were ‘simply’ because the software was crap.
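As a rough illustration of how you might start separating those cases, the sketch below assumes two hypothetical CSV files and column names (group, label, predicted): one question is whether any group is under-represented in the training data, the other is whether the error rate climbs for exactly those groups on a held-out test set.

    # Hypothetical file and column names, for illustration only.
    import pandas as pd

    train = pd.read_csv("train_labels.csv")       # assumed columns: group, label
    test = pd.read_csv("test_predictions.csv")    # assumed columns: group, label, predicted

    # Is any group under-represented in the training data?
    print(train["group"].value_counts(normalize=True))

    # Does the error rate rise for those same groups?
    errors = (test["label"] != test["predicted"]).groupby(test["group"]).mean()
    print(errors)

If the under-represented groups are also the ones with the highest error rates, the training set is the prime suspect; if the errors are spread evenly, the algorithm (or simply crap software) is more likely to blame.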

The debate in this area is complex: for example, do we need to understand how an algorithm reached its conclusions before we can trust it, as this article suggests? I’m not convinced, as I’m more concerned with why than how; I also think there’s a spectrum of usage models between complete denial of the outputs of any algorithm, and wholesale, automated adoption of what it says.

In addition, we need to be careful not to treat explainability with the same level of hype as we seem to apply to AI as a whole. If one starts from the point of view that we are approaching the singularity (in which AI is as smart as the human brain), then we have reason to panic… but we are decades away from that.

Don’t get me wrong: examples like the algorithm-driven targeting of political ads on Facebook are enough to give us cause for concern. Now that I think of it, the Alexa point is linked, in that we allowed our first iterations of AI to just work; as ever, governance, the poor nephew, is having to catch up, running behind the cart of cabbages like Oliver Twist.

As I write this, I notice (https://www.bbc.co.uk/news/technology-47894492) that explainability may be written into US law: critics are suggesting that such a move could “limit the benefits” of AI. I don’t believe this is the case: if we can work out ways to discover amazing things about the data that we hold, we should also be able to say how our algorithms reached their conclusions. Or indeed, offer a legal dispensation for an algorithm which demonstrates clear benefit without bias, for example in a healthcare scenario.

Such a position would leave humans in ultimate control. Whatever our foibles (and indeed, biases), that has to be a good place to start.

Smart Shift: Music, books and the Man. In this week’s excerpt from Smart Shift, we look at the nature of streaming services, and their impact on music, video and book publishing. The bottom line: artists are earning more than ever, but there are more of them than ever.