Bulletin 17 May 2019. On personal epiphanies and Schrödinger’s AI

There must be a law somewhere which states that epiphanies have to be personal. That would explain the feeling that businesses never really learn the basics of, say, agility or how to build software right: each generation has to work it out for themselves, possibly through a-ha moments following periods of trauma.

What’s strange is how we don’t really acknowledge that to be true. We do talk in terms of organisational maturity (there’s a four-stage model for everything), but we don’t recognise that new people can take things back a step or two, as they haven’t learned what the more ‘mature’ might see as the basics.

As such, we normalise where we are, and take for granted that others might share our understanding. A case in point is artificial intelligence, which (as is very apparent) is currently seeing a surge of interest. Or is ‘it’, given how AI means different things to different people? I’ve been fortunate enough to talk to quite a few folks over recent months, who tend to fall into certain camps.

For the uninitiated, AI tends to refer to software algorithms that follow a two-step process: the first is to ‘learn’ some rules from a large set of data (e.g. what a cat looks like), and the second is to ‘infer’ information from new data (e.g. “That’s highly likely to be a cat”). The way these algorithms work is, the more data they get, the better they can infer stuff.
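For the curious, here’s a back-of-the-envelope sketch of that two-step loop in Python, using scikit-learn. The feature names and numbers are entirely made up for illustration; a real system would learn its own representation from images rather than a couple of hand-typed columns.

```python
# A minimal sketch of the learn-then-infer loop, using scikit-learn.
# The feature vectors and cat/not-cat labels below are invented purely
# for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: 'learn' from a (tiny, illustrative) set of labelled examples.
X_train = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])  # e.g. 'whiskeriness', 'pointy-ear-ness'
y_train = np.array([1, 1, 0, 0])                                      # 1 = cat, 0 = not a cat
model = LogisticRegression().fit(X_train, y_train)

# Step 2: 'infer' from new data -- the answer is a probability, not a certainty.
new_example = np.array([[0.85, 0.75]])
p_cat = model.predict_proba(new_example)[0, 1]
print(f"That's {p_cat:.0%} likely to be a cat")
```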

So, no, it’s not really intelligence per se, but it’s still pretty handy. Academics have been doing it for years, using various forms of algorithm (such as neural networks): in general the goal is to improve how such algorithms work, oh, and to write papers about it all. This mindset exists both inside academia and in business, as academia is where AI expertise is most likely to come from.

Outside of academic types, there are a couple of epiphany-type situations that emerge. The first is how organisations are moving from discrete AI projects to a notion that AI operates like a service, to be applied as and when useful. I’ve seen this a-ha! shift in consulting firms, I’ve seen it manifest as the issue of ‘silos of AI’, and I’ve seen it discussed in terms of how DevOps principles can be applied to AI applications.

A second sudden realisation comes when one starts thinking about whether one can have confidence in the results of a given AI application. AI is both probabilistic (again, “That’s highly likely to be a cat”) and opaque: algorithms don’t necessarily know why they have reached a certain conclusion. So-called ‘explainability’ (yes, there’s a term for it) comes in response to a very real issue: it’s one thing to mis-identify a cat; it’s another to, say, rule out women from job searches.
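Again for the curious, here’s a rough sketch of one simple flavour of explainability: checking which inputs a model actually leaned on when reaching its conclusion, via permutation importance. It continues the made-up cat example above, so the feature names are invented; real explainability tooling goes considerably further than this.

```python
# A minimal sketch of one crude form of 'explainability': shuffle each feature
# in turn and see how much the model's score drops. A large drop means the
# model was relying on that feature to reach its conclusion.
# The data and feature names are made up, continuing the cat example above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])
feature_names = ["whiskeriness", "pointy-ear-ness"]  # illustrative names only

model = LogisticRegression().fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: importance {importance:.2f}")
```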

On the point of dodgy results, there are various reasons why they might happen: learning-data quality and quantity, algorithm effectiveness and other factors might all play a part. Without explainability, it’s difficult to know the cause. But anyway: the result is an ohmahgawd moment, when all that squeaky-clean AI looks like any other bit of flawed software. It’s like that moment after a purchase when you realise, yeah, it’s just a car. Whilst epiphanising, one might even say, “it’s just garbage in, garbage out, innit?”

Clearly (from my anecdotal experience), to the initiated, AI’s potential for bias is a very real and challenging issue, and explainability offers at least part of the response. To those not so far down the track, it is a bit like car accidents or (cf. last week) security breaches, in that they only happen to other people, are not as important as doing the clever stuff, etcetera.

Perhaps the bottom line is a choice: either we can impose a need to think about bias and explainability through (legal) governance, or we can adopt a wait-and-see approach in which people work it out, and have epiphanies, by themselves. Given the possible results of the latter approach (amounting to misdiagnosis for reasons nobody can pin down), I personally would prefer the former.

Thanks for reading, Jon

P.S. More Smart Shift next week.