Bulletin 5 October 2018. The future isn’t going to happen, and this is why
One of my disappointments about working in IT is that I wasn’t there at the beginning. No, I don’t mean offering tea to Babbage and Lovelace, though that would have been quite the coup. I’m thinking more about the early days of silicon, when things really started to heat up.
Having mentioned Babbage, his situation does illustrate a point which persists to this day: the Information Age has been defined as much by what isn’t possible, as what is. I’ve previously talked about threshold theory: some things happen simply because they have become possible, where before they were not.
In Babbage’s case, while his plans for a (second) calculating machine were valid, he couldn’t afford to build it — the amount of work required (and therefore cost) to engineer a machine of such complexity was beyond his fundraising ability. It’s been built, subsequently, and it works: you can see it in the Science Museum.
Similarly, I remember speaking to Augustin Huret just after his (software-based) algorithmic inference capability had been acquired by the consulting firm BearingPoint. The point: his algorithms had originally been devised by his father in the 1970s, but no machine was powerful enough to run them. By the time Huret Jr came on the scene, they could run, but the cost was still prohibitive. Today, it costs a few hundred quid a pop.
My point isn’t about threshold theory; it is, however, about the fact that we already have much of the maths, we’re just waiting around for infrastructure that can support it. Not a dig, just an observation. To extrapolate, yes, we will be able to apply such algorithms more broadly in the future. But, unless someone invents a new branch of maths, we will also largely be constrained by them.
What I’m fumbling with is my strange lack of conviction about the singularity, SkyNet, the notion that we will all be creatures of (potentially diabetic) leisure as robots do everything for us, and so on. There’s something missing in the equation, like that flow diagram which incorporates “insert something magical here.”
At the moment, AI is operating in two dimensions. The first is deep: for a well-bounded domain, such as voice, image, or other expected-pattern recognition, we can expect linear improvements in capability, perhaps to a point where it will be difficult to discern whether we are being listened to by a computer, or by a human. For simple, code-able instructions, this sounds quite the thing, if that’s your preferred form of communication.
Meanwhile, we’re looking to go broad: “Give me a data set and offer me insights about it.” Algorithms can offer insights in one of two ways: either they work on the basis of anomalies, looking for unexpected messages in bottles across an ocean of flotsam; or they require seeding with some kind of domain knowledge, context or hypothesis which can then be tested. Some companies (I was talking to Google yesterday) are getting very good at this stuff.
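By way of illustration (a toy sketch of my own, and emphatically not how Google or anyone else does it), the first approach, anomaly spotting, can be as simple as flagging values that sit a long way from the rest of the series. The daily_visits numbers below are entirely made up.

```python
# Toy anomaly spotting: flag values more than two standard deviations
# from the mean. The daily_visits series is invented for illustration.
from statistics import mean, stdev

daily_visits = [120, 131, 118, 125, 122, 540, 119, 127, 124, 130]

mu = mean(daily_visits)
sigma = stdev(daily_visits)

# Keep (index, value) pairs that look out of place.
anomalies = [(i, v) for i, v in enumerate(daily_visits)
             if abs(v - mu) > 2 * sigma]

print(anomalies)  # the 540 stands out; everything else passes quietly
```

Real systems use rather cleverer statistics than this, but the shape of the task is the same: no context, just “tell me what looks odd.”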
And yet, as I say, something’s missing from the equation. I don’t know what it is, but I am starting to understand why it is missing. Simply put, on one side we have mathematics, algorithms, patterns and rules; and on the other, biology, hormones, irrationality, caring, identity. What the algorithms can’t do is be a bit dumb or flighty, go the extra mile, or wait around long after it feels pointless. Nor would they want to, if they had a concept of wanting in the first place.
I’m saying this as I think it’s a really important element of what we will see in years to come. Computers may be able to generate mood music that pushes the right buttons: all algorithms have to do is A/B test every combination and tweak their variables as they home in on the right ones. In other words, they can do what computers can do, sometimes with world-changing consequences.
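That “test every combination and keep what scores best” loop is no more mysterious than the sketch below. The variables and the score() function are made up; in practice the score would come from measured listener response rather than a formula.

```python
# Naive exhaustive A/B-style search: try every combination of the
# variables, score each one, keep the winner. score() is a stand-in
# for whatever audiences actually respond to.
from itertools import product

tempos = [60, 90, 120]          # beats per minute
keys = ["C major", "A minor"]
volumes = [0.5, 0.7, 0.9]

def score(tempo, key, volume):
    # Invented preference: mid tempo, minor key, moderate volume.
    return -(abs(tempo - 90)
             + (0 if key == "A minor" else 10)
             + abs(volume - 0.7) * 100)

best = max(product(tempos, keys, volumes), key=lambda combo: score(*combo))
print("Best combination:", best)
```

The machine homes in on the right buttons without ever knowing, or caring, why they are the right ones.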
All of this to reinforce that we may be heading towards an augmented world, one in which mundane and mechanical tasks and decisions are automated, and we have extra information to work with. But the notion that we are heading anywhere more futuristic than that remains highly unlikely, in my opinion. To wit: either I am wrong, in which case we need to start creating legislation around a post-singularity, artificially intelligent world, or I am right, in which case we need to start planning around notions of algorithmic augmentation.
Even if we do arrive at some post-singularity world at some point, the chances are that we will have already spent several decades working through this far less exciting, yet equally risky, scenario: we should, therefore, plan for it. This means not being distracted by media-friendly notions such as ‘robot rights’, nor dwelling (as we do) on issues such as privacy as it is currently framed. Complacency is not an option: the world of the next 5-10 years needs a whole new set of laws and norms, based on what algorithmic augmentation brings.
Here’s an article for this week.
Travellers know what they want: all we have to do is listen to them
With my programme-director-for-travel-forward hat on, I’m always amazed at how easily we switch from one psychology to another. The ill-fated Stanford Prison Experiment is one case in point, but we can look closer to home — not least how we create vs how we use technology. In this article, I propose the idea of listening to our travelling selves, rather than trying to second-guess the notional needs of others. I know, crazy, right?
What is the smart shift? And the incredible shrinking transistor
In the final section of Smart Shift’s first chapter, I summarise the book’s purpose — “as the transcontinental train of change hurtles on, is there any handrail that we can grab? The answer, and indeed the premise for this book, is yes.” As it didn’t seem fair to leave you on quite such a cliffhanger, I’ve included the first section of the next chapter as well. Which, not coincidentally, covers Babbage, and indeed the rise of silicon. But not Ancient Greece: for that, you’ll have to wait until next week.
Have a good weekend and all the best, Jon