Bulletin 24 August 2018 - We are porcupines, in a hindsight-oriented world

We are under attack. We spend our lives in deliberate, naive ignorance of the thousands of dangers we face, in case we scare ourselves rigid, thereby rendering ourselves completely, and counterproductively, useless.

At the same time, we are constantly calculating risk, like tennis players watching the trajectories of multiple balls and somehow managing to swipe them away before they hit. As a race, we have proved ourselves immensely good at this, by the fact that we are still here.

Which makes it all the stranger that, when we are faced with a new cyber-related issue, we seem to default to the former. I use the term “cyber-related” as it’s not strictly security, or privacy, that is always involved. Case in point: the continued attempts by social media giants to get their houses in order.

When something new (like Cambridge Analytica and their ilk) happens, we act like porcupines on the (super)highways of the Information Age, somehow confident that the protections we have developed over aeons will continue to serve us. In the case of political interference and behavioural manipulation, however, our in-built mechanisms are clearly inadequate.

From a vulnerability perspective, what took place is (and continues to be) child’s play: identify, through a process of repeated testing, what is most likely to get a reaction from a person, then do that thing. However smart we think we are, our inability to do nothing when provoked has been our undoing.

(And, if I’m being too general in the above, let me be clearer: it’s the “like” or the “retweet” button that we just can’t resist clicking).

Over the years, I’ve talked about security risk being more akin to permeability than any single big nasty. Bad things continue to happen, like waves buffeting the harbour wall: that is their nature. We can keep our guard across areas we understand (not clicking on spam email for example).

It appears, however, that we can’t do the same against areas we don’t (yet) know to be bad. A new attack surface, when it appears, does so largely unprotected. For the past three decades and probably longer, it has been thus.

Which is where it gets strange. Simply put, we don’t have law-making mechanisms that take this into account. We fall down with horror, pick ourselves up and carry on with our lives, as we look to how existing laws and compliance frameworks might deal with this ‘new’ situation, all the while never dealing with the root cause.

And thus we continue to create locks for stable doors, long after the notion of a stable has been superseded several times over. I’m not sure whose interests it serves, beyond anyone that wants to profit from the discrepancy. But, on we go.

Foreword to Smart Shift: From Kibish to Culture

In apropos news, a couple of years ago I wrote a book about how technology was changing how we need to think. It was never published, but has been languishing on my hard drive. Ironically, at first I didn’t want to finish it as it kept going out of date; as things now stand, it was pre-Trump, pre-Cambridge Analytica and pre-GDPR, so it is already well past its best-before.

At the same time, as I recognised how quickly things change, over the course of writing it morphed into a history of technology’s impact. I’ve decided to put the whole thing online, both for posterity and, potentially, as a foundation for some future work. I’ll be doing this over time, but for now you can read the foreword, here.