Retrospective thoughts on Smart Shift

Smart Shift, a book about the impact of technology on society, is now published online. Here are my thoughts on its multi-year gestation.

About seven years ago, I decided to write about everything I thought I’d learned about the impact of technology on society as a whole. Having been down in the weeds of infrastructure (either as a job, or as an analyst), I wanted to express myself, to let free some ideas that had been buzzing in my head for some time. I know, I thought, why not write it as a book. That’ll be simple.

I already had some form when it came to getting into print. Biographies of a couple of popular bands, a technology-related book and various mini-publications had given me experience, some contacts and, I believed, an approach which was, one way or another, going to work.

Fast forward a few years and many lessons, and we have a book. While I took advice and had interest at beginning, middle and end, while I worked through the process of proposals, of creating a narrative that fitted both what people wanted to read and how they wanted to read it, of having reviews and honing the result, it was never published.

And, perhaps, it was never going to be, nor was it supposed to be, for reasons I didn’t fully understand. The first, so wonderfully exposed recently by screenwriter Christopher McQuarrie, is the lottery nature of many areas of the arts: writing, film and music.

The crucial point is that the lottery is symptom, not cause: a mathematically inevitable consequence of the imbalance between a gloriously rich seam of talent-infused material and a set of corporate channels with limited bandwidth, flexibility and, indeed, creativity, all while navigating a distracting ocean of flotsam and jetsam. While the background is open to debate, the consequences are the same: just “doing the thing” right doesn’t inevitably lead to what the industry defines as success.

Much to unpick: a different thread, of course, could be that my own book is either flotsam or jetsam. A better line of thinking still is to recognise a number of factors spawned from the above, not least: what is it all for?

Before answering this broader question (broadest of all questions?) it’s worth pointing out the nature of this particular beast. Let me put it this way: any treatise that starts with the notion that things are changing (e.g. anything about technology) is signing its own best-before warrant. The window of opportunity, and therefore one’s ability to deliver, is constrained by the time period one is covering, and the rate of change within it.

In other words, over the period of writing, I was always out of date. No sooner had I written one thing than the facts, the data points, the anecdotes started to wilt, to wither on the vine I had created for them. It isn’t by accident that I ended up delving into the history of tech, as I had already captured several zeitgeists only to see them die and desiccate before my eyes. 

On the upside, I now have a book which could (still) be revised: each chapter is structured on the principle of starting with something old, and using that as a foundation to describe the new. Canny, eh?

Returning to “what is it for”, one point spawns from this: there’s a place for history in the now. I know, that’s not blindingly insightful, but the link between the two is often shunned in technological circles, which prefer to major on revolutions rather than deeper-rooted truths.

Meanwhile, and speaking of the now, one needs to accept the singular consequence of both lottery culture and rapid change, simply put: if you’re a technologist, the chances of getting your message out there in book form are minuscule if you rely on a relatively slow-moving industry. Which very much begs the question: what is the point? If the answer is to be published, then you may be asking the wrong question but, as Christopher intimates, good luck to you.

At this point, I’d like to bring in another lesson from my experiences with singing in a band, or in particular, what happens when only a handful of people shows up. It happens, but it doesn’t have to be a disaster: what I have learned is, if one person in the room is enjoying themselves, they become the audience. It’s humbling, uplifting and incredibly freeing to give just one or two people a great time through music. 

Put everything together and the most significant lesson from Smart Shift is this: my job, and my passion, is to capture, then share, an understanding. The job, then, is to balance reach with timing: better that a handful of people get something at the moment that it matters, than a thousand receive old news.

The bottom line is just do it, get it out there. Grow your audience by all means, build a list of people who want to hear what you have to say, and have something to give back in response. But start with the right ones, with the person at the back of the room who claps along. Not because of any narcissistic ideal but because, if the job is to communicate, an active audience of one is infinitely more powerful than not being heard at all.

 


Travel Forward 2019: Let’s do this

You know that thing when you realise there’s under two weeks to go? I’m reviewing the final PDFs of the Travel Forward conference agenda right now and once again I’m staggered to think how it has gone from the aspirational, yet largely empty canvas of six months ago, to the packed, exciting and dynamic programme we now have. 

As I’ve been briefing speakers, the message has been simple: senior technology decision makers from across the travel industry will be coming to days one and two of the conference… but what happens once they have gone home, slept, woken and arrived back in their workplaces on day three?

Our goal is not only to inspire but to educate, with practical steps that enable attendees to take their businesses forward (the clue’s in the name). I say “our” – I’ve been lucky enough to work for, and with, some really smart people to pull this programme together.

So, team, speakers and attendees, let’s do this – let’s make Travel Forward 2019 a conference to remember, where preconceptions are left at the door and where hopes and dreams are replaced by practical and actionable steps towards genuine, technology-powered opportunity. 


Bulletin 13 September. On the depth of learning and embracing frequent failure

New wine in snake skins

One of my favourite books was, and remains, The Voyage of the Dawn Treader by C. S. Lewis (yes, I cried when Reepicheep went over the sea). And one of my favourite passages is when Eustace is turned into a dragon. From a note (I think) he learned that, to become human again, he needed to shed a dragon skin (like a snake skin) and bathe in a certain pool.

So, he tried. He shed a skin, but it wasn’t enough. So he shed another, then another, bathing each time, but each time he emerged a dragon. Somewhat unexpectedly, Aslan the lion happened upon him: it’s not working, said Eustace. That’s because you are doing it wrong, said Aslan, who took out a huge claw and cut through Eustace’s many skins like an onion. That time, he emerged from the pool a boy again.

A couple of times in my career, I have been quite convinced I know it all… working back from the punchline, only to discover, quite categorically and without mercy, that I seriously do not. The first came just after I had been over-promoted to the point of deep stress, when working as an IT manager for a subsidiary of Alcatel.

Alongside the coping strategies and very real learning I was picking up on the job (I have, essentially, dined out on that experience ever since), I came to the conclusion that I had this management thing nailed: I doubted there was anything else to learn about keeping saucers on sticks, running meetings, facilitating, time management or anything else administrative.

I then joined Admiral Management Services, a company whose ethos gave short shrift to any such idea of grandeur. Yeah, whatever, was the attitude: take some minutes and be a good boy, would you? Learning the hard way (failing fast and frequently), I unpicked everything I thought I knew and re-knitted it into some semblance of genuine best practice. Which I have also dined out on ever since.

The next big moment of big-headedness came a couple of years into my analyst career, when (at the heights of the dot-com) I thought it a really good moment to set up on my own. The lows of the dot-bomb followed almost immediately, mirroring both my feelings of utter incompetence and my bank balance. So many lessons learned, not least, cooking on a shoestring.

I didn’t mean to say all that: I was only going to talk about my memories of being in (what felt like) financial difficulty: any money we had was always in the wrong place, cheques bounced and bills went unpaid, with banks gleefully adding their fees to any debts incurred. I wasn’t going to say that either: I was only going to make the point that getting back on track didn’t just take extra effort: it took extra effort beyond what I thought extra effort meant at the time.

Like Eustace, sometimes the problem goes far deeper than we have the ability to understand. Not least in questions of learning, particularly when we come to it from a position of knowledge. Surely, people understand what is being discussed, we say, or can understand it as long as we explain it correctly. Even if they are a bit behind. And so, in areas of so-called ‘new thinking’, we can have whole conversations without realising that what we are saying is of very little relevance.

I’m coming to think this is the case for my own current area of specialism, DevOps. Even as I discuss how to make it work better (a.k.a. ‘to scale’), I have my good friend (and practitioner) Andy’s words ringing in my head: that nobody is really doing it, they just say that they are. Perhaps to get the analysts off their backs. It’s not just DevOps: the reason we can keep saying the same things about best practice, webinar after webinar, year after year, is that people still don’t get it.

To wit. I’m not sure what the answer is but, based on my own experience, perhaps part of it is to recognise that we are a lot less mature than we would like to be, as organisations and as people. And we need to deal with this as Eustace did: not superficially, but digging really deep, getting right down to the base, beneath layers and layers of traditional practice.

Not to do so will cause us to reinvent whatever we are talking about, for a number of reasons. First to avoid boredom — you can only hear people bang on about the same thing so many times, before cognitive filters push it into the background. Second because it loses its effectiveness as the world moves on. And third, because lovely marketing and PR people want something new to talk about, in the name of differentiation or thought leadership.

These days I have come to realise that ‘knowing it all’ is a false summit, a thinking person’s Tower of Babel. Rather than embracing my own inadequacy and giving up, I’ve found a joy and freshness in learning: in essence, I’ve come to terms with my own, valid feelings of imposter syndrome. Perhaps we could all do with recognising that we don’t all get it, neither at a superficial nor deep level.

There is no shame in this, as such an admission is the first step in actually working out what the heck is being discussed. Like Eustace, we can all benefit from digging particularly deep in terms of what we don’t know, understanding the problem we are trying to solve before applying the latest iteration of solutions.

Thanks for reading, Jon

P.S. I said these bulletins would be more factual from now. This one isn’t but I’m on holiday, so sue me.


Bulletin 23 August. Automating operations and what’s in a name?

Left brains and right brains

At a recent developer event, I happened to participate in a discussion about the role of what we call “operations”, that is, managing and running IT systems, networks, storage and all that. The view, universally it appeared, was that operational IT was on the brink of being automated away, therefore rendering such roles redundant. Discussion turned to the fact that other roles would be available, so nobody needed to worry about jobs.

Which was nice, but irrelevant. Because IT infrastructure is not going anywhere. It only occurred to my flummoxed self some way through the debate, that the quite senior, enterprise-based people involved were largely on the development side. I have been characterising these as the right-brain creatives, largely directed by innovation, aspiration and other uplifting motives.

Meanwhile, on the operational side are the left-brain types, who need to work with reality and whose fault it will be if things start to fail. And fail they do, for reasons ranging from power shortages to software bugs, and everything in between (it is completely relevant that one of the earliest recorded ‘bugs’ was a real insect, a moth found jammed in a relay of an early computer).

Wait, I hear the visionaries say. It’s all about orchestration now. Work things out up front, write it in configuration files, throw it at the pristine racks of servers and it will just, you know, happen. That’s all very well, and I’ve been to the co-located data centre facilities with minimal staff where, indeed, it does all seem to be auto-magical.

A ramification of this approach is that the configuration work still has to happen, even if it is specified in YAML (which stands for YAML Ain’t Markup Language. Don’t ask). Such work can be given to Site Reliability Engineers, a new super-race of individuals who are absolutely bloody brilliant at defining infrastructure so it will just work. I’m pretty sure that’s the spec.

Or it can be allocated to developers, who, as we all know, love (and I mean LOVE) to spend their time doing things that aren’t development. “If only I could spend more time defining my target infrastructure,” said no developer I have ever spoken to. Okay, sorry, I’m being glib. The point is, however, that the job has to happen, even if the interface moves.
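To make “the job” concrete, here’s a hedged sketch of the kind of thing that ends up in those YAML files. It uses a Kubernetes Deployment as an example (one common flavour of infrastructure-as-configuration); the service name, image and numbers are invented purely for illustration. The point is that every line is a decision somebody still has to make, and keep up to date.

```yaml
# Hypothetical example: ask the platform for three copies of a service,
# with explicit figures for the resources each copy should request.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-api                  # invented name, for illustration only
spec:
  replicas: 3                        # somebody has to choose this number
  selector:
    matchLabels:
      app: weather-api
  template:
    metadata:
      labels:
        app: weather-api
    spec:
      containers:
        - name: weather-api
          image: example.org/weather-api:1.0   # and this image version
          resources:
            requests:
              cpu: 100m              # and these resource figures
              memory: 128Mi
```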

There’s more. In terms of day-to-day, keep-the-lights-on operations, much of the effort goes into dealing with the consequences of poor or incompatible decisions. These can be, variously, badly architected solutions; applications being used for things they were never intended for; prototypes becoming live products because of shortening timescales; and so on.

Many such challenges are as likely to happen in software as in hardware. Ops people can suck their teeth for a reason: they remember the last time a certain thing was tried, and how badly it went for everyone involved. If you speak to someone shaking their head and looking negative, it isn’t because they were born that way, but because they have learned to be so.

We do have some potential hope coming from the latest, greatest trends in tech: I’m speaking about containers, microservices and all that (for the uninitiated, this means defining applications as a set of highly portable modules which can be run anywhere). With such models comes standardisation of everything ‘below’, which leads to less incompatibility, etc etc.

However, these ideas have a way to go. The Kubernetes container orchestration platform may become the de facto standard, for example; but it doesn’t yet have everything it needs to support storage, networking or security, nor is there a generally agreed approach to building a Kubernetes application. We may be standardising one thing, but the rest is still very much to be dealt with.

And then, perhaps all we will have done is shift the problem. With Kubernetes you can build fantastically powerful, yet complicated applications, bits of which could be running anywhere. And so, guess what, people will do just that, even when it is completely the wrong thing to do. And they will do it badly.

And when they do, somebody will need to be there to pick up the pieces, to work out where things stopped working, to isolate the problem and to feed back information that can prevent it from happening again. Who knows what they will be called, these clever people: something like “operations”, perhaps. No doubt in five years’ time, rooms of experts will tell us that such roles are on the brink of being automated away.

And round we shall go again.


Bulletin 9 August 2019. On microsegmentation and guardrails, and requirements as code

We are all surgeons now. Sorry, I mean programmers

Hello, and welcome to this week’s bulletin.

I’ve been spending quite a lot of time talking to people about microsegmentation recently. I didn’t mean to; however, it would appear that purveyors of microsegmentation products are like buses: nothing for ages, then three come along all at once.

What is microsegmentation, I hear you ask. For a start, it’s a word that means a great deal to those who know what it means, and just about nothing to anyone who doesn’t… whenever a new term comes along, it never ceases to amaze me how readily it gets thrown around.

“Trans-notionalism is all the rage,” we might say, within minutes of having heard the term and, potentially, whether or not we actually know what it means ourselves. I’m not sure if this is driven by disdain, fear or ignorance, but it certainly isn’t a desire to help others get on board.

Anyway, microsegmentation. A word which could mean many things, depending on context. If I say ‘networking’ it starts to shimmer in front of the eyes — oh, right, yes, network segmentation at a micro-level? That makes sense.

Which is kind of right, but then again, not really. Microsegmentation is actually about network security: you can define specific routes around a network, which (I guess) you call segments. You define them, generally, at the application level, which means you don’t have to reconfigure the network each time.

So you could, say, declare that services A, B and C can talk to each other. The notion then implies that nothing else can see (via encryption) what happens between the three. Quite handy if (say) you wanted to gather information from a network of weather sensors, without anything else seeing.

So, “secure network segmentation, defined at the application level” might be a good description. Clearly that is too long, so we have microsegmentation instead. I’m not sure why the “micro” is there, other than to make it sound a bit more snazzy.
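For the curious, here’s roughly how the “services A, B and C” example above might be written down in one popular incarnation, a Kubernetes NetworkPolicy. The names are invented for illustration, and note that a policy like this only covers who may reach whom; the encryption part would usually come from something layered on top, such as mutual TLS in a service mesh.

```yaml
# Hypothetical sketch: only service-b and service-c may talk to service-a;
# everything else in the namespace is kept out.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-b-and-c-to-a           # invented names, for illustration only
  namespace: weather-sensors
spec:
  podSelector:
    matchLabels:
      app: service-a                 # the policy protects service A
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: service-b         # B may connect
        - podSelector:
            matchLabels:
              app: service-c         # C may connect
```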

It’s a good idea. Which begs the question, as so often in tech, why didn’t it happen before? The answer is, it kind of did, but it kind of wasn’t exactly right, for several reasons. First, as mentioned, network segments were traditionally configured at the network level. By network engineers.

Which kicks off a whole raft of issues, not least that it means you need to talk to network engineers. Don’t get me wrong, network engineers are great people, some of my best friends are network engineers. But having to communicate with someone else is always going to slow things down.

And what if you want to change your mind? We’re in an age where everything is supposed to happen fast and iteratively, making up new things every day. In principle, you can do things faster if you give more of the work to the people building applications (the developers), and let them make the decisions.

It’s a good theory, but it falls down on two counts: one, that the developers will know what to do; and two, that all infrastructure today is magically simple and requires little intervention. I’ve talked about the latter in a previous bulletin (TL;DR “It’s very complicated and can go wrong”).

As for the former, the microsegmentation approach allows network engineers, and indeed security experts, to set what is possible before giving away the keys to the kingdom: they can define usage policies, or “guardrails” in today’s terminology, to ensure developers don’t break anything.
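As a hedged sketch of what such a guardrail might look like, in the same Kubernetes terms as above (again, the names are invented): the security team could apply a default “deny everything” policy to a team’s namespace, so that the only traffic that flows is traffic a developer has explicitly allowed with a policy like the earlier one.

```yaml
# Hypothetical guardrail: deny all traffic in the namespace by default.
# Developers then open only the specific routes their applications need.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-apps               # invented namespace, for illustration only
spec:
  podSelector: {}                    # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```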

Taking a step back, we can perhaps see why microsegmentation is emerging as a seemingly new thing, given how it could, notionally, have existed decades ago. First, it is as much about the eroding boundaries between technologists as anything. As with Network Function Virtualisation (NFV, also discussed before), our digital mechanisms are increasingly software-based.

This means they can be dealt with by developers, though it is not necessarily the case that they should be. Just because everything can be programmed, that doesn’t mean that the group referred to as ‘programmers’ are the best placed to direct everything. We wouldn’t imagine this to be true for healthcare decisions; nor is it the case for network configuration.

We may be arriving at a juncture where most things can be software driven, but are also reaching a point where software becomes the basis of communication, and not the consequence. If this sounds like I am going off on one, I am probably not explaining myself properly so let me try harder.

We need all the groups of people we have: security and infrastructure expertise, business and financial acumen, science and artistic endeavour. Software will infuse all such things; indeed, it already does, but in a disjointed way. But what if I, as (say) a healthcare person, could set out what I wanted and express it in terms that could then be programmed?

What this leads to is a common way of expressing needs. The whole field of requirements management has emerged over decades due to the complexity of translating needs into code, and we are a long way off simplifying that. A start, however, is to recognise our applications as transient, as a way of doing something right now, rather than as an end in themselves.

I will leave this here. If you have read this far, I would welcome your feedback. One thing is for sure: we are going to need a whole bunch more words.

Thanks for reading, Jon

P.S. Did anyone notice there was no newsletter last week? I didn’t, until I did…


Bulletin 22 July 2019. On air gaps, cloud hopper hackers and breaking Google

The pessimist’s guide to the future

I start this week with a couple of resolutions. Or rather, aspirations — they only become resolutions when I actually do them. So:

Resolution 1: to start collecting links about what I read, so I can actually connect to it and turn my idle opinions into commentary.

Resolution 2: turn my idle opinions into commentary.

Meanwhile, we are stuck with idle opinions, but with a bit of solidity behind them. This week, I noticed an article about the dangers of being optimistic about AI vs jobs — the optimist’s idea is that automation eats away the dull stuff first, leaving the more interesting stuff which has to be a good thing, right?

I’ve written from an optimist’s perspective myself, but I have taken a different line. Many jobs only exist because of our desire to feel useful, to get value from one another and to cope with complexity. On this latter note, to wit:

I was chatting with Adrian (from whom I sublet my office) this morning, about the amount of waste in corporations — or to put it another way, the pointlessness of meetings. People spend (perhaps) a third of their time sitting round tables or on conference calls, much of which gets in the way of getting things done.

So, yes, complexity. We spend time trying to understand things, trying to get other people on board, to agree a shared approach — all of which gets in the way of actually doing something. I read once that the most successful people are those who make decisions, even if the decisions are wrong, as indecision is a bigger killer of progress.

In other words, we already waste a great deal of time doing jobs, and somehow we get away with that. I have a funny feeling that even if we automated a raft of things we currently do, we’d just carry on working (potentially more) inefficiently to fill the time we had saved.

Actually, I’m not sure that’s particularly optimistic at all. Anyway, tune in next week for an altogether more considered and link-strewn bulletin. I might even draw some pictures.

Thanks for reading, Jon

