Bulletin 13 September. On the depth of learning and embracing frequent failure

New wine in snake skins

One of my favourite books was, and remains, The Voyage of the Dawn Treader by C. S. Lewis (yes, I cried when Reepicheep went over the sea). And one of my favourite passages is when Eustace is turned into a dragon. From a note (I think) he learned that, to become human again, he needed to shed a dragon skin (like a snake skin) and bathe in a certain pool.

So, he tried. He shed a skin, but it wasn’t enough. So he shed another, then another, bathing each time, but each time he emerged a dragon. Somewhat unexpectedly, Aslan the lion happened upon him: it’s not working, said Eustace. That’s because you are doing it wrong, said Aslan, who took out a huge claw and cut through Eustace’s many skins like an onion. That time, he emerged from the pool a boy again.

A couple of times in my career, I have been quite convinced I know it all… working back from the punchline, only to discover, quite categorically and without mercy, that I seriously do not. The first came just after I had been over-promoted to the point of deep stress, when working as an IT manager for a subsidiary of Alcatel.

Alongside the coping strategies and very real learning I was picking up on the job (I have, essentially, dined out on that experience ever since), I came to the conclusion that I had this management thing nailed: I doubted there was anything else to learn about keeping saucers on sticks, running meetings, facilitating, time management or anything else administrative.

I then joined Admiral Management Services, a company whose ethos gave short shrift to any such idea of grandeur. Yeah, whatever, was the attitude: take some minutes and be a good boy, would you? Learning the hard way (failing fast and frequently), I unpicked everything I thought I knew and re-knitted it into some semblance of genuine best practice. Which I have also dined out on ever since.

The next big moment of big-headedness came a couple of years into my analyst career, when (at the height of the dot-com boom) I thought it a really good moment to set up on my own. The lows of the dot-bomb followed almost immediately, mirroring both my feelings of utter incompetence and my bank balance. So many lessons learned, not least cooking on a shoestring.

I didn’t mean to say all that: I was only going to talk about my memories of being in (what felt like) financial difficulty: any money we had was always in the wrong place, cheques bounced and bills went unpaid, with banks gleefully adding their fees to any debts incurred. I wasn’t going to say that either: I was only going to make the point that getting back on track didn’t just take extra effort: it took extra effort beyond what I thought extra effort meant at the time.

Like Eustace, sometimes the problem goes far deeper than we have the ability to understand. Not least in questions of learning, particularly when we come to it from a position of knowledge. Surely, people understand what is being discussed, we say, or can understand it as long as we explain it correctly. Even if they are a bit behind. And so, in areas of so-called ‘new thinking’, we can have whole conversations without realising that what we are saying is of very little relevance.

I’m coming to think this is the case for my own current area of specialism, DevOps. Even as I discuss how to make it work better (a.k.a. ‘to scale’), I have my good friend (and practitioner) Andy’s words ringing in my head: that nobody is really doing it, they just say that they are. Perhaps to get the analysts off their backs. It’s not just DevOps: the reason we can keep saying the same things about best practice, webinar after webinar, year after year, is that people still don’t get it.

To wit. I’m not sure what the answer is, but based on my own experience, perhaps part of it is to recognise the fact that we are a lot less mature than we would like, as organisations and as people. And we need to deal with this as Eustace did, not superficially but digging really deep, getting right down to the base, beneath layers and layers of traditional practice.

Not to do so will cause us to reinvent whatever we are talking about, for a number of reasons. First to avoid boredom — you can only hear people bang on about the same thing so many times, before cognitive filters push it into the background. Second because it loses its effectiveness as the world moves on. And third, because lovely marketing and PR people want something new to talk about, in the name of differentiation or thought leadership.

These days I have come to realise that ‘knowing it all’ is a false summit, a thinking person’s Tower of Babel. Rather than embracing my own inadequacy and giving up, I’ve found a joy and freshness in learning: in essence, I’ve come to terms with my own, valid feelings of imposter syndrome. Perhaps we could all do with recognising that we don’t all get it, neither at a superficial nor deep level.

There is no shame in this, as such an admission is the first step in actually working out what the heck is being discussed. Like Eustace, we can all benefit from digging particularly deep in terms of what we don’t know, understanding the problem we are trying to solve before applying the latest iteration of solutions.

Thanks for reading, Jon

P.S. I said these bulletins would be more factual from now. This one isn’t but I’m on holiday, so sue me.

Bulletin 23 August. Automating operations and what’s in a name?

Left brains and right brains

At a recent developer event, I happened to participate in a discussion about the role of what we call “operations”, that is, managing and running IT systems, networks, storage and all that. The view, universally it appeared, was that operational IT was on the brink of being automated away, therefore rendering such roles redundant. Discussion turned to the fact that other roles would be available, so nobody needed to worry about jobs.

Which was nice, but irrelevant. Because IT infrastructure is not going anywhere. It only occurred to my flummoxed self some way through the debate that the quite senior, enterprise-based people involved were largely on the development side. I have been characterising these as the right-brain creatives, largely directed by innovation, aspiration and other uplifting motives.

Meanwhile, on the operational side are the left-brain types, who need to work with reality and whose fault it will be if things start to fail. And fail they do, for all sorts of reasons, from power shortages to software bugs, and everything in between (it is completely relevant that the most famous early computer ‘bug’ was a real insect: a moth found jammed in a relay).

Wait, I hear the visionaries say. It’s all about orchestration now. Work things out up front, write it in configuration files, throw it at the pristine racks of servers and it will just, you know, happen. That’s all very well, and I’ve been to the co-located data centre facilities with minimal staff where, indeed, it does all seem to be auto-magical.

One ramification of this approach is that the configuration work still has to happen, even if specified in YAML (stands for YAML Ain’t Markup Language. Don’t ask). Such work can be given to Site Reliability Engineers, a new super-race of individuals who are absolutely bloody brilliant at defining infrastructure so it will just work. I’m pretty sure that’s the spec.

Or it can be allocated to developers, who, as we all know, love (and I mean LOVE) to spend their time doing things that aren’t development. “If only I could spend more time defining my target infrastructure,” said no developer I have ever spoken to. Okay, sorry, I’m being glib. The point is, however, that the job has to happen, even if the interface moves.
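
To make the point concrete, here is a rough sketch of the kind of configuration work I mean. It is illustrative only (plain Python plus the PyYAML library, with invented service names and fields, not any real orchestrator’s format), but somebody still has to write it, review it and keep it honest:

import yaml  # PyYAML; assumed installed (pip install pyyaml)

# An invented service definition, of the sort somebody still has to write,
# review and keep up to date, whatever their job title.
SERVICE_SPEC = """
service: weather-api
replicas: 3
resources:
  cpu: 500m
  memory: 256Mi
allowed_callers:
  - weather-ingest
  - reporting
"""

REQUIRED_FIELDS = {"service", "replicas", "resources", "allowed_callers"}

def check_spec(text: str) -> dict:
    """Parse the spec and complain about anything missing: the unglamorous
    work that 'throw the YAML at the racks' tends to gloss over."""
    spec = yaml.safe_load(text)
    missing = REQUIRED_FIELDS - set(spec)
    if missing:
        raise ValueError(f"spec is missing fields: {sorted(missing)}")
    return spec

print(check_spec(SERVICE_SPEC))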

There’s more. In terms of day-to-day, keeping-the-lights-on operations, much of the effort goes into dealing with the consequences of poor or incompatible decisions. These can be, variously, badly architected solutions; applications being used for things they were never intended for; prototypes becoming live products because of shortening timescales; and so on.

Many such challenges are as likely to happen in software as in hardware. Ops people can suck their teeth for a reason: it’s because they will remember the last time a certain thing was tried, and how badly it went for everyone involved. If you speak to someone shaking their head and looking negative, it isn’t because they were born that way, but because they have learned to be so.

We do have some potential hope coming from the latest, greatest trends in tech: I’m speaking about containers, microservices and all that (for the uninitiated, this means defining applications as a set of highly portable modules which can be run anywhere). With such models comes standardisation of everything ‘below’, which leads to less incompatibility, etc etc.

However these ideas have a way to go. The Kubernetes container orchestration platform may become the de facto standard, for example; but it doesn’t yet have everything it needs to support storage, networking or security, nor is there a generally agreed approach to building a Kubernetes application. We may be standardising one thing, but the rest is still very much to be dealt with.

And then, perhaps all we will have done is shifted the problem. With Kubernetes you can build fantastically powerful, yet complicated applications, bits of which could be running anywhere. And so, guess what, people will do just that, even when it is completely the wrong thing to do. And they will do it badly.

And when they do, somebody will need to be there to pick up the pieces, to work out where things stopped working, to isolate the problem and to feed back information that can prevent it from happening again. Who knows what they will be called, these clever people: something like “operations”, perhaps. No doubt in five years’ time, rooms of experts will tell us that such roles are on the brink of being automated away.

And round we shall go again.

Bulletin 9 August 2019. On microsegmentation and guardrails, and requirements as code

We are all surgeons now. Sorry, I mean programmers

Hello, and welcome to this week’s bulletin.

I’ve been spending quite a lot of time talking to people about microsegmentation recently. I didn’t mean to; however, it would appear that purveyors of microsegmentation products are like buses: nothing for ages, then three come along all at once.

What is microsegmentation, I hear you ask. For a start, it’s a word that means a great deal to those who know what it means, and just about nothing to anyone who doesn’t… whenever a new term comes along, it never ceases to amaze me how easily it gets used.

“Trans-notionalism is all the rage,” we might say, within minutes of having heard the term and, potentially, whether or not we actually know what it means ourselves. I’m not sure if this is driven by disdain, fear or ignorance, but it certainly isn’t a desire to help others get on board.

Anyway, microsegmentation. A word which could mean many things, depending on context. If I say ‘networking’ it starts to shimmer in front of the eyes — oh, right, yes, network segmentation at a micro-level? That makes sense.

Which is kind of right, but then again, not really. Microsegmentation is actually about network security: you can define specific routes around a network, which (I guess) you call segments. You define them, generally, at the application level, which means you don’t have to reconfigure the network each time.

So you could, say, declare that services A, B and C could talk to each other. The notion then implies that nothing else can see (via encryption) what happens between the three. Quite handy if (say) you wanted to gather information from a network of weather sensors, without anything else seeing.
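
To make that a little more concrete, here is a toy sketch in plain Python (invented service names, and emphatically not any vendor’s actual API) of what such a policy boils down to: an allow-list of which services may talk to which, with everything else denied by default.

# Toy microsegmentation policy: which services may talk to which.
# Invented names; real products add identity, encryption and enforcement.
ALLOWED = {
    ("weather-sensor", "weather-ingest"),
    ("weather-ingest", "weather-api"),
    ("weather-api", "reporting"),
}

def may_talk(source: str, destination: str) -> bool:
    """Allow a connection only if the (source, destination) pair is listed."""
    return (source, destination) in ALLOWED

assert may_talk("weather-sensor", "weather-ingest")
assert not may_talk("billing", "weather-ingest")  # everything else: denied by default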

So, “secure network segmentation, defined at the application level” might be a good description. Clearly this is too long, so we have microsegmentation instead. I’m not sure why the “micro” is there, other than to make it sound a bit more snazzy.

It’s a good idea. Which begs the question, as so often in tech, why didn’t it happen before? The answer is, it kind of did, but it kind of wasn’t exactly right, for several reasons. First, as mentioned, network segments were traditionally configured at the network level. By network engineers.

Which kicks off a whole raft of issues, not least that it means you need to talk to network engineers. Don’t get me wrong, network engineers are great people, some of my best friends are network engineers. But having to communicate with someone else is always going to slow things down.

And what if you want to change your mind? We’re in an age where everything is supposed to happen fast, and iteratively, while making up new things every day. Ideally, you can do things faster if you give more of it to the people building applications (the developers), and let them make the decisions.

It’s good theory, but it falls down on two counts: one, that the developers will know what to do, and two, that all infrastructure today is magically simple, and requires little intervention. I’ve talked about the latter in a previous bulletin (TL;DR “It’s very complicated and can go wrong”).

As for the former, the microsegmentation approach allows for network engineers, and indeed security experts, to set what is possible before giving away the keys to the kingdom: they can define usage policies, or in today’s terminology “guardrails”, to ensure developers don’t break anything.
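
Continuing the toy sketch from above (still invented names, still nobody’s real product), a guardrail is simply a policy that gets checked before a developer’s proposed rule is accepted:

# Toy guardrails, set by the network and security folk before the keys are handed over.
# Each is a description plus a check that any proposed rule must pass.
GUARDRAILS = [
    ("nothing from the internet may reach the customer database",
     lambda src, dst: not (src == "internet" and dst == "customer-db")),
    ("only the ingest service may talk to the sensors",
     lambda src, dst: dst != "weather-sensor" or src == "weather-ingest"),
]

def propose_rule(source: str, destination: str) -> None:
    """Accept a developer-proposed allow-rule only if every guardrail permits it."""
    for description, check in GUARDRAILS:
        if not check(source, destination):
            raise PermissionError(
                f"rule {source} -> {destination} breaks guardrail: {description}")
    print(f"rule accepted: {source} -> {destination}")

propose_rule("reporting", "weather-api")   # fine
# propose_rule("internet", "customer-db")  # would raise PermissionError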

Taking a step back, we can perhaps see why microsegmentation is emerging as a seemingly new thing, given how it could, notionally, have existed decades ago. First, it is as much about the eroding boundaries between technologists as anything. As with Network Function Virtualisation (NFV, also discussed before), our digital mechanisms are increasingly software-based.

This means they can be dealt with by developers, though it is not necessarily the case that they should be. Just because everything can be programmed, that doesn’t mean that the group referred to as ‘programmers’ are the best placed to direct everything. We wouldn’t imagine this to be true for healthcare decisions; nor is it the case for network configuration.

We may be arriving at a juncture where most things can be software driven, but are also reaching a point where software becomes the basis of communication, and not the consequence. If this sounds like I am going off on one, I am probably not explaining myself properly so let me try harder.

We need all the groups of people we have: security and infrastructure expertise, business and financial acumen, science and artistic endeavour. Software will infuse all such things; indeed, it already does, but in a disjointed way. But how about if I, as (say) a healthcare person, could set out what I wanted and express it in terms that could then be programmed?

What this leads to is a common way of expressing needs. The whole field of requirements management has emerged over decades due to the complexity of translating needs into code, and we are a long way off simplifying that. A start, however, is to recognise our applications as transient, as a way of doing something right now, rather than as an end in themselves.

I will leave this here. If you have read this far, I would welcome your feedback. One thing is for sure: we are going to need a whole bunch more words.

Thanks for reading, Jon

P.S. Did anyone notice there was no newsletter last week? I didn’t, until I did…

Bulletin 22 July 2019. On air gaps, cloud hopper hackers and breaking Google

The pessimist’s guide to the future

I start this week with a couple of resolutions. Or rather, aspirations — they only become resolutions when I actually do them. So:

Resolution 1: to start collecting links about what I read, so I can actually connect to it and turn my idle opinions into commentary.

Resolution 2: turn my idle opinions into commentary.

Meanwhile, we are stuck with idle opinions, but with a bit of solidity behind them. This week, I noticed an article about the dangers of being optimistic about AI vs jobs — the optimist’s idea is that automation eats away the dull stuff first, leaving the more interesting stuff, which has to be a good thing, right?

I’ve written from an optimist’s perspective myself, but I have taken a different line. Many jobs only exist because of our desire to feel useful, to get value from one another and to cope with complexity. On this latter note, to wit:

I was chatting with Adrian (from whom I sublet my office) this morning, about the amount of waste in corporations — or to put it another way, the pointlessness of meetings. People spend (perhaps) a third of their time sitting round tables or on conference calls, much of which gets in the way of getting things done.

So, yes, complexity. We spend time trying to understand things, trying to get other people on board, to agree a shared approach — all of which gets in the way of actually doing something. I read once that the most successful people are those who make decisions, even if the decisions are wrong, as indecision is a bigger killer of progress.

In other words, we already waste a great deal of time doing jobs, and somehow we get away with that. I have a funny feeling that even if we automated a raft of things we currently do, we’d just carry on working (potentially more) inefficiently to fill the time we had saved.

Actually, I’m not sure that’s particularly optimistic at all. Anyway, tune in next week for an altogether more considered and link-strewn bulletin. I might even draw some pictures.

Thanks for reading, Jon

Bulletin 12 July 2019. On framing, specialisation and provenance as code

Crunching in the shoes of others

I was starting to think I was becoming a bit of a one-trick pony with this bulletin. You know, a bit naive, over-optimistic, potentially ranty and vague (or in the immortal words of Neil Ward-Dutton, not very crunchy). All of these things may be the case but I recorded a podcast this week that brought it all into focus.

In case I haven’t mentioned it, I’m the lucky host of a podcast (https://gigaom.com/podcast/voices-in-devops/) for GigaOm called “Voices in DevOps”. Lucky not only because it is a topic, or field, close to my heart; but also because I learn so much from the guests, each of whose job it is to answer one simple question: what is going to make DevOps work in the enterprise?

Some guests take a view based on their own background: so, for example, I’ve discussed a product management mindset on a couple of occasions, and other times the conversation has turned to data and measurability, or collaborative best practice. Meanwhile I’ve been able to test out my own theories, for example around the Guru’s dilemma.

On a couple of occasions, such as this week, the debate has turned to how provenance influences how the situation is framed: essentially the meta-level of the above paragraph. So, for example, if you have a room full of people with a development background, they may take the view that everything can be programmed, and therefore should be.

On the first point, mathematically, they are not wrong. One of the late, great Alan Turing’s insights was that any sufficiently powerful formal system can be used to express any other: in other words, you can express anything you like as a program. The second point leads to the enticing “as-code” notion, for example expressing infrastructure, security or test scripts as code.

Trouble is, just because something can be done in a certain way, that doesn’t mean it should. My guest this week (David Torgerson, of Lucid Software) warned against the danger of employing programming generalists, rather than specialists in other areas — such as, let’s say, database engineers. Yes, and indeed, you can express a database structure as code.
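
As a minimal sketch of what that can mean in practice, here is an invented two-table schema expressed entirely as code, using Python’s built-in sqlite3 module. Whether a programming generalist would design that schema as well as a database specialist would is, of course, David’s point.

import sqlite3

# An invented two-table schema, expressed entirely as code.
SCHEMA = """
CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total       REAL NOT NULL
);
"""

connection = sqlite3.connect(":memory:")  # throwaway, in-memory database
connection.executescript(SCHEMA)
connection.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada')")
connection.execute("INSERT INTO orders (customer_id, total) VALUES (1, 42.0)")
print(connection.execute(
    "SELECT name, total FROM orders JOIN customers ON customers.id = customer_id"
).fetchall())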

However, I was reminded of another conversation I had with Neil W-D, who has a knack of making things very crunchy very quickly. We were talking about how programs or process steps affect data and state, and vice versa, to the extent that the two become intertwined. “What, you mean like, ‘is it a wave or a particle?’” he asked. Yes, that’s precisely what I meant, if only I could have been crunchy enough.

In my experience, programmers do not always make brilliant database engineers and vice versa: actually, in my own personal experience, I’m happy to write programs but I find myself wanting to leave the room quickly if ever I am faced with a spreadsheet… which, yes, makes being a research analyst a challenge sometimes. And don’t start me off about expenses.

But, to David’s point, we need people of both types, and a lot of other types to boot. Horses for courses, different problems require different brains, teams should be cross-functional not only because we need a variety of skills but because we need a variety of minds (which is also the root of my thinking about needing to pay more than lip service to ideas around diversity).

Without talking too much about the topic at hand, DevOps’ blessing and curse is that it has been led by developers — it is DEVops and not devOPS at its origin. Which leads to another strange notion, that operations can be automated out of existence, which is clearly an idea from somebody who has never worked in a data centre (cf. yet another of these newsletters).

That’s all, really, apart from recognising our own preferred framing, and ensuring that we don’t see it as the only view. We can all benefit from walking in the shoes of others from time to time, or risk believing that they don’t need to exist.

Smart Shift: The truth is out there

In this section we go from loyalty points to virtual cash, following the rise of Bitcoin and its underlying technology, blockchain, and we look at its applicability to the music industry and other domains. “It’s not about the money,” says anyone who thinks it is about the money.

Thanks for reading.

Bulletin 6 July 2019. On why technology is not a journey, and lasagne

Warning: this gets a bit meta

There’s a game I play pretty much every time I see a technology-related announcement. Case in point, last week I saw something from a vendor which pretty much said, “Now with extra security!” (I am racking my brains as to which vendor it came from, but no matter, it was reasonably standard). To the game: turn the sentence around and see what thought processes that triggers — in this case, “You mean, before, it had less security? What the heck?”

To be fair, it’s not just me. Many years ago I was particularly taken by a cartoon involving the lasagne-eating Garfield, watching the ads on television. “New and improved! New and improved!” the ads blared out. “To think that, up till now, it was all old and inferior…” the cat opined.

Three dimensions spring to mind. The first is that, yes, it’s just advertising: each release has to be presented as better than before, otherwise people won’t buy it. The second, that the tech industry is changing and progressing, making improvements inevitable and welcome. And the third is that, yes, the previous generation of the product or service really was a bit rubbish.

To shift it up (or down) a level, the whole thing reflects the weight we put on the current narrative. “It’ll be nice when it’s finished,” I sometimes say, without actually joking: behind the humorous facade is a genuine belief that whatever technology has planned for us, this is not ‘it’. We are not there by a long chalk, yet, and strangely, we try to act as though we are.

Nor do I particularly believe that we are on a journey of some form, though it clearly does feel that way. A journey implies some kind of general direction and route; whereas the progression of technology is more like participating in the multi-dimensional bastard offspring of Cards Against Humanity and the Mousetrap Game. For sure, we are moving, but with no clear grasp of the rules, or what is around the next corner.

In large part, we accept the change that is happening to us; or we ignore it; or we wring our hands about it: the one option not available to us is to stop using it, for fear of being considered a luddite or for the simple reason that we quite like it, really. And thus we rely on more superficial narratives that carry us forward, that keep the money flowing, that help others get their heads around it all.

Don’t get me wrong, I’m feeling neither negative nor cynical. I am, however, feeling that we are acting as the passengers of complexity theory, focusing on the knowns as the unknowns are too much of a challenge. Some pretty deep questions remain, not only in terms of ethics or governance (which I bang on about often enough) but also, for example, on the nature of augmentation versus automation, the impact of insight on responsibility, the ability to game our natures, the alignment of interfaces and personality.

I could drill into each of these (indeed, ahem, watch this space) but more important is the fact that they should be part of the more general narrative, but they aren’t, not really.

[I recognise several weaknesses in this argument, given that I am of course looking for some kind of philosophical perspective on technological progress. First, of course, people are discussing these things: I’m just not party to those conversations. And second, what we might call the framing effect: another issue I’ve seen frequently, where someone stumbles upon a way of looking at things and then, post-epiphany, can’t understand why the rest of the world just won’t do the same BECAUSE IT’S REALLY IMPORTANT. But anyway…]

Simply put, we could be modelling the kind of outcomes we are trying to achieve from the use of tech. I’m reminded of the Security By Design lobby, which says grosso modo that we should be thinking about security needs at the very outset of creating something new. More broadly, I’m wondering whether, with a few flipcharts and post-its, we can get to a kind of “this-is-a-set-of-principles-we-can-all-adhere-to-by-design” model which goes beyond notions of security, trust, governance yadda yadda and gets closer to an alignment with who we are.

I don’t think the answers would be obvious or immutable. But at least we could have something to build towards, rather than the current acceptance that technology is something to be accepted as delivered, whether we like it or not. Often (case in point: social media) the answer will be both, but then at least we can make more informed choices.

Thanks for reading, Jon

Bulletin 28 June 2019. On software artefacts and why we need to relearn the basics, Part 17

Being cynical about cynicism

Where did it all go wrong? A reader and old friend called me out last week: “You have officially become a grumpy old geezer,” he told me, and quite right too.

Aspiring analysts take note, you can get an awfully long way with cynicism, which is right and proper given how much of the industry seems to be driven by naive optimism and assumptive hype.

Nonetheless, it can become more than a little wearing to be cynical all the time, like some end-of-bar bore who has seen it all before. “It’s just management,” one might say, or “It’s never going to work.” Etcetera.

There’s a fine line between pointing out the weaknesses in a position, and just being a bore. And besides, sometimes things are not as simple as they appear: generally, there will be a reason why the same things seem to come round again.

Often it is because they aren’t the same things, not quite, so it becomes a case of working out what the differences are. Microsoft had the tablet computer form factor nailed long before Apple, for example… but it didn’t have Jony Ive (and a thousand others) to smooth the lines and bring in the requisite minimalism.

A second reason (which I have covered before) is that every generation needs to have its own epiphanies. And thirdly, sometimes, something happens that causes everything to have to be sorted out from scratch.

Example: management of software artefacts. I was at the DevOps Enterprise Summit this week, and was lucky to be party to a number of fascinating conversations… one of which revolved around this topic. I know, right?

In my first job, almost 32 years ago, I was a programmer, which meant writing lines of code. Rather than having everyone doing their own thing in a corner, all that code was managed using a tool called SCCS — source code control system, if I recall correctly.

(Did I say it was 32 years ago yet?) But it makes sense (right?), to have a place to manage all that code-y stuff, otherwise the situation would be a bit chaotic, right? When I left my programming job, I became a software configuration management specialist, which was all about this stuff. Call me dull but I quite like it.

Quiet at the back. And fast forward to 2019, when a conversation with major financial institutions is about how important it is to manage source code… and how bad they are at it. They never meant to be, but the problem just, kind of, snuck up.

So, we have choices at that point. Call them all idiots, which is of no help whatsoever (and first remove the log from thine own eye). Listen cynically to the epiphanies, in the most patronising way possible. Or indeed, think about the root causes of what made a bunch of smart people end up in a singularly un-smart situation… which might help do something about it.

I have been thinking about them, and my money is on… the Web. What started as just a bit of markup language (that’s the ML in HTML) has become the backbone of many software applications and services.

In turn, just as nobody bothered to manage web pages as though they were code, so we ended up with a massive set of programs, none of which anyone thought would need to be managed as software. And now, here we are, where that point is obvious but knowing it doesn’t solve the problem.

Sure, let’s be cynical, but (as with many things happening today) it was a lone, quiet voice that saw it coming in advance.

All the best, Jon

P.S. Apologies (even if none are needed) for the brevity of this edition but I am in the midst of moving house. Analogies abound, if only I had time to write them all down.

Bulletin 21 June 2019. The dark side of progress and the case for industry-scale ethics

Beyond the echo chamber

We are in the fourth industrial revolution, so we are told. Disrupt or be disrupted. Jobs will be lost. No they won’t, but they will change. Kids are smarter than their parents. Science is always right. Resistance is futile.

Some of this may be true — I don’t think anybody meant to unleash the transistor onto the world as a bad thing, any more than I believe that the designers of Facebook algorithms or Twitter streams planned to unlock such bias, or such anger. Nope, no siree, I don’t think any such group thought about such stuff at all. Which is the point.

I have a theory about this, by the way, one which I hope is objective… though objectivity seems in very short supply in the current narrative. We are all driven by agendas, complacencies, instincts and I spend a certain amount of time pondering what my own are, and how they will (or do already) manifest.

Back to the theory. The technological centre of gravity lay first with the governments and researchers, then with the corporations, then with the young and forward thinking… and then, i.e. now, with the generations and demographics that were never consulted about any of it.

And they are coming back in spades. There’s a local Facebook group I am party to, which (with no sense of irony) is called “Local Town for Local People.” It is renowned for people complaining about things, often the same things (potholes, bad driving, the lack of respect…); it also has some great threads about old photos.

In said group, voices that might be considered ‘progressive’ are in a minority. I don’t believe this is because the thousands-strong group overly respects a certain sub-group of the town; rather, it is representative of the actual conversations that, until recently, happened behind closed doors, in pubs and on street corners.

Welcome to the real world, progress. And welcome to the backlash against traditional, top-down behaviours from anyone in power: if everyone has a voice, then you’d better start listening to it. In the past, charismatic leaders could get lucky by aligning with the zeitgeist; today, anyone can do it if they are prepared to listen.

I’m not a super-great fan of jingoistic populism, but nor am I an advocate of complacency or assumptiveness. It probably is the case that the nature of democracy is changing, as opinions can be gauged (and people engaged) at a much finer level of granularity than before. And the debate, as nasty, bully-laced and mob-ruled as it can become, is taking place in public.

But still, the creators of technology continue as if all that is nothing to do with them, as if progress was unassailably a force for good, as if the next set of superpowers the industry was about to unleash were all about the positive consequences, and let’s not worry about the negatives, shall we?

I’m not sure I’m any better, for the record. I might bang on about governance and the like, as though I actually care, but do I really? Am I no better than someone in the alcohol industry or gambling industry saying, yes, we really think people need to be careful with this stuff, but you know, personal responsibility eh?

And this is one area where I need to recognise the inherent bias in the systems we create. In driving technology business through positive case studies, we create an echo chamber of our own, in which the positives fill out the narrative with only a little bit of space left for what might go wrong.

I’ve talked about governance by design before, but perhaps we need to go further than that, to a notion of industry-scale culpability. I’m not saying the tech industry is inherently bad; I am saying that this notion that it is inherently good is wearing thin.

I’m not sure what the answer is; but I do know that the notion of ethics is largely consigned to individual efforts, and not to mega-trends (the hammering that Facebook is currently getting is the exception, not the norm). As an industry we have great power in our hands and with great power comes great responsibility.

Smart Shift: Is that a drone in your pocket?

In this section I cover all things autonomous, from driverless trucks to drones. I’m looking forward to self-driving pizza: you heard it here first.

Thanks for reading, Jon

Bulletin 14 June 2019. Why do we assume the Internet will ‘just work’? 

Pies, damn pies and charts

As just about everyone I know knows, I’m in a band (https://www.facebook.com/PluckingDifferent/). I never meant to be, it just kind of evolved, out of a ukulele-based collective that friend and fellow podcaster Simon (see below) co-founded. The “never meant to be” element is relevant, as I find I’m learning new things, not least how to sing in front of large audiences, but also the notion of an audience of one.

Performing is by its nature a little bit (okay, a lot) narcissistic: delivering artistic and creative things into a vacuum is no fun for anybody. At the same time, it is a complete lottery: sometimes you get the adrenaline rush of a huge, animated crowd, and sometimes you get what a friend refers to as a ‘paid rehearsal’ in which nobody is paying much attention.

At times like that, I have found, sometimes just one person will be actually enjoying what the band is doing… at which point, in my head at least, they become the audience. I can think of a gig we played in Swindon a couple of years ago, where one chap was nodding, singing and enjoying every moment. I have no idea who he was and have never seen him since, but for two hours he was who I was singing and playing to.

And that was enough, no, more than that: it’s a real privilege to offer the gift of a tune or two, and to have an appreciative smile in return. And… and now I find myself wondering how to relate this point to technology. It all made so much sense half an hour ago, when I started writing this thing.

In my line of business, over the past decades I have spent quite a lot of my working life answering questions for people. I subscribe to the Einstein principle — I might not always know the answer, but I probably know someone who does so I can find out (and often, the person asking may just need a bit of help to answer their own question).

I have also spent my days surveying, researching and writing about it all. And making plenty of mistakes along the way, and hopefully learning from them. One mistake is, of course, to take anecdotal evidence as something quantitative: a problem exacerbated by the social echo chambers we inhabit.

I’ve been seeing this a lot recently, as I have consorted with the kids from the cooler end of the tech spectrum — those we might call ‘cloud native’. For this group, notions of collaboration, iteration, of trying something and getting it out there, are the norm, to the extent that they may well be surprised to find that others don’t follow that straightforward and productive path.

Meanwhile, I still liaise with many people who work in what might loosely be called ‘enterprise IT’. In this group, linear thinking is the most likely approach to succeed and it can be very hard to convince people otherwise. I remember a corridor conversation at a large government department, where I singularly failed to convince a colleague why waterfall approaches weren’t always going to be the best option.

Perhaps never the twain will meet, perhaps not; trouble is, if you look at IT trends as a whole, you will end up with an amalgam of both, or a view from one side or another. It is almost impossible to get a high-level picture of how, in many large organisations, the mind might be willing but the body is weak; or how individuals may have the right idea, even if their teams and groups are still lagging.

Which brings me to the audience of one. Again and again, I see research telling organisations what they should be like, rather than engaging with them individually and determining what might work for them. It’s the dark secret of consulting: the ‘thought leadership’ may tell decision makers to “disrupt or die” but the day to day activities of change are much more mundane.

What can we get from this? That quantitative research will always be wrong? Not so fast. Research offers a useful starting point to have a conversation, to engage on points of similarity, to help someone get a view on the broader context of their industry or domain. At the same time, it should almost always be rejected in favour of finding out what the specific organisation is actually like.

Smart Shift: Augmentation in progress

In this week’s section I bring in Wim Wenders’ film ‘Until the End of the World (http://www.theguardian.com/film/2015/aug/27/until-the-end-of-the-world-review-wim-wenders)’ to introduce the topic of Augmented Reality. “What we learn from such films, in general, is that the fight remains between good and evil, between acts of immense inner strength to overcome situations of utter peril.” It was ever thus.

Extra-curricular: Getting Away With It Episode 7 – Post-Lechlade Festival Blues

Speaking of bands, podcasts and indeed, small audiences, Simon and I actually managed to bank another episode (http://bluecube.libsyn.com/getting-away-with-it-episode-7-post-lechlade-festival-blues) of our GAWI (as we have come to call it) podcast. It includes mention of headlining in a beer tent, barfing waltzers and strange phrases, Huawei and Google, scanning for skeletons in concrete, tech giants breaking industries… and a poem from Simon.

Thanks for reading.

Jon

