Bulletin 9 August 2019. On microsegmentation and guardrails, and requirements as code

We are all surgeons now. Sorry, I mean programmers

Hello, and welcome to this week’s bulletin.

I’ve been spending quite a lot of time talking to people about microsegmentation recently. I didn’t mean to; it would appear that purveyors of microsegmentation products are like buses: nothing for ages, then three come along at once.

What is microsegmentation, I hear you ask. For a start, it’s a word that means a great deal to those who know what it means, and just about nothing to anyone who doesn’t. Whenever a new term comes along, it never ceases to amaze me how easily it gets bandied about.

“Trans-notionalism is all the rage,” we might say, within minutes of having heard the term, and potentially without knowing what it means ourselves. I’m not sure if this is driven by disdain, fear or ignorance, but it certainly isn’t a desire to help others get on board.

Anyway, microsegmentation. A word which could mean many things, depending on context. If I say ‘networking’ it starts to shimmer in front of the eyes — oh, right, yes, network segmentation at a micro-level? That makes sense.

Which is kind of right, but then again, not really. Microsegmentation is actually about network security: you can define specific communication paths across a network, which (I guess) you call segments. You define them, generally, at the application level, which means you don’t have to reconfigure the network each time.

So you could, say, declare that services A, B and C can talk to each other. The notion then implies that nothing else can see what happens between the three (the traffic is encrypted). Quite handy if, say, you wanted to gather information from a network of weather sensors without anything else seeing.
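To make that concrete, here is a minimal sketch in Python of what an application-level segmentation policy might look like. The service names and the policy format are entirely hypothetical; real products each have their own way of expressing this, but the shape is usually the same: declare the allowed flows, deny everything else.

```python
# A minimal sketch of an application-level segmentation policy.
# Service names and the policy format are hypothetical, for illustration only.

ALLOWED_FLOWS = {
    ("sensor-gateway", "aggregator"),   # weather sensors feed the aggregator
    ("aggregator", "weather-db"),       # the aggregator writes to storage
}

def connection_allowed(source: str, destination: str) -> bool:
    """Permit traffic only between explicitly declared pairs of services."""
    return (source, destination) in ALLOWED_FLOWS

# Anything not declared is denied by default.
assert connection_allowed("sensor-gateway", "aggregator")
assert not connection_allowed("random-service", "weather-db")
```

The point being that the policy lives with the application rather than in the switches: changing it is a code change, not a network reconfiguration.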

So, “secure network segmentation, defined at the application level” might be a good description. Clearly this is too long, so we have microsegmentation instead. I’m not sure why the “micro” is there, other than to make it sound a bit more snazzy.

It’s a good idea. Which raises the question, as so often in tech: why didn’t it happen before? The answer is, it kind of did, but not quite in the right way, for several reasons. First, as mentioned, network segments were traditionally configured at the network level. By network engineers.

Which kicks off a whole raft of issues, not least that it means you need to talk to network engineers. Don’t get me wrong, network engineers are great people, some of my best friends are network engineers. But having to communicate with someone else is always going to slow things down.

And what if you want to change your mind? We’re in an age where everything is supposed to happen fast and iteratively, with new things being made up every day. Ideally, you can do things faster if you give more of it to the people building applications (the developers) and let them make the decisions.

It’s good in theory, but it rests on two assumptions: one, that the developers will know what to do; and two, that all infrastructure today is magically simple and requires little intervention. I’ve talked about the latter in a previous bulletin (TL;DR: “It’s very complicated and can go wrong”).

As for the former, the microsegmentation approach allows network engineers, and indeed security experts, to set what is possible before giving away the keys to the kingdom: they can define usage policies, or in today’s terminology, “guardrails”, to ensure developers don’t break anything.
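As a sketch of the idea, again in Python and with made-up names: the experts publish the guardrails once, and any flow a developer asks for is checked against them before it takes effect.

```python
# A sketch of "guardrails": experts define the boundaries once, and
# developer-requested flows are validated against them. Names are made up.

GUARDRAILS = {
    "allowed_ports": {443, 8443},            # encrypted traffic only
    "forbidden_destinations": {"internet"},  # no direct egress
}

def validate_flow(port: int, destination: str) -> None:
    """Reject any developer-defined flow that breaks the guardrails."""
    if port not in GUARDRAILS["allowed_ports"]:
        raise ValueError(f"port {port} is outside the guardrails")
    if destination in GUARDRAILS["forbidden_destinations"]:
        raise ValueError(f"destination {destination!r} is not permitted")

validate_flow(443, "aggregator")    # fine, within the guardrails
# validate_flow(80, "internet")     # would raise: breaks both rules
```

Developers get to move fast within the boundaries; the experts only get involved when someone needs the boundaries themselves to change.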

Taking a step back, we can perhaps see why microsegmentation is emerging as a seemingly new thing, given how it could, notionally, have existed decades ago. First, it is as much about the eroding boundaries between technologists as anything. As with Network Function Virtualisation (NFV, also discussed before), our digital mechanisms are increasingly software-based.

This means they can be dealt with by developers, though it is not necessarily the case that they should be. Just because everything can be programmed, that doesn’t mean the group referred to as ‘programmers’ is best placed to direct everything. We wouldn’t imagine this to be true for healthcare decisions; nor is it the case for network configuration.

We may be arriving at a juncture where most things can be software-driven, but we are also reaching a point where software becomes the basis of communication, not just its consequence. If this sounds like I am going off on one, I am probably not explaining myself properly, so let me try harder.

We need all the groups of people we have: security and infrastructure expertise, business and financial acumen, science and artistic endeavour. Software will infuse all such things; indeed, it already does, but in a disjointed way. But what if I, as (say) a healthcare person, could set out what I wanted and express it in terms that could then be programmed?
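A toy sketch of what that might look like, with hypothetical names throughout: a need stated in domain terms, which tooling could then translate into something enforceable, such as the segmentation policies above.

```python
# A toy sketch: a requirement stated in domain terms, translated into
# something enforceable. All names and the format are hypothetical.

from dataclasses import dataclass

@dataclass
class Requirement:
    who: str         # the role expressing the need
    wants: str       # the information or capability needed
    constraint: str  # the condition that must hold

req = Requirement(
    who="care-team",
    wants="patient vitals from ward monitors",
    constraint="visible to the care team only",
)

def to_policy(r: Requirement) -> dict:
    """Translate the stated need into a segmentation-style policy."""
    return {"allow": (r.wants, r.who), "encrypt": True, "note": r.constraint}

print(to_policy(req))
```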

What this leads to is a common way of expressing needs. The whole field of requirements management has emerged over decades due to the complexity of translating needs into code, and we are a long way off simplifying that. A start, however, is to recognise our applications as transient, as a way of doing something right now, rather than as an end in themselves.

I will leave this here. If you have read this far, I would welcome your feedback. One thing is for sure: we are going to need a whole bunch more words.

Thanks for reading, Jon

P.S. Did anyone notice there was no newsletter last week? I didn't, until I did...