Bulletin
Posts published in Bulletin.
2018
Posts from 2018.
January 2018
01-08 – Untitled
Untitled
Nothing to Declare bulletin
Hi,
A happy new year to you!
As part of my quest to find a bit of light in the darkness, I’m starting a newsletter; I’d be very interested to know your thoughts.
Let me know if you don’t want to be added, and/or if you have any burning questions you’d like to see answered!
All the best, Jon
— — — — — —
Hi there,
Please find below the first in what I hope will be a long series of succinct and informative bulletins on the state of technology and its consequences. I’m sending this to you directly because I thought you might enjoy it, and because I value your opinion. Please do let me know any feedback you may have; that will help keep the ball rolling.
Following this direct email, I’ll be sending the next bulletin using MailChimp. If you don’t want to receive any future bulletins of this sort from me, please do let me know and I will not add you (or remove you, depending on when I get your reply).
Cheers, Jon
Nothing To Declare — travels in a connected world
5 January 2018. The Yin and Yang of Innovation and Governance
These are interesting times, whatever your political affiliations or wherever in the world you might be. In this context, technology is a two-edged sword — it holds both great promise and enormous risk. We can choose to be evangelists or doom-mongers, or we can simply recognise this dichotomy: for every healthcare breakthrough, there will be a fake news story, and so on.
It was probably ever thus — one can imagine the dawn of the iron age, when somebody chose to make a sword even as somebody else made a ploughshare. With each breakthrough comes a breakdown, an opportunity to exploit as well as enhance, and yet somehow we are still here; I remain optimistic that humanity as a whole will prevail, whatever the short-term challenges.
We don’t always make it easy for ourselves. Older companies struggle with innovation for a thousand reasons, leaving gaps for others to fill to sometimes dramatic effect. And meanwhile, our legal systems remain behind the curve, their multi-year, consensus-driven models rendered hopelessly inadequate by the pace of change. And technology is so complex, it can raise unexpected and massive challenges (such as the latest Meltdown and Spectre security flaws in computer chips).
To wit, this bulletin. As I write this, I am reminded of Alistair Cooke’s Letter from America, a weekly news broadcast which ran from 1946 to 2004. Cooke was always the observer, his role to enlighten. I stand more chance of achieving the latter than I do of matching his longevity: he died at 96, but I would be 110 by the time I finished if I kept going that long. I can only hope medical science has a few tricks up its sleeve.
So, what’s news?
2018 Predictions
I wish I’d thought of Kai Stinchcombe’s tagline on Medium, “I’m whatever the opposite of a futurist is.” I recently documented my top 5 2018 predictions as follows:
1. GDPR will be a costly, inadequate mess. No doubt GDPR will one day be achieved, but the fact is that it is already out of date, for one simple reason: we will consent to have our privacy eroded even more than it already is. Watch this space.
2. Artificial Intelligence will create silos of smartness. Integration work will keep us busy for the next year or so, even as learning systems evolve. Cf. this piece on The Register: Skynet it ain’t: Deep learning will not evolve into true AI, says boffin.
3. 5G will become just another expectation. But its physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.
4. Attitudes to autonomous vehicles will normalize. Attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right?
5. When Bitcoins collapse, blockchains will pervade. The principle can apply wherever the risk of fraud exists, which is just about everywhere. But this will take time.
6. The world will keep on turning. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.” Okay, this last one isn’t really a prediction, more an observable fact.
DevOps Automation report
Let’s be clear, it was always about what’s currently being labelled DevOps: if you can do things faster, test them and get them into production quicker, you can find out what you really need and move on. This shouldn’t be rocket science, but it is very hard for us humans to get our brains around. In this article I cite Barry Boehm, creator of the spiral model — I was surprised to find that it emerged in the ’80s, not the ’70s, but no doubt prototyping approaches have existed since the invention of the wheel.
Why do (inexpert) organisations think they are secure?
This is the first in a series of “unanswered questions” — you know, the ones that nag at you but never really get tackled. In this case it came from security expert Ian Murphy — “Why do companies with little or no real security experience think they know their environment better than anyone else?” I welcome any additional questions you may have.
Extra-curricular
In other news, over the break I was involved in a Christmas single which is raising money for mental health charities (I’m also in the video); I have a weekly podcast with my mate Simon; and alongside my writing, I have fallen madly in love with the piano so I have set two challenges for 2018: to finish a novel and to play Widor’s Toccata on the biggest church organ I can find. I’ve started a video blog on the latter if you want to follow my progress.
Cheers, and see you next time. Jon
01-12 – Newsletter
Newsletter
The irony of anti-hype articles
“Good article,” somebody said to me last week about a piece I had put together around Blockchain and the NHS. I had to make a confession — it was a reasonably standard effort. As I put it, “It’s the ‘not a magic bullet’ response to any ‘this is a magic bullet’ hype. It goes with ‘Tech X is dead’ vs ‘No it isn’t’, ‘It’s going to be more complicated than you think’, ‘Don’t forget the management aspects’ and other handle-turning articles.”
So, yes, got me. It’s a conundrum: when to say something out loud, even if it’s been said before? Indeed, I’ll go further than that — it’s one of the reasons why, a few years back, I arrived at a point where I felt I had nothing to say. The danger at such a point is to come out with something like “the end of history has arrived”, but for one, I didn’t, and for two, it’s usually rubbish.
Trouble is, in this hype-ridden industry we need the mundane perspectives. Some have built their careers on presenting the obvious as profound wisdom, but without it, we leave innovation to operate unchecked. On the upside, however, the pioneers of the Internet can neither claim full credit, nor shoulder all the blame, for the failure to deliver utopia. As I say in this article on the topic, “Let’s not be downhearted, rather, let’s recognise that no technology can enable us to transcend our discomfortingly complex, conflicted and ambivalent nature as a species.”
The mess that is GDPR
Moving to articles that were less straightforward to research and write, I said I’d be talking more about GDPR, a topic about which I feel strangely sad. It’s one of the few areas of legislation I’ve ever felt compelled to write to its creators about — the EU powers-that-be kindly sent a hand-signed letter to inform me how my views would be taken into account (no doubt I should be happy not to find out that I had in some way entered myself into a competition).
Why the long face, you might ask? Well. For the life of me, I cannot fathom how it will make our general lives better, at the same time as causing considerable cost to be borne across the world’s businesses. If the former were possible, the latter might be a price worth paying; as things stand, it is not. Still, we are where we are and for now, we will all have to suck it up. As one reader, Peter, points out, “Just one point. GDPR is already a mess and is likely to be used as an excuse for price hikes.”
Extra-curricular
There’s a new GAWI (Getting Away With It) podcast up, while largely non-technical it does venture into some of the above. I’ve configured a Raspberry Pi for use as part of my house-wide AirPlay music network, which is a darn sight cheaper than the alternatives. Piano practice continues — I’ll be updating the vlog. And meanwhile, I’ve been revising the synopsis of the novel I am writing this year, more soon!
Finally, I’ve not quite managed the step to MailChimp just yet, so for now this is still coming from my desktop. Any questions, feedback or requests to unsubscribe, let me know.
Until next time, Jon
01-19 – Nothing To Declare — travels in a connected world
Nothing To Declare — travels in a connected world
19 January 2018. Digital transformation — when specificity becomes too specific
First off, thank you to all who have engaged in conversation since I started sending out these newsletters. I have had some long chats and big reads on GDPR and Blockchain in particular, both of which I shall be following up on. Meanwhile, I’ve been thinking, and writing, about digital transformation. It’s not that I think it is all bunk, but rather, as the Irish adage goes, “If you want to get there, don’t start from here.”
A little definition can go a long way, but sometimes it can get in the way — we’ve all been in meetings (if you haven’t, you are the lucky one) where more time is spent trying to define some term, than actually getting on with making things happen. Case in point is digital transformation, which seems to spawn more discussion than any ‘technology trend’ of recent times. This does beg the question of whether something can really be a trend, if nobody can agree what it is… but that’s for another day.
If you’re looking to ‘do’ digital transformation, read this first
To get to the point, I’m not sure there’s any such thing as digital transformation. I set out my reasons why here: in summary, terminology matters not a jot, but the propensity to change is fundamental:
1. It’s all about the data — the term is just an ill-considered response to what we knew anyway, that we are in the information age.
2. Technology is enabling us to do new things — to continue the Sherlock-level insight, this really is enabling breakthroughs. Who knew?
3. We tend to do the easy or cheap stuff — trouble is, these breakthroughs happen just as often because we are lazy, as driven.
4. Nobody knows what the next big thing will be — which is where the varnish starts to peel. Won’t we just have to ‘transform’ again?
5. We are not yet “there”, nor will we ever be — which is enough to lead any strategist to a breakdown. This gig will never be done.
6. Responsiveness is the answer, however you package it — so our focus should be on ability to change. Common sense perhaps, but it isn’t happening.
On the upside, there’ll still be plenty of jobs
A good example of the digital hype, and in particular of point 4 above, is how we’re all going to be out of jobs (yes, everyone, from manual workers to lawyers, according to the University of Oxford). Here’s a summary of 10 reasons why nobody should worry about whether they will have something to do in the years to come:
- Because decisions are more than insights.
- Because we have hair, nails and teeth.
- Because we ascribe value to human interaction and care.
- Because we love craft.
- Because we value each other and the services we offer.
- Because we are smart enough to think of new things to do.
- Because complexity continues to beat computing.
- Because experience and expertise count.
- Because we see value in the value-add.
- Because the new world needs new skills.
The bottom line is that even as we automate certain manual activities, we lose neither the desire nor the propensity for work. We have evolved such that we see work as necessary: we derive satisfaction from doing it ourselves, and from sharing the fruits of our labours with others. Will jobs change? Well, yes, but how does this differ from the past 50 years?
Oh and finally, don’t even start me off on monetisation.
01-26 – Newsletter — whence cybersecurity and trust?
Newsletter — whence cybersecurity and trust?
It’s an author’s worst nightmare. You write a book, it’s complete and just about to be published. So you ask a few people for a review — it would be nice to get some positive words on the back and, frankly, you want a bit of a stroke. So you send off the manuscript to a trusted party… and they come back and say it’s all rubbish. Stop the world, go and sit in a darkened room and hope it all goes away.
And yes, it happened to me when I wrote a short book on information security architecture. The trouble with it is, it suggests that you need to create an architecture for information security: this approach flies in the face of (some) cyber-thinking, which starts from the point of view that any notion of perimeter security should be consigned to the past.
It’s not bad thinking, indeed, it is highly valid. Google have adopted an approach for their own internal systems that encrypts every communication, based on the assumption that no system can be secure. At the same time however, try walking into Google’s buildings and see how far you get. As with so many things in life, absolute positions generally end up tempered when they hit reality.
Cybersecurity best practice really only needs five rules.
1. It’s all about the data. Which means understanding it before you can protect it.
2. It’s a board-level choice. It can’t be delegated away, outsourced or otherwise.
3. It’s about understanding the whole picture. Which means architecture.
4. It’s impossible to protect everything. Which means being able to react.
5. It’s never done. And why would it be, if technology keeps changing?
Obviously, there’s a great deal of devil in the detail but if organisations could only get their heads around the above, we would all save so much time and effort. These rules should be as prevalent as ‘don’t run with scissors’, but while they are not, we reiterate (and indeed write books about) them. And, as an aside, security vendors are stuck with scare-story marketing, not because they want to be forever spelling doom, but because it’s the only thing that works. Ho, hum, and on we go.
To wit, some articles from this week.
Cybersecurity should be a board room topic
A global survey by Allianz has found that cybersecurity is now the number two global business risk, up from fifteenth position 5 years ago. Number one is business continuity, which is a bit of a circular argument of course — a threat to business is not being able to do business? But perhaps organisations are finally getting the message.
Trust in media is collapsing. Is that such a bad thing?
Imagine my delight when I found that publication of fake news was illegal during the French Revolution, with the obvious consequences. Incidentally, if you want a positive story about the impact of technology and its ability to democratise data distribution: execution by guillotine continued well into the 20th century, but it was the arrival of film recording that led to its demise. Turns out the mob thing quickly loses its sparkle if you’re not there.
Extra-curricular: how not to write a biography
In other news (which is semi-curricular, to be fair), I gave a talk on this subject to the Gloucestershire Society of Authors group this week. I’m reminded of Christopher Norris’ response when I went on one of my usual existential rants about whether I was actually an author, or ‘just’ a writer. “Oh, you’re definitely an author,” he said. “But you’re not an auteur…”
On that note and until next time, Jon
February 2018
02-01 – Can Frugal Innovation models apply to Western corporations?
Can Frugal Innovation models apply to Western corporations?
Frugal Innovation, seen in 2012 as “a distinctive specialism of the Indian system”, is about delivering innovation while minimising costs. Here’s how UK research body NESTA put it back in 2012: “Frugal innovation responds to limitations in resources, whether financial, material or institutional, and using a range of methods, turns these constraints into an advantage.”
As examples such as Tetra Pak’s pyramid-shaped “samosa” packs (lower manufacturing cost, higher efficiency) illustrate, it is possible to use cost minimisation as an innovation driver; the four conditions of “lean, simple, clean and social” can lead to products and services more applicable to populations without access to resource levels available to more ‘developed’ countries.
Organizations looking to replicate the perceived benefits of frugal innovation (for example, to address healthcare challenges in the US, or as per this BearingPoint report) face several difficulties, not least definition. How can the term, also known as jugaad (“making do with what is around”) apply in a traditional corporation — is it simply a case of cutting off the money, or is it an exclusive feature of developing economies — only possible if you have less in the first place?
This contextual factor begs a question of whether it is just marketing speak, turning a limitation into a differentiator — “all our ingredients are hand-picked (because we haven’t got a machine)” or “we only work with a select group (because we don’t have the bandwidth to do otherwise)” and so on. It also opens itself to accusations of post-colonial patronization or even exploitation, as Western-headquartered investments and partnerships target the emerging middle classes.
Whether there is something in frugal innovation for developing economies, as USP or simply as a response to challenging conditions, Western corporations looking at it as an answer may be looking the wrong way through the telescope. To understand why, we need to think about their chequered relationship with the notion of innovation itself.
Back in the day (and to this day), the biggest tech companies would show their prowess through the amount of money they put into research, or through the number of patents they created. With reason, as hardware and software engineering presented a phenomenal number of options, with enormous economic and business potential. At the end of the Second World War (during which many resources had been allocated towards ‘the effort’), money was no object.
As a result, for companies such as IBM, the emphasis was on ‘pure’, rather than the ‘applied’ research models followed by the likes of AT&T’s Bell Labs. “The Watson Lab’s first director, Wallace J. Eckert, gave researchers license to pursue their interests without concerning themselves with business imperatives,” says IBM’s article on the topic.
This approach lasted several decades before IBM and many others had to move to the applied, that is, justification-based, model. They were not alone: I can remember my own programming alma mater, Philips Electronics, announcing that it would follow a similar strategy. (In Philips’ case, it was perhaps catalysed by the repeated failure to monetise its great ideas, such as the compact cassette and compact disk.)
Fact was, and remains, that all such companies were hitting the law of diminishing returns, in which each new success requires a bigger effort than the last. With the advent of digitalization (that is, with all companies becoming tech companies), every vertical now faces a similar challenge. Companies can adopt a number of strategies to counter this — for example, big banks (such as Credit Agricole) may create a new entity for innovation, to side-step the sluggishness of the mothership.
Or, as happens so often in the technology industry, they can feed the ecosystem and acquire any successful startups that result. Microsoft once had a (negative) reputation for innovation-through-acquisition, and meanwhile Oracle, CA, Cisco and many others have built their businesses on it. As someone from IBM once said to me, “We’d be able to do so much if we could just get our act together.”
So, about that telescope. Rather than seeing limited-resource models as an opportunity, organizations would do well to think about how they can free innovation from the constraints and shackles of over-resourcing. So, what might these constraints be?
1. Too much money. The finance industry has a reputation for buying its way out of a crisis, or into an opportunity, with the resulting layer-upon-layer of complex, sluggish technological junk.
2. Too many cooks. Sometimes, indeed much of the time, fewer opinions are more valuable than more. Making a decision, trying something out and working with the outcome delivers results faster.
3. Too long timescales. We try to do everything we can, adding features and meeting as broad a set of needs as possible, resulting in project timescales that last longer than the problems they were looking to solve.
4. Too many options. Thank Geoffrey Moore for the fact that the entire tech industry is founded on differentiation, meaning we spend far too long investigating options and not enough time solving problems.
5. Too much paperwork. Policies and procedures, sign-offs and authorizations matter, but in many cases they have become an end in themselves, making jumping through hoops more significant than achieving goals.
6. Too much stuff. Many of the challenges we look to address — integration, interoperability, virtualisation, network and storage management, cybersecurity, data analytics — are about dealing with the quantities of matériel we have created.
7. Too much thinking. Yes I know, that’s rich coming from me, but all this complexity makes it harder to see the wood for the trees, leaving us stymied or, at least, working inefficiently.
Of all of these, perhaps the biggest is that we have too much money. We have the Web’s shotgun approach to spending advertising dollars, to the discontent of, well, everyone; indeed, the “which 50%” model also applies to the start-up economy (when I was leading a development team, we struggled to understand what start-ups were doing with the 20-million-dollar cash injections they were burning).
From this highly encumbered point of view, Frugal Innovation becomes like a holiday experience — it’s what happens when you can stop, think and perhaps even dream. We all know what happens when we come back from holiday and the real world comes flooding back in. All we really need for innovation is the ability to think. And yes, if developing economies have a make-do mindset, that becomes a differentiator worth paying for.
02-01 – The dangers of noun-centricity
The dangers of noun-centricity
The topic emerged in a conversation with a family member — you know the one, “everything isn’t about money, there’s more to life, etc, etc.”
In this industry we occasionally adopt similar framings. I can remember a discussion with CA a few years ago, when the company decided everything was about management. Of course, many things have to be managed, but such absolute positions can only take things so far. Similarly, we could say everything was about governance, or about analytics, or integration, or customers, or cloud, and so on and so on.
Each is true to a point, but not to the detriment of everything else: it’s one of the reasons why I could be so confident back in the day, when I took a ‘hybrid’ stance against the emerging everything-to-the-cloud positioning so loved by companies that would benefit the most.
(Note the difference from the “it’s all about the data” stance I have taken in previous newsletters.)
Of course, for businesses, money is very important, and sometimes so is a singular focus, as per Steve Jobs with his unflinching design obsession. But by goodness, if you are to adopt a singular vision, be sure it’s the right one, be ready to change it as soon as it reaches its sell-by date, and above all, recognise it as a mechanism to set priorities, not as something with which to beat customers over the head.
With this in mind, here’s some articles.
02-09 – Bulletin 9 February 2018. AI and the theory of exponential linearity
Bulletin 9 February 2018. AI and the theory of exponential linearity
You know that thing where, in a discussion, a concept comes seemingly out of nowhere, but then changes the whole nature of the dialogue? So it was in a recent conversation with my old friend and colleague Roger Davies, who has written whole books on Value Management (kind of, what to do when you realise the Balanced Scorecard can only take things so far).
“Of course, despite all this innovation, our progress has only been linear,” I blurted, immediately finding myself in a position where I had to justify my remark. “Is that true?” asked Roger. “Yes,” I said, sounding much less confident than I felt. But as we explored the idea, it did seem to hold water — are we any better at communicating, for example, or really any more productive than we were two or three decades ago?
This question is writ large when it comes to Artificial Intelligence, whose impact, we are told, is to be transformative. I admit to being flummoxed in another conversation, this time just before Christmas at the Great Telco Debate, when I was told in very clear terms that AI would, indeed, change just about everything. My interlocutor clearly interpreted my uncertainty as ignorance, as he undertook to explain (as one might to a child) why this should be.
Interestingly, the example he cited was telecommunications complexity, what with Network Function Virtualisation (NFV), 5G and the like. I was almost immediately taken back to the early Noughties, when I presented on neural networks in enterprise management, on behalf of CA. Also on the bill was an Italian researcher with several PhDs — “Maybe we could share research,” he suggested. I politely declined, not wanting to show him up with my hastily collated notes and superficial thinking.
The thing about all this fantastic innovation is that it has a fractal effect on the problem space. That’s me getting pompous again, but it happens every time: we are as sorcerers’ apprentices, generating data as quickly as we work out new ways to harness it. The impact on most technological fields is that breakthroughs tend to be quite tightly scoped — at the moment, voice, image and other signal processing are seeing the most progress, and will rightly yield some great results alongside the dumb animations we can add to video chats.
But nobody has yet created an algorithm that can somehow get ahead of the game. To my delight and relief, someone far smarter than me has captured this in an article, “The impossibility of intelligence explosion,” i.e. that point at which our own intellects get left behind. By extension, as long as they (our intellects) do not (get left behind), we are stuck with them in all their wonderful, dumb glory. In the meantime, AI will serve to augment, not replace capabilities including that litmus-test topic, radiology.
Don’t get me wrong, these are transformative times. But even so, our relationship with technology remains that of a craftsperson and their tools — it is up to us to prioritise, to architect, to make things happen. We should not expect our environments, our relationships, our abilities or our anxieties to be that much different in ten years’ time, compared to now. And that is a good thing.
And so, to some articles.
The digital world needs new lawmakers
<soapbox>In case it isn’t clear enough from this article, I’m none too comfortable with the repeated, deal-with-problems-after-they-happen approach to the information age. Moves are being made to change this, notably in the financial industry, but here in the UK for example, our general legislative processes still take place over months or years. Not only is this too slow, but it is massively inefficient and open to exploitation by corporations and cybercriminals alike. As Einstein said, “We can’t solve problems by using the same kind of thinking we used when we created them.” Word.</soapbox>
On luxmobiles and flying unicorns: how diversification and proliferation will rule the routes
On a much lighter note, yes, I very much enjoyed writing this — but it has a serious point. If vehicles are no longer constrained by needing a driver, what will they become? I posited some examples as a starting point:
- Luxmobiles and budget pods. Like Apple vs Android, the market tends to diverge
- Pizza box scooters. Expect transport the shape and size of its load
- Drone swarms. No doubt flying in some sponsored logo formation
- Super Strings. I should really patent the coupling mechanism now…
- Public Subs. What about underwater package delivery? Really, why not?
- Vompods. Awkward but who wants to be the third drop-off after a late night out?
- Flying Unicorns. For multi-functional transport, the aerial has to go somewhere
The last one was first postulated in our ‘techsplaining’ podcast, the ironic title of which seems strangely appropriate.
Webinar: DevOps in the Real World
Yes, I’ve been that person who tries to get people to do things in different ways from before. Indeed, in the course of this bulletin it was good to hook up again with Mary Lynn Manns, co-author of Fearless Change: patterns for introducing new ideas, to which I was delighted to contribute something (my proposed pattern was “Champion Skeptic” — who would have guessed!). I’m rambling, but the long and short of it is that I am well aware you can’t just tell people to ‘do’ agile, or DevOps, or anything else that makes them have to think differently. Even if it’s a really good idea.
That’s why I’m really looking forward to having a virtual fireside chat about it all with Nigel Kersten of Puppet Labs. If you’re interested you can sign up here.
Extra-curricular
The piano continues: still two hours a night. As the latest video blog shows, I’ve started to crack two hands, though I’m a long way from being able to just launch in on a piece.
And meanwhile, that august institution the Society of Authors has seen fit to re-post my blog on “How Not To Be A Biographer”. Which is nice!
That’s all for now, until next time. Jon
02-16 – Bulletin 16 February 2018. On swords and ploughshares: data doesn’t kill people, does it?
Bulletin 16 February 2018. On swords and ploughshares: data doesn’t kill people, does it?
I need to thank Aleks Krotoski for bringing Kranzberg’s first law to my attention: “Technology is neither good nor bad; nor is it neutral.” A stark example was the gun designed by Richard Gatling in 1861, by a man so keen to demonstrate the futility of war that he created a weapon which showed no regard for human life. If any weapon ever did.
Gatling was, let us say, a complex character — at the same time as selling his weapon to the US Army, he sided with the Confederates. His invention changed the nature of war in particular, and society in general, as profoundly as Jethro Tull’s seed drill changed the nature of agriculture. While the latter might be seen as a better result, the industrial consequence was to drive many thousands of people into factories, slaves to the machines.
Against the background of a tragic, sobering and repeated cycle of school killings, I do not make the comparison between the equipment of war and that of agriculture lightly. Both involve the same materials, and both can be used for righteous good or for ill. And a similar, almost Gatling-esque level of moral ambiguity is rife in the technological world: a prize for anyone who can identify a vendor or service provider who restricts use of their outputs to positive purpose.
An ethical debate around technology exists, but it has rarely come up in the many thousands of conversations in which I have been involved. Most, if not all examples chosen by vendors illustrate the ‘good’ uses of tech, with the ‘less good’ tending to emerge from real-world experience. So, smart city debates might talk about the wonders of shifting people efficiently, but will avoid bringing up notions of crowd control.
Why is this? I think we, as an industry, genuinely think that the things we do are all for the better. The positive, optimistic voices lead our keynote speeches and case studies, leaving little room for discussion of the downsides (is it any wonder why cybersecurity tends to get forgotten during design stages?). And as Frederic Filloux suggests, the amount of chagrin around having drunk too much kool-aid doesn’t change the behaviour — we can all have regrets in hindsight.
I don’t have any answers, and I wish I did. As I wrote last week, part of it does come from our lawmaking — the nature of legislation is that it covers the areas that the state is willing to control, as per the will of its electorate and its ability to be evenhanded despite the power of unelected lobbies. Another goodly part comes from within us all, as we accept that such optimism needs to be tempered, not with cynicism but with reality.
This also means that not all problems can be solved with the latest technology. Even as we look to make every single device from toasters to traffic-cones smarter, legislation to monitor firearm use with databases of any form continues to be blocked in the US. It is not my place to say how to do things differently in another country, but the scenario illustrates how one technological advance is restricted, such that another can continue unabated.
Perhaps Kranzberg’s first law should read: “Technology is neither good nor bad; nor is our use of it neutral.” Which brings us to some articles.
The Seven Deadly Sins of the Digital World
Here’s something I started writing as a bit of whimsy, and ended up feeling a bit depressed. I do remain optimistic, don’t get me wrong — I remain confident that we shall look back on this period in a few decades as one of the most transformative periods in history, not unlike that moment when someone said to someone else, “What’s that you’ve got there?” “Oh, dunno, think I’ll call it ‘iron’. Let’s see what happens if I sharpen it.”
Messaging the use of AI against terrorist propaganda
Democratisation remains a key element of the information age: simply put, anyone can now say anything to anyone else, anywhere. The message behind this article is that we all have the same tools, so it boils down to how well we use them. Oh, and the law of unintended consequences continues to play out.
UK Cloud Awards 2018
As a judge for this year’s UK Cloud Awards, it’s only a month to go before I’m wading through the unprecedented numbers of entries. Awards events are only as good as the quality of the entries, which puts them in the same category as employment CVs and kids’ homework (and opens them up to the same questions about the level of help they might have had along the way). Unsurprisingly, I shall be looking for examples of best practice that others can learn from.
Webinar: IT as a platform for business success
I’ll be ‘mein host’ for a Register webinar in a few weeks’ time — a great position to be in, as all I have to do is ask questions of great panellists and bask in their reflected glory. In this webinar the topic is how mid-market businesses can go beyond tinkering with their IT and start taking on the bigger companies: the answer is going to be architectural, easy to say, harder to do! Always a pleasure to hang out with my old friend and colleague Tony Lock.
Extra-curricular — local leg-ends
I know it’s only one marketing statement, but I confess to having had a huge buzz from Lechlade Festival calling Plucking Different “local legends” — we will be playing for the fourth year in a row, with headliners Scouting for Girls. Rehearsal tonight before our first gig of the year, at Stroud Brewery. Expect Fleetwood Mac, Rolling Stones, Pulp, The Corals, George Ezra, Arcade Fire, U2, Muse, and of course, Gogol Bordello as well as beer and pizza!
02-22 – The bittersweet tragedy of falling thresholds
The bittersweet tragedy of falling thresholds
You have to feel sorry for Charles Babbage, whose 1830s designs for the Analytical Engine were never to be realised in his own lifetime. As well as Augusta Ada King, Countess of Lovelace, the maverick mother of programming, whose own book was never finished. It could be said they were the founders of steampunk, in which all matters tekno-logical could be enabled with clever use of cogs and lenses.
Arguably, they were also the first to experience the falling threshold theory of computation. While it was potentially possible to realise his designs with the accuracy required, the costs were not seen as acceptable by the UK Treasury. How different might history have been, had they been funded? Indeed, another hundred years passed before others attempted to devise contraptions that could perform calculations according to pre-‘programmed’ instructions.
When figures (such as German engineer Konrad Zuse, with an adaptation of the telegraph) finally did, they no doubt went through similar epiphanies of invention, those moments where the world appears to open up. “Just what I will be able to do with <insert device, program or concept here> beggars belief,” goes the thinking, repeating a pattern that has no doubt continued since the invention of the wheel.
(As an aside, I am reminded of one of my favourite cartoons from the Wizard of Id. “I have it, I have it! The greatest invention since the wheel!” announces the Wizard. “What is it?” replies the King, to which the Wizard says, “The axle!”)
No doubt the inventors of the tin can envisaged a future in which any foodstuff could be ‘canned’… these waves of invention are so often followed by the equally inevitable realisation that, no, not all problems can be solved — not cost-effectively, at least, and not at the time. Only the advent of Moore’s Law, variants of which still continue, offers any hope: if something isn’t possible yet, then it may be so in a couple of years’ time. Or so.
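To put a rough number on that hope, here is a back-of-envelope sketch in Python; the doubling period and cost figures are illustrative assumptions of mine, not data from any source.

```python
import math

def years_until_affordable(cost_today, budget, doubling_years=2.0):
    """Estimate how long until a workload costing `cost_today` fits within
    `budget`, assuming the cost of a unit of compute halves every
    `doubling_years` years (a Moore's-Law-style assumption)."""
    if cost_today <= budget:
        return 0.0
    halvings_needed = math.log2(cost_today / budget)
    return halvings_needed * doubling_years

# Illustrative figures only: a job costing 80x today's budget becomes
# affordable in roughly 12-13 years at this rate.
print(round(years_until_affordable(cost_today=80, budget=1), 1))
```

The precise answer matters less than the shape of it: a fixed budget eventually catches up with almost any workload, provided the curve keeps curving.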
The thing about Moore’s Law is that it became an impetus to research, a budgetary variant of Heisenberg’s Uncertainty Principle, in that measuring something influences its behaviour. In this case, funds were allocated and priorities set in order to ensure it remained true. This is without malice or corruption: the potential consequences of delivering on the Law, and the opportunities thus created, were sufficient to drive the strategies of Intel, IBM et al.
Visionaries and indeed, marketers have tried to drive similar self-fulfilling prophecy through the power of their aspiration. When this is both blatant and unlikely to succeed, we refer to this as Kool-aid, illustrative of how easy it is to get whipped up into a frenzy of belief about the next big thing. It is similarly simple to be cynical, to point out how unlikely the vision is in practice. As I have written before, we may be right, for a while.
When breakthroughs do happen, they are driven by cost effectiveness, limited in scope and broad in impact: so, for example, the arrival of Hadoop onto the data management scene became possible due to the availability of certain components, notably in-memory data stores (which are lost if the machine is turned off, but that doesn’t matter if you already have your answer). Similarly, the wave of “AirBnB doesn’t own a hotel”-type pronouncements.
Most of the major innovations we have seen in recent years are a consequence of these falling thresholds. Computation really is just maths, and infrastructure really is just engineering: anyone preaching the mantra of agility is on firm ground, as the chances are very high that what we see today as the norm (and build our businesses around) will tomorrow become superseded.
Equally, some of that maths will still be held back by the current state of engineering. Artificial Intelligence, the Internet of Things, Big Data Analytics, Cloud Computing are all models operating under various levels of constraint. The trick is to avoid listening to what might be possible, which may or may not be noise, while being ready to take advantage of the very real opportunities when they emerge.
Innovation in its truest sense — that is, constant renewal of thinking and practice — is the name of the game. Keeping this in mind, here’s some articles for this week.
What’s missing from the Malicious Use of Artificial Intelligence report?
While there’s a lot in this meaty report about what might go wrong with AI, it lacks substance — partially fair given that we are talking about the future, but equally unacceptable given our shared goal of understanding, and mitigating, potential risks. OK, I’m giving away the plot, but the consequence is decidedly unscientific, which is surprising given that it comes largely from academia.
Why AI on a chip is the start of the next IT explosion
Ironic that I should write about Kool-aid, with that header! The bottom line is that chip design is moving from something everybody did, back in the day (my first job was at Philips Components, and who remembers the Transputer?) to something everybody does. This shift from proprietary to commodity to open, seen so clearly in software, is now happening at the chip level.
Five questions for…
Okay, this isn’t an article but an aspiration (it’s not just technology that has to deal with bottlenecks; in my case, it’s time). Conscious of the fact that briefings with technology companies are important, and that nobody wants to waste anybody’s time, I’m starting a series of posts in which I will ask five questions of a technology vendor. I’m thinking that it should be possible to say something new and useful in a short piece.
Extra-curricular — a shout-out for parkrun
Last Saturday I was Race Director for the third outing of the UK’s 500th parkrun venue, at the Royal Agricultural University in Cirencester. I can’t extol the virtues of parkrun enough — it is incredibly inclusive, with sprinters, walkers and people pushing prams sharing the same route. A run not a race, the goal is to get the world fitter. Long may it continue.
March 2018
03-02 – Bulletin March 2 2018. Show me the money!
Bulletin March 2 2018. Show me the money!
I confess to having got a kick, at a presentation I gave in Berlin in November, out of an impromptu one-man re-enactment of the scene from Jerry Maguire, yes that scene, where Tom Cruise has to shout down the phone to Cuba Gooding Jr. (quick presenter tip: if you are going to pull a stunt like this, probably best not to tell the client first). It certainly livened up what might otherwise have been a quite dry topic around IT procurement.
Truth is, however, that pretty much everything is about money these days. We dress it up in terms of “business value”, “stakeholder outcomes” and so on, to the extent that we can start to feel positively altruistic about our innovations; but with no money, there’s no point in staying in business. I draw the line at the frankly horrible word ‘monetisation’, which (among other crimes against vocabulary) takes a pretty bleak view of what data is for.
I also don’t necessarily agree with the view, as proposed by value management guru (and old friend and colleague) Roger Davies, that everything can be presented in financial terms. Perhaps this is true in principle, but (a) cf my previous points about the nature of terminology, and indeed (b) if we do so we end up in a place where everything does, indeed, become all about the money.
Don’t get me wrong, money is clearly important. Its invention (somewhere in what is now Turkey, an awfully long time ago) was to solve the many problems of not having it — interestingly, it almost immediately spawned the notion of currency exchange, with people taking a bit off the spread for kindly switching one form of value for another. The very word entrepreneurship suggests somebody positioning themselves as an in-betweener, taking what they see as their due for managing whatever hand-off they undertake.
It’s also important, frankly, because we all have to eat. And the fact you can play a mean guitar won’t sit well with the person on the checkout, as justification for your purchases.
But this very confusion around the nature of money, or value, or whatever you want to call it, is fundamental to the changing dynamics of business today, underpinning concepts such as ‘disintermediation’. It’s also unsurprising that anybody who creates something of value, essentially mining their own heads and hands for IP, art or craft, risks being ripped off. Indeed, the whole “Don’t work for free” movement for creative professionals has emerged, rightly, from the difficulty we have disentangling notions of monetary value from value’s more absolute form.
Technology could hold the answer, indeed it may already do so, but the past ten thousand years of progress have also created a deep-seated attachment to both money and the need for intermediaries. The debates around self-publishing offer an illustration of how hard it is to separate our (real) need for collaborative endeavour from the fact that we’d be better off doing some jobs ourselves. The fact that you can now set up as a self-publishing consultant is as ironic as it is illustrative.
With this in mind, here’s some articles.
Why can’t we work out a technological solution for music distribution?
Choon is the latest new service to attempt to bring music directly to the masses — this time based on the Ethereum cryptocurrency. In this article I (try to) take things back to first principles of why we make, and listen to music in the first place. And indeed, will continue to do so.
What does “sustainable income” mean for authors?
I wrote this a while back out of pure self-interest. A standard phrase from the more successful authors I’ve come across is “then, I became a full-time author…” It’s worth unpicking this, as it suggests that the main route to success involves sustaining one’s inherent desire to write alongside other distractions, such as paying the bills. There aren’t any author practices in which older partners support younger trainees; perhaps we are better for it, but it does make me wonder if we have the balance right.
Extra-curricular: Epilogue to Suburbia
Confession: I wrote this as a consequence of a minor fantasy about… what would happen if a small nuclear device was set off over Camberley? I was working there at the time and, frankly, sitting in queues of cars in the driving rain with nothing of note but a concrete elephant and the sad faces of other commuters was starting to get to me. This is actually a prologue, as researchers unpick the remains and try to work out what happened. Perhaps it will turn into a book one day.
03-08 – On Cloud, serverless and all that — are we there yet?
On Cloud, serverless and all that — are we there yet?
This week sees the tenth anniversary of CloudCamp, that self-titled ‘unconference’ which, in its anarchic style, seems to run just as smoothly as any non-unconference I have ever been to — even conferences that aren’t called conferences still need to be organised (and it was, very well). During last night’s London event two of its founders, Joe Baguley and Simon Wardley continued the double act pitting hybrid vs purer forms of cloud computing, the latter overloaded with terms like ‘serverless’, ‘FaaS’ and so on.
Ignoring the fact that the discussion would have gone over the heads of all but the deeply technical (“But, this is one of the least technical events you can come to!” protested one, momentarily forgetting just how much technological knowledge lay behind these innocuous terms), the debate itself continues. On one side, the view that technology will continue to commoditise; on the other, that it will continue to diversify. Each is right, creating both dilemma and opportunity for evangelists and those liking a good chat in either camp. And, given that the themes have a good couple of decades to run yet, so it will continue.
I’m guilty as charged. Back in 2000 I penned two reports: the 50-page “From Application Service Provider to Universal Service Provider”, which extrapolated the thinking of my then-boss, Robin Bloor; and “Development is Dead”, which I wrote based on my own experiences as a software development consultant. At the heart of each is the idea that, while software has the potential to become easier, reducing both barriers to entry and time to delivery, hardware and infrastructure will only get more complex. Because it can.
This dichotomy is as true today as it always has been. ‘Serverless’ software development can exist anywhere there is a server, which is an increasingly wide pool: even so-called ‘edge’ devices (from Echo Dots to toasters) can serve data or, indeed, functions. I remember a discussion a long time ago with Akamai, about edge computing — in a nutshell, why send the data to the functionality if you can shift the functionality to the data? The fact that sometimes one will make sense, sometimes the other, is precisely what makes things so hard to pin down.
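To make the “anywhere there is a server” point concrete, here is a minimal, provider-agnostic sketch in Python; the handler name, the event shape and the sensor-readings example are all hypothetical illustrations of mine, not any particular platform’s API.

```python
import json

def handle(event: dict) -> dict:
    """A stateless, provider-agnostic handler: takes an event, returns a
    response. The same logic could sit behind a cloud FaaS trigger or run
    on an edge device right next to the data it summarises."""
    readings = event.get("readings", [])
    summary = {
        "count": len(readings),
        "max": max(readings, default=None),
        "mean": sum(readings) / len(readings) if readings else None,
    }
    return {"status": 200, "body": json.dumps(summary)}

# Invoked locally here; in practice a platform (or a device loop) would
# call handle() whenever an event arrives.
print(handle({"readings": [18.2, 19.1, 22.4]}))
```

Whether that function lives in a cloud service or on the device generating the readings comes down to the economics of bandwidth, storage and power, which is rather the point.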
None of this is a problem, it’s part of the rich, dynamic tapestry of standardisation and commoditisation versus very real limitations of resources such as bandwidth, processing, storage, electrical power and so on. Cost is a consequence of all such constraints, in that, sure, we’d all have mainframes in the basement if we could afford them. But we can’t, and we never will be able to as long as the volumes of data we create continue to exceed our ability to process it all. On we go.
With all this in mind, here’s some articles from this week.
For most firms GDPR is an opportunity, not a threat
GDPR is going to run and run. I can’t say I blame every person and their dog looking to jump on the GDPR bandwagon; let’s face it, cybersecurity and compliance software vendors and consultants have not had the easiest of rides. What some look at as ambulance chasing or FUD (fear, uncertainty and doubt) spreading, I see as — have you ever actually tried to get budget for anything relating to these areas without some sword of Damocles hanging over the head of the CFO? All the same, a crapload of confusion exists around GDPR right now. Solution: the ICO has a free helpline, so use it.
5 questions for… TechVets
My inaugural 5 questions article is on the back of another event, this one to help ex-service personnel get jobs in the tech industry. It’s a laudable initiative, serving all parties involved. Ed’s note, I’m a great fan of ‘and’ rather than ‘either/or’ in these things — supporting one doesn’t have to diminish support for any other.
Extra-curricular
My Journey to Widor continues — if you haven’t been following this, it’s taking me, a keyboard novice, towards one of the harder but most fabulous organ pieces. I’m recording progress as a series of vlogs, frankly because putting it out there means I can’t stop, as I’d look stupid. Sitrep: I’ve got as far as tackling Beethoven’s Moonlight Sonata, which for some reason I’ve been memorising (it seemed like a good idea at the time). Progress video here — I’m pretty made up to be playing such a beautiful piece, despite all the mistakes.
03-16 – Bulletin March 16 2018. Time flies like a banana
Bulletin March 16 2018. Time flies like a banana
I’ll try to keep this short. After all, time is at a premium. Or is it?
It’s funny, isn’t it, how we spend an inordinate amount of time talking about technology, the benefits it can bring and so on, and then cover all the related topics in a rush — the “it’s not as simple as that” stuff like security and management, “the things people don’t realise” areas, like business people not caring all that much (though this is changing). In general, we reach universal agreement that such topics are as important, if not more important, than the tech.
If, like me, you’ve been around the block a few times more than you’d like (to wit, a story: “That bloke over there, he looks really old, older than me!” I said. “No he doesn’t,” replied my wife), then this pattern will be all too familiar, with the latter part often finishing in the bar. If only pubs held design workshops, imagine how far we would have advanced.
Even more surprising by its frequent absence, is the notion of time. Surprising, as we are told that it is one of the most important factors: transform or die, goes the mantra, as everything gets faster, and faster, and faster. Graphs on this topic are typically exponential — the years for an innovation to go mainstream, the likely lifespan of new companies, each follows a similar curve.
Clearly, we are talking about one aspect of time. What comes up less are the very fixed timescales that still occupy most, if not all companies: for example, technology refresh cycles are set at two years, plus or minus, and enterprise contracts can be fixed at three or even more, particularly for outsourcing.
For tech vendors, these cycles are both a blessing and a curse. Knowing when your target’s contracts come up for renewal is very important, of course; but once an acquisition has taken place, it is almost impossible to displace. And sales people are driven by another, quarterly cycle, which gets even less of a mention despite the fact that it influences the client relationship more than anything else. Apart, of course, from the vendor’s own annual cycle…
The pay-as-you-go model does have its advantages, minimising up-front costs and so on, but as anyone who has tried it knows, it has not thus far replaced standard procurement cycles that apply to most other things an organisation might need. A cursory search shows that IT budgets are going to be a maximum of 7% of corporate spend, so you can understand why nobody’s looking to transform procurement to make way for the credit card model.
And even this may be transient. Even now, it looks like we are moving from the mother of all technology centralisation waves (a.k.a. the shift into the cloud) into what may well be its teeming offspring — the Internet of Things is one manifestation of our distributed future, as are the local processing needs driven by machine learning. What payment models this triggers is anybody’s guess.
But even as things continue to speed up, the chances are we will continue to operate within some pretty fixed cycles. Yes, indeed, it’s strange how they never get discussed. Must dash, meanwhile here’s some articles. Oh and P.S. yes, I know, time flies like an arrow. But fruit flies like a banana. HT Rob Bamforth, an awfully long time ago…
In other news… the Origami Mobile Phone Stand
“Ever been stuck, miles from a gadget shop and needing to prop up your smartphone to watch a video or do some typing?” Back in 2013 I was in a B&B on the Gower Peninsula in Wales, and at a loose end, so I thought I’d tackle one of modern society’s greatest challenges. At the time not many had done it (well), so this became the number one origami smartphone stand on Google. Oh fickle fame, it is no longer but for a moment I could stand tall among the paper folding community. Here’s the link.
03-16 – Untitled
Untitled
Human endeavour is a powerful thing. It saw Amelia Earhart fly single-handedly across the Atlantic, Neil Armstrong walk on the moon, and no end of people trace the steps of others up the slopes of Everest, in the knowledge that they might not come back.
Many of our enterprises were originally formed on the basis of similar, beyond the call of duty effort — “One percent Inspiration, 99 percent perspiration,” as Thomas Edison was purported to say. The combination of money, vision, acumen and consistent, focused effort, even when all seems pointless — Sir Ranulph Fiennes calls this ’motivation’ — is only occasionally rewarded by remarkable success: the legions of failure go silently to their doom, like all those films we will never see because the hero gets killed in the opening scene.
It is against this background that we should view today’s technology superheroes — like Jobs and Gates before them, and many more before that (shout out to unsung hero Tommy Flowers, who had the drive and chops, but not the PR), the current raft of Brin, Bezos, Zuckerberg and of course Musk, have had to deliver all of the above, over a period of decades.
Elon Musk has been close to failure more times than he dares think about, but the announcements keep on coming. For example, Tesla’s latest smart power grid experiment in Nova Scotia in collaboration with the Canadian Government, or DHL’s suggestion that the ROI of Tesla trucks may be better than expected, when maintenance costs are taken into account as well as fuel.
Meanwhile Amazon has moved from being a shaky stock to ‘corporate America’s nightmare’, according to Bloomberg. Jeff Bezos’ entire strategy, we are told, hinges on it being always ‘Day One’, plus a sharing model literally designed on a napkin. Yet the company holds entire industries in thrall, as do Google, Apple, Alibaba and the rest. So, to the elephant in the boardroom: why don’t other companies simply follow suit?
The answer is not about being the most technologically advanced: I’ve been told that Tesla’s battery technology is not the best on the market right now, and Amazon’s infamous recommendations algorithm was initially no better than any number of university machine learning projects. Don’t get me wrong, hardware and software engineering deliver great things — but their presence or absence is not the main lever.
Nor is the answer about money — you just have to look at the investments by technology giants to see that (a) they have plenty of it, and (b) they are spending just as much (if not more) than the upstarts. The very existence of FinTech is an illustration that money can’t buy you innovation-led success.
Nor even is the answer about vision. Sorry. We can all come up with sweeping statements about how we are changing the world, but saying them doesn’t make them true, or effective. Last year Facebook changed its mission statement from “Making the world more open and connected” to “Give people the power to build community and bring the world closer together.” As well as copping out from any blame, the change reflects how Facebook enables change rather than causes it: the rest is up to us.
So, is the answer down to ‘digital transformation’ or whatever the latest business change mantra might be? Again, the answer is no, as it puts the cart before the horse. Sadly, you will not be able to use technology to change the way you think, however much potential it offers. Analytics, cloud or whatever will not cause some kind of corporate epiphany at the top, nor anywhere in the hierarchy.
No, no, no and four times no. Rather, the answer is down to doggedness within a context fraught with uncertainty, ignoring anyone that might tell you that you are an idiot (because, frankly, you probably are). Today’s companies are shackled by stock market valuations, quick-buck investors and risk-averse management. It is completely unsurprising that Amazon went from being a loss maker to one of the world’s most valuable stocks — like all of the best stories, it snatched victory from the jaws of defeat. As the Bloomberg author Shira Ovide remarks, “This was a case of preparation meeting opportunity.”
So, is it the case that traditional industries should follow suit, taking huge gambles and risking shareholder wrath? No, as this isn’t an either/or game. Despite the repeated public failures of the Musks and Bezoses, Zuckerbergs and Jobsian inheritors, behind the scenes are well-oiled machines that see experimentation as part of business as usual. Even if the hero does get killed, that’s OK, because these people have more than one story to tell.
Don’t get me wrong, many organisations are making great strides (check out what Philips is becoming, for example). But many more that I have worked with are looking to become better at this innovation thing, without actually addressing the most significant factor — a need to put “doing new things” in front of “doing old things”. That’s in front, not alongside or somewhere over there. My experience working internally in some of our biggest institutions is that they simply don’t have the determination to do it.
As long as this is true, any amount of lean or agile, DevOps or predictive decisions, transformation, digital-first strategy or anything else we can come up with will not make a difference. Sure, individual departments may have something to feel proud about: a new app or service, a new type of account or clever customer portal. But as long as these are the exception, not the norm, the organisation as a whole will stay exactly where it was.
Corporations have few guarantees in this tumultuous age, but one thing is for sure. While success is not pre-ordained, any company without a positive attitude to change is on its own death march. Perhaps, for the executives currently in place, this doesn’t matter — they will just move on and take their salaries with them. But if they do not accept the need to embrace change as normal, they will be failing all of their stakeholders, customers, investors and employees alike.
03-23 – Lambda is an AWS internal efficiency driver. So why aren’t we seeing private serverless models?
Lambda is an AWS internal efficiency driver. So why aren’t we seeing private serverless models?
I’ve been in a number of conversations recently about Functions as a Service (FaaS), and more specifically, AWS’ Lambda instantiation of the idea. For the lay person, this is where you don’t have to actually provide anything but program code — “everything else” is taken care of by the environment.
You upload and press play. Sounds great, doesn’t it? Unsurprisingly, some see application development moving inexorably towards a serverless, i.e. FaaS-only, future. As with all things technological however, there are plusses and minuses to any such model. FaaS implementations tend to be stateless and event-driven — that is, they react to whatever they are asked to do without remembering what position they were in.
This means you have to manage state within the application code (or push it out to an external store). FaaS frameworks are vendor-specific by nature, and tend to add transactional latency, so are good for doing small things with huge amounts of data, rather than lots of little things each with small amounts of data.
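By way of illustration, here is a minimal sketch of what such a function can look like, written in Python against a hypothetical DynamoDB table (the handler and table names are mine, for illustration only): the function holds nothing between invocations, so the little state it needs lives in an external store.

```python
# A minimal sketch of a stateless, event-driven function in the Lambda style.
# The handler remembers nothing between invocations; the counter it maintains
# lives in an external store. Table and key names are illustrative assumptions.
import json
import boto3

TABLE_NAME = "example-counter-table"  # hypothetical table

def handler(event, context):
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    # Read the state we need, do a small unit of work, write the state back.
    item = table.get_item(Key={"id": "counter"}).get("Item", {"id": "counter", "value": 0})
    item["value"] = int(item["value"]) + 1
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"count": int(item["value"])})}
```

Every round trip to that external store adds to the transactional latency mentioned above, which is one reason fewer, chunkier invocations tend to work out better than swarms of tiny ones.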
So, yes, horses for courses as always. We may one day arrive in a place where our use of technology is so slick, we don’t have to think about hardware, or virtual machines, or containers, or anything else. But right now, and as with so many over-optimistic predictions, we are continuing to fan-out into more complexity (cf the Internet of Things).
Plus, each time we reach a new threshold of hardware advances, we revisit many areas which need to be newly understood, re-integrated and so on. We are a long way from a place where we don’t have to worry about anything but a few lines of business logic.
A very interesting twist on the whole FaaS thing is its impact on server efficiency. Anecdotally, AWS sees Lambda not only as a new way of helping customers, but also as a model which makes better use of spare capacity in its data centres. This merits some thought, not least because serverless models are anything but.
From an architectural perspective, these models involve a software stack which is optimised for a specific need — think of it as a single, highly distributed application architecture which can be spread over as many server nodes as it needs to get its current jobs done. Unlike relatively clunky and immobile VMs, or the somewhat less flexible containers, serverless capabilities can be orchestrated much more dynamically, to use up spare headroom in your server racks.
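To illustrate the headroom point, here is a toy first-fit placement of function-sized workloads onto nodes with spare capacity. It is my own sketch of the general idea, not a description of how AWS (or anyone else) actually schedules things; the node names and sizes are invented.

```python
# An illustrative first-fit placement of small function invocations onto nodes
# with spare memory. Node names and capacities are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity_mb: int
    used_mb: int = 0
    placed: list = field(default_factory=list)

def place(invocation_mb, label, nodes):
    """Put an invocation on the first node with enough spare memory."""
    for node in nodes:
        if node.capacity_mb - node.used_mb >= invocation_mb:
            node.used_mb += invocation_mb
            node.placed.append(label)
            return node.name
    return None  # no headroom anywhere: queue the work or scale out

nodes = [Node("rack1-a", 2048, used_mb=1800), Node("rack1-b", 2048, used_mb=600)]
for i, size in enumerate([128, 256, 512]):
    print(f"fn-{i} ({size} MB) -> {place(size, f'fn-{i}', nodes)}")
```

Small units of work can slot into gaps that a whole virtual machine never could, which is precisely the efficiency argument.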
Which is great, at least for cloud providers. A burning question is, why aren’t such capabilities available for private clouds, or indeed, traditional data centres? In principle, the answer is, well, that they should be. Despite a number of initiatives, such an option has yet to take off. Which raises a very big question: what’s holding them back?
Don’t get me wrong, there’s nothing wrong with the public cloud model as a highly flexible, low-entry-cost outsourcing mechanism. But nothing technological exists that gives AWS, or any other public cloud provider, some magical advantage over internal systems: the same tools are available to all.
As long as we live in a hybrid world, which will be the case as long as it keeps changing so fast, we will have to deal with managing IT resources from multiple places, internal and external. Perhaps, like the success story of Docker, we will see a sudden uptake in internal FaaS, with all the advantages — not least efficiency — that come with it.
03-23 – The more that things change…
The more that things change…
It’s hard not to be a cynic as an independent industry analyst. Indeed, you are given little choice: the nature of the beast is that a bandwagon will appear, over the brow of a nearby hill, proclaiming something that (you swear) you have seen many times before as new, improved and indeed, the answer to everything.
No, it isn’t, you say. But even in doing so, you somehow validate it. “Artificial Intelligence isn’t as straightforward as that,” one might argue, in the process helping to spin up the flywheel still faster. “AI, you say? Thanks, I must take a look at that.” And thus, the job of the marketer is done.
This isn’t a bad thing in itself — tech needs its Greek chorus — but at the same time, it uses up bandwidth that might be spent on the things that aren’t earning the PR dollars. Most readers will remember the Green IT wave which took up so much attention in the Noughties. Hey, guess what? Sustainable IT is still a thing, it just isn’t a marketing topic.
It also distorts reality. We are, indeed, in a wave of machine learning excitement, caused by the fact that previously unachievable tasks are now both affordable and timely. Real-time delivery of new insights really is a thing, as are voice recognition and computer vision.
Less so is what has been termed AI, for a stack of reasons: not only that learning and inference models still need to be scoped pretty tightly, but also because they deliver useful gen to humans, not to autonomous virtual beings with the power of reason. As per a couple of recent conversations, what we currently call AI lacks the ability to discern.
But round we go. I’ve not been against taking the marketing dollar to help vendors clarify their stories, with a shrug as I face a reality in which I’m not setting the agenda. My only advice would be, sure, acknowledge the noisy topics under discussion right now, but don’t let these obscure the ones just past. Second-mover advantage really is a thing, whatever the marketers might say.
03-26 – Is enhanced reality an AR/VR cop-out?
Is enhanced reality an AR/VR cop-out?
Watch out, there’s a new term on the block. Even as the initial flurry of excitement over Oculus-primed virtual reality seems to be in a perpetual state of prototyping, and as other forms of augmentation are hanging about like costume options for Ready Player One, discussion is turning to enhanced reality. I know this not because of some online insight (Google Trends isn’t showing much), but because it has come up in conversation more than once with enterprise technology strategists.
So, what can we take from this? All forms of adjusted reality are predicated on a real-time feed of information that brings a direct effect to our senses:
- At one end of the scale, we have fully immersive environments known as Virtual Reality (VR). These are showing themselves to be enormously powerful tools, with potential not just in gaming or architecture but also areas such as healthcare — imagine if you can shrink to the size of a tiny cancer, and then control microscopic lasers to burn it away? At the same time, the experience is isolating and restricted, which is both a blessing and a curse.
- Augmented Reality (AR) rests on the fulcrum between virtual reality and, ahem, reality. The argument goes: why take a real-time video feed and add data to it, if you can project data or images directly onto what you are seeing? It’s a good argument, but it demonstrates just how fraught and complex the debate quickly becomes. What’s useful information? What’s in and what’s out? And indeed, is a computer really better at discerning the important stuff than our own senses?
- With its roots in passive information and heads-up display, the notion of enhanced reality (ER) does away with the need to worry about such things. Yes, AR and VR also follow a similar lineage but ER does not try to be anything beyond a context-specific information delivery mechanism. So, a motorbike rider can see speed and fuel, a surgeon can monitor vital signs, and so on.
The difference from other models is that ER starts from the perspective of minimum necessary — that is, what information do I actually need right now. This approach is not that dissimilar to the kinds of displays we are now seeing in cars, and indeed, it pushes the same buttons of how to add information whilst minimising distraction.
So, is this a cop-out? It is not hard to see VR, AR and ER as a Venn diagram, with plenty of overlaps between each. Right now however, no one-size-fits-all solution addresses that little triangle in the middle; meanwhile, the different models enable us to focus on what problem we are trying to solve, rather than worrying about whether today’s technology options are sufficient for one thing or another.
In practical terms, for example, even as the wonks work on deeply immersive experiences, a pair of retinal-focused sunglasses that offer a feed of messages, and/or can link into a feed of data about whatever activity is being undertaken, would probably sell like hot cakes if the price was right. (Indeed, if I could put in an early feature request, a tap on the frame to turn the thing on/off would be most welcome).
Enhanced Reality offers us more than just a lowest-denominator entry point. By focusing on useful data and how to deliver it succinctly, rather than clever hardware and how to make it fit our daily lives, we are putting the horse before the cart and, as a result, perhaps advancing things faster, even if initial use cases appear mundane. If we want to get to deep levels of technological immersion, we would do well to start at the shallow end.
03-27 – Digital Transformation 101: What are we trying to transform, and why?
Digital Transformation 101: What are we trying to transform, and why?
New wine, old skins
It’s pretty easy to be a digital transformation consultant these days. Here’s what you do.
First, you report on the amount of data growth, the increasing rate of change and other exponential factors; you flag up the massive growth of recent, tech-first companies such as Amazon and Alibaba (whilst carefully ignoring those who tried and failed to follow similar models); you list out conveniently acronymised manifestations of technological progress — Social, Mobile, Analytics and Cloud. Oh and IoT. And AI. You get the picture.
Having engendered a suitable level of fear and uncertainty among your target audience, namely executive decision makers (who happen to control consulting budgets), you go in with the scoop: that the only possible response is to transform. Not to tweak, nor encourage stepwise progress, but to make a ground-to-sky, soup-to-nuts matrix-style inversion of the entire organisation.
How should we do this, you are asked. Well, how fortunate that you have an answer for that, you say. The response is to run a series of very expensive strategy workshops, which will generate a new vision for the company. You will then review existing lines of business and operational departments, looking at existing processes and advising on the best way to bring them into the new world.
Oh, and you will also propose a cloud-first innovation strategy, which means shifting an organisation’s IT capabilities (planned and legacy) from where they are “into the cloud”. The “cloud” in question just happens to be your data centres, or those of your partner. And thus, you have the company or department in question completely set up for the future.
Strip away the gloss however, and you haven’t done much more than you would have done in years past — that is, a strategy-cum-business-modelling exercise coupled with an outsourcing contract (admittedly to a potentially more flexible infrastructure architecture). The executives involved may not be too bothered that their organisation has not been ‘transformed’, as they now have on their CV that they have overseen a multi-million change programme.
And so, round we go, with consultants taking companies and public organisations a bit further on their journeys. Don’t get me wrong, this is a reasonable outcome but digital transformation it isn’t — which smacks of a wasted opportunity given the amount of effort that went into it. And a source of risk, as all those tech-first companies continue to do all the things they are reputed to do — transform industries, disintermediate supply and demand, and so on.
This presents a conundrum: if this isn’t digital transformation, what is? Perhaps there is no such thing; the term is mere marketing, a sugar-coated way of loosening the purse strings of conservative board members. At the same time, industries are transforming, yadda, yadda, so decision makers should at least do some due diligence on where technology can actually make a difference.
There is an answer, but it is not as straightforward as you might think (it never is, right?), as it means changing board-level attitudes to those key measures of cost, stakeholder value, and risk. You can draw a simple picture, a quadrant chart, if you will — up the side are existing versus new sources of stakeholder value, and across the bottom are existing versus new working practices. The chances are that most of your business will fall into the bottom-left quadrant.
The consequence is simple: that any new sources of value are likely to be encumbered by existing operational processes, while new processes quickly hit a glass ceiling in terms of the efficiencies that new technology can bring. Understanding the dilemma between old processes and new value is fundamental to any transformational business change.
I’m not saying that drawing a picture can change the world, it’s only a model. But being able to perceive your business in terms of how new can create new, even as it makes old more efficient (without throwing the baby out with the bathwater), is profound. It also makes the choice pretty stark — just how much investment (of money, but more importantly, time and political will) do you want to make in the top-right quadrant?
An honest answer to this question will give you all the clarity you need in terms of your actual appetite for transformation. Once you understand this, chances are you will need some consultants to help; you may also decide that outsourced, cloud-based infrastructure gives you the way forward. But at least you will be making decisions with a real grasp of what you stand to gain.
03-29 – Translating for geeks since 1987
Translating for geeks since 1987
I was looking at some stuff about neural networks the other day. There are two implementations, apparently: one looks something like a tree-walk algorithm, and the other, more like a fishing net with various knots. I can’t remember exactly what the differences were, but suffice to say that one is better at some things than the other.
Which is important, of course, but not for the purposes of this discussion. More relevant is that the fishing net approach reminded me of something I saw (potentially being invented, I can’t remember) pretty early on in my career. It was known as Layout Interlace Field, and its creator was one Tim Bolton.
Back in the day I was a Pascal programmer for Philips Components, and we were writing software for silicon chip design. The place was delightfully well run, a fact I only found out when I left (I remember reading Tom DeMarco and Timothy Lister’s seminal book, Peopleware, and thinking, “Who’d be so dumb as to not apply all that best practice!” – the answer, it turned out, was most other organisations…).
I digress. The thing about chip design is that you are always looking to squeeze as many bits of circuit onto the wafer as you can — Moore’s Law can only get you so far. The exceedingly smart Tim came up with an idea: how about creating vertical and horizontal lines of silicon down the chip, then only connecting the points at which you wanted a transistor to be (like a knot on a fishing net)?
I can’t remember the detail, and right now it does sound similar to a programmable logic array, but it did make for much denser designs. Indeed, it is highly likely that it’s the kind of thing being done in today’s neural-network-optimised chips. Another digression but I must take a look some time. More important was, Tim was the kind of person (and probably still is) that would do things very differently, with all the benefits that brings.
At the same time however, he wasn’t the world’s best communicator of what he created. One of my earliest tasks was to write some documentation for the compaction software Tim had written (Moore’s Law can only get you so far, yadda yadda). I simply had to interview Tim and write down what he told me.
The software was pretty smart, as you’d expect. When you lay out a circuit in a CAD system, you essentially create a series of rules — this track connects to that track, those two elements need to stay so far apart, and so on. Tim’s algorithm loaded all of these rules into a giant (for the time) matrix, did all it could to squeeze everything down to the bottom-left corner, and then (the really clever bit) flipped the matrix like you might a tablecloth, and squeezed it all down into the bottom left again.
Of course, sometimes the rules were already pretty tight and left little room for any squeezing. So the software also employed a soupçon of simulated annealing, that is, loosening everything up a bit (like when you heat a metal) and then applying the compaction as it ‘cools’. Simples.
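For the curious, here is a toy version of the squeeze-flip-squeeze idea, shrunk to one dimension and written from memory rather than from Tim’s code, so treat it as an illustration of the principle only (the annealing step is left out).

```python
# A toy sketch of the "squeeze, flip, squeeze again" compaction idea, reduced
# to one dimension. Components sit on a line and must keep a minimum gap from
# their neighbours; compacting pushes everything as far left as the rules
# allow, and flipping mirrors the layout so a second pass compacts from the
# other side. Illustrative only, not the original algorithm.

def compact_left(positions, widths, min_gap):
    """Push components left in order, respecting widths and minimum gaps."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    edge = 0.0
    new = positions[:]
    for i in order:
        new[i] = max(0.0, edge)
        edge = new[i] + widths[i] + min_gap
    return new

def flip(positions, widths, span):
    """Mirror the layout about the right-hand edge of the available span."""
    return [span - (p + w) for p, w in zip(positions, widths)]

positions = [0.0, 5.0, 12.0]
widths = [2.0, 2.0, 2.0]
span = 20.0

positions = compact_left(positions, widths, min_gap=1.0)   # squeeze
positions = flip(positions, widths, span)                  # tablecloth flip
positions = compact_left(positions, widths, min_gap=1.0)   # squeeze again
print(positions)
```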
Of course, Tim didn’t explain it to me like that. Rather, he told me about each instruction, its purpose and capability. It was only after sitting with him, going through the code (what felt like) line by line over a period of (what felt like) weeks, that I had the sudden epiphany about what the code actually did.
Which is pretty much what I still do today. I remain in absolute awe of the engineers and innovators who are creating our shared future in front of our eyes: their engineering talent does not always leave room for more down-to-earth skills, and I am truly grateful for that as well. The day that technology becomes straightforward to understand, I will be out of a job.
For the time being however, I think I should be OK… with this in mind, here’s a couple of articles for the week.
Is enhanced reality an AR/VR cop-out?
I heard a new term this week, and wanted at least to mark its existence as it seems to make sense. Indeed, I could have written a whole newsletter about the notion of making things a little better than before, rather than changing the world again (or should I say re-imagining? As an aside, I bloody love Indiegogo with its barking mad campaigns, it has replaced Skymall in my mind)… maybe I will just do that.
Digital Transformation 101: What are we trying to transform, and why?
Okay, this may be old news but it only recently occurred to me that the reason consultants love cloud is because they love outsourcing, and cloud is a form of outsource… oh, wait….
Lambda is an AWS internal efficiency driver. So why aren’t we seeing private serverless models?
Cloud is just your stuff on somebody else’s tin, as someone once said to me.
Heads-up: TechPitch
I hope the above illustrates my genuine delight to be involved as a judge on the TechPitch panel in April. This involves startups at pitching workshops, at the end of which they get to present to a wider audience. Tickets available here.
In other news, Big Big Bread
Those who know me also know my absolute passion for, and obsession with, making bread. Passion, because it is possibly one of the first things we ever learned to cook (unless you count turning a small rodent over a fire); obsession, because… I don’t know. Obsessions seem to happen to me rather than me having any choice in the matter (in this case, it was a day-long course with the amazing bread maker). I also have a penchant for beer (my attempts at fermenting have never been as successful, to my shame) and progressive rock music… so here’s a recipe for bread made with beer from one of my favourite bands. Enjoy.
April 2018
04-13 – The unintended narcissism of personal epiphanies.
The unintended narcissism of personal epiphanies.
No tech articles this week, as I’m just back from holiday. Normal service will be resumed next week.
Do you still have “that book”? You know, the one that you read and you found that it so closely mirrored your experience, or gave answers to questions you thought were unanswerable, to the extent that you thought, “Wow, that’s amazing! Everyone needs to read this!”
I know I do. As I worked through my first mid-life crisis, I was recommended M. Scott Peck’s ‘The Road Less Travelled’. “Life is hard,” it started, and I was hooked. The discovery that the author was a serial philanderer and alcoholic did little to dampen my ardour; indeed, it only increased as I realised he was human too.
It was only decades later that I realised many of the ‘Don’t Sweat The Small Stuff’, ‘Chicken Soup for the Soul’, ‘Men Are From Mars’ and indeed, the current generation of similar books (quick tip: to get published, put a swear word in the title) share a thread: a personal analysis of individual experience.
They are also, frequently, written by people of a certain age. My suspicion has long been that they arise from the ashes of personal challenge — epiphany following hopelessness, like sunrise follows dark. At such moments, the human tendency is (in my experience) to become somewhat evangelical: the world must know. In book form.
The reader’s realisation that the author has slain similar dragons creates its own epiphany, together with a reinforcing feedback loop: I must be right, thinks the author, as, well, look at all the people who have said so! Cue publisher asking for follow-up and a guaranteed future on the self-help speaker circuit.
Strangely, none of this ever gets talked about — we prefer our gurus to appear fully formed, no doubt from some guru school, where they have spent decades of apprenticeship before emerging into the light. And similarly, their words are presented as fact: no “What I have found,” nor, “This won’t be true for everyone, but,” nor, “In my opinion,” to precede sentences.
Which makes sense: if they were just the words of the struggling person in the street who had learned a few things, they wouldn’t be that interesting, would they? None of this is a bad thing in itself, as it results in old wisdoms being re-iterated by, and interpreted for each generation.
However, our desire to keep things simple affects our ability to discern and, frankly, makes us lazy. If we are presented with three reasons why something may be so, why look for a fourth? And meanwhile of course, the views as originally expressed become entrenched: it is hard for a guru to change his or her mind (ask Chris Anderson).
We can all have opinions and experiences, hard data and soft anecdotes. But the moment that we disconnect our views from the information that supports them, we risk creating a new kind of truth — one dependent on esteem rather than fact. Even in this unprecedented age, there is nothing new under the sun.
All the best, Jon
P.S. Incidentally, the book I picked up on holiday was Atul Gawande’s Being Mortal. I haven’t finished it yet, but I can confirm it fits into the category of being a Very Important Book. And it does buck the trend, as will many others.
04-27 – Bulletin 27 April 2018. Flipping compliance… on its head (GDPR redux)
Bulletin 27 April 2018. Flipping compliance… on its head (GDPR redux)
I seem to keep getting drawn into GDPR-related compliance conversations. I do so uncomfortably, for two reasons: first I am not a lawyer, and second, given the general level of misunderstanding about the new regulation (under a month to go, folks), I am very wary about offering anything that might be perceived as advice. Something I am absolutely clear about, however, is that it is all the wrong way around.
Let me say first that I understand why. We live within a social structure based on the rule of law, rather than any other principle. The law may be an ass but it has stood us in good stead over the centuries: from its roots setting out rules of civil conduct, or presenting the latest impositions of those in power, it’s generally been accepted (sometimes begrudgingly), as the rulebook to be followed.
This was all very well when resources were plentiful, but as we have come up against the limitations of our own population growth and voracious appetites, efficiency has become the name of the game. There was a point, not that many decades ago, where the role of law flipped from something to be followed unthinkingly, to a set of criteria to work within.
So, it’s become less important, for example, to do the right thing, and more important to demonstrate that the rules have been followed: the cry of “I didn’t do anything wrong” ((c) Richard Nixon) may be defensible, even if the behaviour is deemed unethical. It’s well-known that banks have made it their business to look for loopholes in regulation, and why? Because that’s where the money is.
For law makers, the result is to create more laws. Banking regulation is getting more and more complex: this creates more complexity, and therefore the potential for more loopholes. And so it goes on. And on. And on, to GDPR, which has done a pretty good job of closing a number of stable doors… though it remains to be seen what it has not dealt with in terms of our privacy.
Apologies, I’m rambling as usual. On the topic of GDPR conversations, organisations of all shapes and sizes have several choices. The first (and most obvious) is to comply with the regulation, appoint data controllers, inform customers and request appropriate consent, and all that. Frankly, doing so is blooming hard given the sheer amount of information involved, and the scale of the challenge.
The second, however, is to address it from the same perspective as the regulation. GDPR is coming into force in direct response to repeated abuse of personal privacy, in the name of (that horrible phrase) ‘data monetisation’. Its base requirements are pretty straightforward: hold only the data you need to do the job, and have good reason to do so.
For organisations looking to do the same thing that they did last week, this makes things pretty simple: define what you do, say what data you need, and if you need to ask customers for consent, then ask. This is no more nor less than an information strategy — the difference is that it is being publicised to customers. Bluntly, if you don’t have one, you should, for a raft of reasons, not just compliance.
Of course it may be the case that an organisation wants to hold onto more information than it needs, and/or do things with it that the customer doesn’t like. The answer, simply, is, well, don’t, or you will get into trouble. But the majority of organisations I have spoken to are not in this category — they just want to ensure they don’t get caught short. Creating a clear and defensible statement of intent, then broadcasting it out to those affected, is a solid starting point.
The third choice, open to all but probably only accessible to the biggest organisations, is to play the “we aren’t doing anything wrong” game, that is, see loopholes in the regulation as an opportunity to push things as far as possible (or at least, make hay until the loopholes are closed). This will inevitably happen with GDPR, indeed, it probably already is if Facebook’s ’sidestep’ is anything to go by.
One thing’s for sure — this isn’t over — May 25 is a starting point. Let’s see where it goes… and meanwhile, here are some articles for the week.
GDPR – are we witnessing the death of one-way monetisation?
GDPR has certainly put the cat among the pigeons, for better or worse. This article is a stake in the ground — building on the above, it is becoming less and less viable to take customer data and then try to make money out of it for its own sake. Apparently. Also building on the above, time will tell whether organisations simply find other ways to take data-related value out of customers and pass it to shareholders.
Five questions for… the AURA fitness band
Apropos of data, we continue to look for new ways of creating it. I don’t think there’s any contradiction, in that all technologies can be used for both good and less good purposes. In this case, the potential of low-cost bio-impedance data (to indicate BMI, heart rate and hydration) is pretty compelling. And/but it is yet another illustration of how the ultimate owner of any such data should be the individual to whom it relates.
5 questions for… Densify: a sign of the times
On a different tack, the number of meta-infrastructure solutions — that is, technologies that help organisations decide what to do with their resources, wherever they are — appears to be proliferating. Densify helps organisations migrate workloads to the cloud, or optimise them once they are there, by understanding which of the (quite complex) options are appropriate. And it will continue to have a role as long as cloud providers are not offering optimisation as a service.
Semi-curricular: How not to be an industry analyst
Following on from the highly successful “How not to be a biographer” seminar, I’m plotting with the IIAR to present a few anecdotes about lessons I’ve learned across the last 19 years (including taking a break). More information soon. Oh and it will likely be followed by “How not to run a marathon.”
Until next time, Jon
May 2018
05-04 – Bulletin 4 May 2018 - Absolutes corrupt absolutely, and other idea-ologies
Bulletin 4 May 2018 - Absolutes corrupt absolutely, and other idea-ologies
“Information wants to be free” — Stewart Brand, founder, Whole Earth Catalog
“Privacy is dead - deal with it” — Scott McNealy, co-founder and CEO, Sun Microsystems
The technology industry is founded on a platform of ideas and aspirations. Engineering has played its part, of course, creating the basis for instantaneous, global communications and massively scalable data processing. But our mental models have set the scene for what we do with all this wonderful electronic gubbins.
This has its plusses and minuses, of course. One of the reasons for starting this newsletter was a recognition that tech can cut both ways: working in the industry has to be founded on an optimism that the good will outweigh the bad, which has been true thus far.
Despite the digital revolution being founded on shades of grey (sic), on the ideas and aspirations front people respond better to black and white: we have a Highlander-like, “there can be only one” war of ideas, and may the most powerful win over the rest.
(Indeed, I’m guilty as charged: when I first heard about the OpenFog consortium for example, my first thought was, “That’s a bit of a vague name — couldn’t they have come up with something stronger?”)
As troops rally behind an idea, they find themselves exposed to its flaws and become defenders of, even apologists for, its weaknesses. For example, on notions of information wanting to be free, open data advocates now face the reality of (largely corporate) machine learning; privacy being dead is the yang to open data’s yin.
Meanwhile, people of influence are judged by their ability to offer clarity — it’s a trick employed by futurists, rock star analysts and others to demonstrate prowess. And, frankly, it’s much easier for pundits to speak in absolutes, than it is to second-guess the way that complexity will take us.
It would be counter to this whole line of thinking to suggest that ideas are therefore bad. Ideas serve a purpose, they offer a rallying cry behind which people, and budgets, can unite. They catalyse change and break through inertia. But they have a decay curve, which the human psyche is ill-equipped to deal with.
The deeper truth is that the dynamics of an idea — where it came from, the purpose it serves and the impact it has, are more important than static notions. Ideas are stakes in the ground, nails in an ever-moving nail-and-string picture. Sure, we can offer clarity at various moments in time, but clarity too has a sell-by date.
With this in mind, here are some articles for the week.
5 questions for… Auddly, targeting the source of music creation
Ensuring people get paid for their art is a complex web, easily exploitable by, well, anybody who wants to stand in between artist and recipient. Most attempts to resolve this operate at the listening end of the gramophone, whereas Auddly inserts itself right at the start of the production process. The service is designed to meet an industry need, by those who need it, and it has some pretty solid backing. As always, the challenge will be in take-up, of the service and more importantly, the practice (of logging participation) it represents.
5 questions for… Cloudistics. More than a cloud in a box?
Continuing on last week’s hybrid theme, Cloudistics is looking to bring the cloud into the data centre — its stated USP is that it brings software-defined networking along with it. Any vendor in this space has a mountain to climb: my gut feel is that the company is more about establishing its IP than about becoming the world’s number one purveyor of hybrid cloud infrastructure. Though the latter would be very nice, and all big companies have to start somewhere, the laws of probability are stacked against it. Here’s an overview, and my take.
5 questions for… Densify — Redux as the link was wrong last week (thanks Alan)!
As I was saying, the number of meta-infrastructure solutions — that is, technologies that help organisations decide what to do with their resources, wherever they are — appears to be proliferating. Densify helps organisations migrate workloads to the cloud, or optimise them once they are there, by understanding which of the (quite complex) options are appropriate. And it will continue to have a role as long as cloud providers are not offering optimisation as a service.
Extra-curricular: Super-Awesome, The Musical
As if I didn’t have enough on, I’ve been writing a musical. About the tech industry. Yes, you heard that right. It’s a kind of cross between Rocky Horror and The Office. I’ve completed a draft of the first half, and I’m now finishing the second, which means turning it into Alexandrine (hexameter) form as a hat-tip to Cyrano de Bergerac. I have absolutely no idea whether it will see the light of day, but I’m loving doing it…
Sincere thanks for reading this far, and for sticking with this bulletin. A long time ago, I can remember my boss admonishing me (gently) for rushing into his office, starting to describe a problem, then saying, “Oh yeah, I’ve got it,” and rushing out again. By saying things out loud I’m starting to build a picture, which will play into the various things already bubbling away. More news soon, and good luck with all of your own projects!
All the best, Jon
05-11 – Bulletin 11 May 2018 — The Seven Laws of the Information Age
Bulletin 11 May 2018 — The Seven Laws of the Information Age
Definitely one for a Friday. And definitely a straw man — feedback very welcome.
The law of falling thresholds
Advances in the fundamental building blocks of technology — processing, storage and communications — reduce bottlenecks and make new things possible due to falling cost, power and size needs, in parallel with increased capacity, capability and bandwidth. The Internet of Things, for example, is a manifestation of how sensor-based remote management and control can apply to whole new areas, once it becomes affordable that is. In turn, this enables new practices and models such as pre-emptive maintenance or competitive fitness apps.
At an infrastructure level, falling thresholds are enablers to new approaches to storage and processing, driving specifics such as in-memory Apache Spark, and more general trends like cloud computing. As 5G becomes a thing, so we will see vastly increased bandwidth, making such things as streamed augmented and virtual reality possible — should anybody want them, of course!
The law of self-fulfilling prophecy
Like the technology it creates, the technology industry is constrained by both financial and engineering limitations, which means it has to set priorities. Frequently, these are put in place based on supply and demand: if the market for RAM increases, so will finance be found to create more of it. At the same time, priorities can be influenced by agendas, charisma, personal drive and other forms of influence.
A positive example of this is Moore’s Law: when an entire industry gets behind a single theme, putting the necessary research behind it, so it is possible to maintain a steady level of progress across, well, decades. We’ve also seen the single-mindedness of people like Steve Jobs, who drove the market for tablet computers through seeming force of will, and of course Elon Musk and Jeff Bezos.
The law of potential differences
Over recent decades, many of the most exciting breakthroughs in technology-led business models have resulted from spotting new connections to enable value exchange. That is, I give you something, and you give me money back. These are dressed up in clever economic terms but boil down to the same set of questions: what can I give you that you are prepared to pay for, and/or how can I short-circuit existing business models?
The result plays to the agile startup: given just how slowly old corporations move, if I can do something new quickly, I will be able to siphon off a bunch of money and grow to such an extent that I will have established myself by the time they catch up. E-business, disintermediation, Uberisation, the Network Economy are all manifestations of this same principle.
The law of exponential complexity
This can be formulated in a number of ways. For example, that the amount of data that we create will always exceed the amount of processing we have available. Or that the needs of the devices we have to manage, in terms of volumes and rate of update or upgrade, will always exceed our capacity to manage them. Or that the attack surface will always be greater than our ability to secure it. Or that the photos we take will never be tag-able in a meaningful way.
However it is framed, the consequence is always the same: we — the business, or IT management, or home user — live in a constant state of forlorn hope that the next generation of technology will solve what remain some pretty fundamental issues (manageability, security, insight delivery), only to find that next-gen tech creates as many new challenges as it purports to solve. So, on we go.
The law of inflating expectations
A counter to the law of falling thresholds is that a technological advance can very quickly become a default, rather than an exception. We only have to look at the progression of video, from a minority owning expensive, tape driven camera equipment, to a situation where capturing and uploading video has become a blight on music gigs, and indeed a disappointment if it is not possible for whatever reason.
In turn this drives the law of exponential complexity, as our default behaviours result (for example) in generating far more video content than we can fit on our two-year-old smartphones. Again, this is a law which can be ‘leveraged’ by technology companies: by getting the customer base to see the new as the norm, it drives new spend as the old very quickly becomes inferior.
The law of unintended consequences
Innovation is no longer in the hands of the technologically savvy few, as individuals create whole new ways of using tech that were never part of the plan. Often these are positive, such as geocaching; equally, they might drive the use of a new technology to its absolute limit, driving designers to distraction and feeding the law of inflating expectations.
And frequently they can have negative consequences. Each new generation of technology creates new ways to extract money from people, which is why we have a whole industry around cybersecurity, to counter a whole industry around cybercrime.
The law of innovation decay
Any innovation has a sell-by date, as over time contextual requirements will move to a state which makes the innovation a poor fit for the situation at hand. Individual solutions can never change as quickly as the problem spaces they serve, in many cases as hardware, software and communications are overtaken by the very complexity that they create.
In part this is a consequence of the law of self-fulfilling prophecy: the push to create new things inevitably drives things out of date more quickly. It also results from the laws of both inflating expectations and unintended consequences. Some device vendors have exploited this law through designed obsolescence, accelerating the point at which a device will become redundant.
05-18 – Bulletin 18 May 2018. It’s all virtual. All of it. (Or why I can’t write about NFV)
Bulletin 18 May 2018. It’s all virtual. All of it. (Or why I can’t write about NFV)
Anyone who has read some of my written work will know I’m a bit of a one-trick pony. I start with some cultural reference or anecdote, then segue into a more technical or managerial point. Over the years I have found myself using the same stories more than once: in particular, the moment when Dave Lister finds that he is the only human left alive. Here’s its most recent outing, in relation to GDPR (of all things).
I’m saying this apologetically, because I find myself urged to use it again. For months now, since I was involved behind the scenes on a pretty big project on the topic, I have been wanting to write about Network Function Virtualisation (NFV) and its impact on telecommunications. However, for some reason I have found I have nothing to say on the subject. Which (for a pundit) is more than a little worrisome.
It’s not as if there’s nothing to talk about. For the record, NFV has emerged in the telco space as a new way of creating communications services: rather than having custom-built hardware and software for each specific need, why not follow the lead of data centres, bring in a virtualised infrastructure stack and be able to support any new software function with minimal hardware change? What’s not to like?
The answer, as many telcos are discovering, is that it isn’t as simple as that. The challenges of ripping out and replacing an entire global telecoms infrastructure are coupled with the fact that earlier iterations of NFV stacks have not been, shall we say, fully market-ready. Money has been spent, disappointment felt, and a mood of reticence has settled across large parts of the industry.
There we have it, in a nutshell. A good idea meets a complex world, working brilliantly for some and causing problems for others. Fall back, regroup and try again, a bit more slowly. Like my anecdotes, we’ve heard them all before. Which goes a long way towards understanding why I’ve struggled to write anything coherent about it.
But, like any other insoluble problem, it has refused to leave my head, stuck like an unfinished conversation. The issue is not that we’ve seen it all before, but that we are lulled, somehow, into believing that anything we do in tech is in any way real. Of course we can see some mind-bending physics and remarkable engineering, but its ultimate purpose is to create and manipulate representations of things.
It’s all virtual, all of it. Every A-to-D converter, every API, bootloader and on-screen widget is a façade, designed to disguise the few trillion transistors, gates and fibres that lie behind its slick interface. This isn’t some glib philosophical point, as it lies behind a great deal of procrastination and distraction.
Any conversation that starts with the idea “technology idea X doesn’t work” is flawed: we can say it is poorly configured, or that it doesn’t fit existing practices, or that existing practices are a poor fit to what it does. What we should not say, however, is that it is a bad idea in itself: to do so implies that it exists in some tangible way. By treating an idea as a thing, we stop focusing on how well it is designed.
Specifically on the topic at hand, NFV is a work in progress. It will improve, for two reasons: first its inherent structure will be improved, as it is used and understood. And second, as new functions are developed we will start to see proper innovation, rather than just building the old services (e.g. messaging, email, voice) in new ways.
And more broadly, it was always about virtualisation. While this might have been slowed by both corporate inertia and hardware vendor lock-in, we are moving inexorably towards putting the control in the software, rather than tying it into the hardware. While we may all experience Lister-levels of incredulity from time to time, this doesn’t make reality — or indeed virtuality — any less true.
With this in mind, here’s a couple of articles I have actually been able to write.
5 questions for… ADLINK. Edge-Fog-Cloud?
ADLINK is a little-known company outside of its own space, but is right in the thick of the increasingly distributed world we are seeing come out of IoT. Here I speak to Steve Jennis about where this space is going. In conclusion, as we move from a centralisation to a distribution wave, the cloud providers are going to have their work cut out.
Is Enterprise DevOps an Oxymoron?
Oh, I long for the days when I was working things out for the first time, those wonderful moments of epiphany… these days I linger somewhere between discovery of the wisdom of others, and the knowledge that (to pick another over-used cultural reference) there is nothing new under the sun. In this article I introduce a report I am currently researching, on how DevOps can make a difference at scale, in the enterprise environment. All thoughts, stories and guidance very welcome.
Extra-curricular: Super-awesome, the Musical (Redux)
The first draft is now complete. No spoilers but the final song is called “Perfectly Illogical.” Now all I have to do is write it through another few times, but that will be less taxing than getting it to here. Famous last words…
Thank you as ever for reading. This bulletin now has over 350 opted-in recipients, and a small number of people (20-ish) I am still to contact. If it’s not your thing, please feel free to unsubscribe at any time.
All the best, Jon
05-25 – Bulletin 25 May 2018 — Happy GDPR Day! And other conundrums of governance.
Bulletin 25 May 2018 — Happy GDPR Day! And other conundrums of governance.
I thought I’d skip the picture this week and get straight down to business. I’ve written before how technology is a two-edged sword: so, indeed, is governance. It’s not just the bureaucracy that comes with it. Bureaucracy is the yang to governance’s yin (“Easy for you to say,” to coin a phrase); you can’t have one without the other. In my experience, this follows a sorcerer’s apprentice-style path: once you start feeling that controls need to be in place, these generate more controls (to account for the things you didn’t originally think of), and so on and so on.
This also feeds a dodgy part of our psychology we could call “tin-pot general syndrome”. I’m not going to lie: when I was responsible for IT security for a large organisation, I started getting a bee in my bonnet about how security policies weren’t taken seriously. Then another bee. Until eventually I took it upon myself to block access to IT resources if they were to be used insecurely. That’ll stop ’em!
What I missed, however, was the very premise for corporate IT security: to reduce business risk. In disabling access by users to systems, I felt I was doing my job… but my shadow-boxing, however well-meant, was also preventing business from being done. It was only years later that I realised I was the problem.
Governance, like any intervention, can have an undesirable, even counter-productive effect. In the case of GDPR, for example, we have all been swamped in recent weeks by a slew of messages telling us how much our privacy is valued. No breach has taken place, but I can’t help wondering which I care about more: people misusing my data, or people sending me a raft of messages asking whether they can use my data. Or indeed, what the difference is.
And meanwhile, from the recipient’s standpoint, governance (again, however well-meant) can become not a guide, but a set of goals. We have seen it repeatedly in the financial industry, and I have absolutely no doubt whatsoever that thousands of lawyers are right now looking at how the text of the GDPR can be kept to, even as notions of data monetisation, targeting and so on can still be achieved.
Don’t get me wrong, GDPR is a good thing — or would be, if it was just a little clearer. The many emails that I have received from providers of all types, and the differences between them, are a good indication of just how confused everyone is. And meanwhile, here’s a conundrum: “The GDPR only applies to loose business cards if you intend to file them or input the details into a computer system.” So, if your business cards are in alphabetical order right now, I suggest you give them a shuffle and stop thinking you might one day want to file them, just to be on the safe side…
In other news, here are some articles from this week.
The Five Cs of DevOps at Scale
While it’s long been yet another conundrum, I’ve been racking my brains as to why we have a situation where something (in this case DevOps) works very well for some people, but doesn’t seem to work at all for others. Are the latter group just wrong, or are contextual factors dictating success or failure? As a straw man, I have suggested the following areas to start looking:
- Commodity: DevOps works best in an environment where the infrastructure building blocks are relatively uniform
- Consistency: Frequent delivery of software requires a significant amount of discipline
- Collaboration: While DevOps is highly collaborative, even sociable, many corporate environments are not so much
- Continuity: Similarly, DevOps requires repeated and controlled cycles: this can be a challenge for many organisations
- Charisma: The secret sauce of DevOps is its mentors, whose easy-going zeal cuts through the fog and makes sense of it all
So, why do these factors (or their absence) make things so much harder? The answer, I believe, is that they impact the ability to respond to change sufficiently fast, in an already complex environment. Feedback very welcome!
5 questions for… the Mellel word processor
While I don’t mention it in this article, I’m a great fan of Mellel’s competitor, Scrivener. It feels almost counter-intuitive that I should be using a non-standard desktop word processing application — surely everything should be online these days? Well, sorry, but for me, tablets and phones have a place, but so do laptops, keyboards and indeed, big screens. The world should know that alternatives exist to what have become de facto standards, for good reason: because they do a subtly different, yet still valid job.
Extra-curricular: the piano continues to obsess
Sitrep: the piano is improving. But five months in and I still can’t make a video. So here’s the warts-and-all, straight off the card upload. You can hear the sound of me taking a photo on the iPad rather than starting the video, and my head is chopped off. Meanwhile, there’s a full version of Bach’s Well-Tempered Clavier first prelude in C (at about the 4-minute mark), and various other bits and bobs. It’s a good indication of where things are up to. Still loving it.
Thank you, as ever, for reading this far and for all the feedback you have sent. The list is now 100% opted in, which has another psychological effect: I feel I am writing to people, not just firing things into the ether. As far as numbers go, this means 368 people, with roughly 150 reads per week, which is really great.
All the best and speak soon, Jon
June 2018
06-01 – Bulletin 1 June 2018 - A rose by any other name would smell as sweet
Bulletin 1 June 2018 - A rose by any other name would smell as sweet
What’s in a name? The answer, as so many literary characters have told us, is: “Quite a lot.”
“I am a servant of the Secret Fire, wielder of the flame of Anor. You cannot pass.”
Gandalf in The Lord of the Rings, The Fellowship of the Ring, Book II, Chapter 5: “The Bridge of Khazad-dûm”
The irony of working in an industry which is all about naming things (perhaps that should be on the business card: “Head of Naming Things”), is that often the same old stuff gets a name which makes it look and feel different to the time before.
The lifecycle of a newly-named area goes something like the following:
- Someone, somewhere coins a term for something. It could be new, old, or simply evolved
- ‘The market’ (that is to say, a set of like-minded technology vendors, consulting firms and so on) picks up on the term and uses it in their literature
- As this continues, they do what marketers are good at, which is to differentiate. “Unlike other suppliers of <term>, we…”
- In parallel, a broader range of analysts, journalists and relationship brokers look to incorporate the term in their own understanding and exploration of the topic it represents.
Yes, there’s some back and forth between groups on this. Meanwhile I, no doubt like many others involved in tech, follow a staged approach towards understanding what is going on.
First, I start to hear the term used repeatedly, and think, wow, am I so thick that I was unaware of an entire area of IT? I’m not kidding — most recently, it was use of ‘shift-left’ as a testing best practice.
Second, I buy into the idea that, yes, it really does mean something and I need to know all about it. It may start to feel vaguely familiar at this point.
Third, as its provenance emerges, I recognise its derivative nature for what it is, re-tying the strands of understanding I had thought were starting to unravel. Peace is restored.
Of course, it could be the case that an entirely new field has emerged. This does happen: the mobile phone doesn’t have much of a precedent, for example; virtualisation is also noteworthy.
More often than not however, new developments sit somewhere along a line from “a clever re-naming due to changing circumstances” up to “the same old stuff, re-packaged because we were bored of the old name.”
IoT for example, sits somewhere towards the left hand end. Sensor-based remote management and control may not be new, but the costs have fallen to such an extent that you can put sensors literally anywhere.
I’m racking my brains for examples of the right hand end of the scale, but the constant refresh of data management comes close… from data warehouses, through big data and now into data lakes.
Sometimes the name really does look like something new, but then you give it a prod and you realise its implications are as old as business itself. Take Robotic Process Automation, for example.
It’s got it all — a three-letter acronym (RPA) which has a buzzy term in it, and a direct hook into both software products and consulting services. But behind the hype are business-oriented rules engines and other repackaging.
And don’t even start me off on digital transformation. Though I agree — “business model and process re-engineering to integrate and incorporate use of analytical insight from internal and external data sources” isn’t quite as snappy a title.
The irony is that, in all the hubbub around re-invention, we lose track of the innovation. Which goes some way to explaining why the industry can sometimes appear to move incredibly fast, at the same time as being laboriously slow.
Perhaps we need a term for that. Here’s a couple of articles for the (short) week.
5 questions for Shawn Rogers of Tibco – from data to insight
As I write in this article, my recent chat with Shawn Rogers was delightfully devoid of jargon. Equally interesting was how the company is having to grow into more of a consultative role, as the nature of the problems it can solve for its customers becomes more complex. I see this as further evidence of a move towards a wave of distribution, following the centralising influence of public cloud.
From the archive: A Technological Map of the Future
You know how it is, you sit in a coffee shop and start doodling… and before you know it you have put everything you understand onto a single sheet of paper. I’m not sure things are that different today, which is good on the one hand, but at the same time, illustrates how far we have to go.
Highly curricular: How Not to be an Analyst
As previously mentioned, I presented on this topic to the Institute of Industry Analyst Relations last week — it was pretty useful to help me get my thoughts in order. I have written up my notes as a blog, the five rules are:
1. Analyse, don’t preach (the clue is in the name)
2. Don’t get distracted by influence
3. Beware of silo-ed thinking
4. There’s no ‘I’ in analyst
5. Avoid the basic gotchas
Oh and of course, avoid drinking wine before replying to job ads!
Thanks for reading and until next time, Jon
06-08 – Bulletin 8 June 2018 - Apple and the decay curve of potential difference
Bulletin 8 June 2018 - Apple and the decay curve of potential difference
This week’s plan to write about WWDC was thwarted when I realised there was not much to say…
OK, that’s not quite fair. The plans around Augmented Reality look very interesting, though they are ‘just’ software enhancements and not unique to Apple. The speed-ups, streamlining and additional usage controls in iOS are very welcome, as is the fact that CarPlay supports other apps than Apple Maps. The new features on the Mac, Apple Watch and Apple TV look useful.
But it’s all a bit, you know. Like that party you go to when you expect food, and someone comes round with a tray every now and again, but it’s not quite what you want, so in the end you nip out and get a sandwich. Or the shop you go in towards Christmas, which all looks like it should fulfil all your needs, but in the end you can’t see one thing you want to buy.
I’ve written before about how many technology companies, and indeed darlings of the platform economy, benefit from a sudden release of potential value. Uber managed to link together drivers and passengers for example, missing out pesky and anti-competitive taxi corporations, as well as avoiding (until recently) niceties such as passenger safety.
These opportunities come from the law of reducing bottlenecks which arise as a technology increases in capability and reduces in cost. But they have a best-before date: that’s the nature of commoditisation. It’s also the nature of fashion, which is not a coincidence.
It’s not hard to remember what technology looked like in pre-Apple days: beige or grey, clunky and thick. In large part, it did not attempt to be cool: exceptions such as Sony differentiated against the blandness of, well, just about everybody. Apple made tech cool, at a moment when it was not the norm.
But, frankly, now it is. Apple’s USP is no longer that it is the coolest, nor that it is the most innovative or leading edge: the days of pulling a laptop computer out of a flat envelope are over. Of course it has ongoing credibility, but it is more Audi than Tesla.
Perhaps it doesn’t matter: with one of the world’s biggest corporate cash piles, it is not going anywhere soon (but then, that didn’t slow the demise of Sun Microsystems, when it became the dot in dot-bomb). However, the rationale upon which the organisation’s rebirth was founded is no longer differentiating. Which is food for thought indeed.
Just the one article this week.
The missing element of GDPR: Reciprocity
Blah Blah GDPR… As I was walking down the high street this morning, it suddenly clicked that yes, finally, stuff about people was being treated as such, and not just some sanitised view that we call ‘data’. Data is difficult to abuse, but with people it is more straightforward. So I welcome the ongoing debate and activity (which largely boils down to: what’s your privacy policy?), at the same time as questioning its scope and future-safety.
06-22 – Bulletin 15 June 2018 - On briefings, corporate transference and other tricks
Bulletin 15 June 2018 - On briefings, corporate transference and other tricks
I was sitting on a briefing call earlier this week when I confess my mind was starting to wander. Briefings, for the initiated, are where a company get to tell you things about themselves, and you get to ask interesting questions. Inevitably, they are designed to present things in a good light, so you can find yourself wondering about which bits have been left out. There’s some standard tricks that any briefing recipient needs to watch for.
1. Corporate transference
This is an old, familiar phenomenon to anyone working on the outside of an organisation. Inevitably, as a system of people develops and matures, it solves problems and reaches states of epiphany — that moment of clarity when everything gets worked out. This is sometimes followed by, “We must tell the world!” evangelism, whether or not the world already knew about it.
I remember a briefing a good ten years ago, by a software vendor (you can probably guess who) which had just worked out just how important IT security was. So, they proceeded to tell a room full of people that already knew this, in clear and unequivocal terms. Is vendor-splaining a thing? If not, it should be.
2. Challenge-opportunity
In a similar vein, organisations can be forced to address a weakness in their portfolio, offering or approach. “I know,” goes the mantra, “let’s not see it as a challenge, but an opportunity, right?” And so, out of the blue, there appears a great new service which is, great, yeah, just great.
Now, it’s not for anyone to put anyone else down, but this approach can appear more than a little disingenuous. Every now and then it becomes difficult to resist the urge to call B-S, though this response is frowned upon in a professional setting. And so we keep schtum, but we all know, don’t we?
3. Down-scoping
Case studies are always good value, as they make things real, bring the intangible to life. Behind the scenes of any case study activity are difficulties in getting customers to talk openly about their own problems, to appear allied to one vendor vs. another, and so on. The result is that examples, when they appear, are not reflective of the situation as a whole.
The result can be that the example is of some obscure bank in Nebraska (I choose any place guardedly, apologies to Nebraskans who don’t feel at all obscure), or a department of a larger enterprise that happens to have got its own act together. “We’ve enabled new business value in ABC retailer” sounds so much better than, “An initiative in ABC’s new, yet short-lived customer experience facility used our stuff.”
The result of the above is that the world can appear a wonderful place, at least for the duration of the briefing, yet the joy starts to fade just minutes after it ends, leaving the briefed wondering whether they heard anything useful at all. It also feeds a bigger challenge faced by the industry, which is: if we had all the solutions two decades ago (when I first started being briefed), why do the problems still pervade?
Part of the answer does come down to continued change, growing complexity and so on. But an equal measure needs to be placed at the door of our being a little too ready to accept the good news stories at face value. For sure, nobody wants to spend their working lives in a constant state of cynicism. Yet we remain unable to have any clear measure of progress, beyond processor cycles and storage volumes. Hm.
In other news, here’s a couple of articles from this week.
5 questions for… Electric Cloud. Whence DevOps?
Over the past few weeks, I’ve been taking a stack of briefings around DevOps as part of a report I’m putting together. The wood I’m trying to separate the trees from is, what’s so new now, compared to an awfully long time ago? Back when I was still literally a kid, software process people knew what configuration management and automation was; they also had tools and practices around operations, and ways of managing difficult conversations. So, if that’s not the topic at hand, what is? In this article, Sam Fell has some useful thoughts around the nature of complexity, which is a good place to start.
5 questions for… Nuance – does speech recognition have a place in healthcare?
I’ve been a speech recognition advocate since I used to tramp round fields with my dog, transcribing interviews using a hacked-apart laptop, a scroll wheel and a headset. Which is also a long time ago. I am, frankly, surprised we (including me) seem to prefer sitting in front of a screen, with all the RSI and back problems we incur as a result, alongside the inefficiency. Will speech recognition have its day? Who knows, but I hope so for these reasons alone. In this article I speak to Nuance about how it could make a difference in the (needy) healthcare sector.
That’s all for this week! Thanks for reading, as ever.
Jon
06-29 – Bulletin 6 July 2018. On methodologies: not the weapon, but the hand
Bulletin 6 July 2018. On methodologies: not the weapon, but the hand
I’m all for methodologies. Of course, I would say that – I used to run a methodology group, I trained people in better software delivery and so on. From an early stage in my career however, I learned that it is not enough to follow any set of practices verbatim: sooner or later, edge cases or a changing world will cause you to come unstuck (as I did), or the approach will reach a best-before point, which goes a long way to explain why best practices seem to be in a repeated state of reinvention.

I was also lucky enough to have some fantastic mentors. Notably Barry McGibbon, who had written books about OO, and Robin Bloor, whose background was in data. Both taught me, in different ways, that all-important lesson we can get from Monty Python’s Holy Grail: “It’s only a model.”

Models exist to provide a facade of simplicity, which can be an enormous boon in this complex, constantly changing age. At the same time however, they are not a thing in themselves; rather, they offer a representation. As such, it is important to understand where and when they are most suited, but also how they were created, because, quite simply, sometimes it may be quicker to create a new one than use something ill-suited for the job.

And so it is for approaches and methods, steps we work through to get a job done. Often they are right, sometimes less so. A while back, myself, Barry and others worked with Adam and Tim at DevelopmentProcess to devise a dashboard tool for developers. So many options existed, the thought of creating something generic seemed insurmountable…

… until the epiphany came, that is: while all processes require the same types of steps, their exact form, and how they were strung together, could vary. This was more than just an “Aha! That’s how they look!”, as it also puts the onus onto the process creator to decide which types of step are required, in which order. It therefore becomes important to understand both the steps and the reasons they exist.

With the above very much in mind (as I lifted it from my wrap-up), here’s an article from this week.

Five questions for: Mike Burrows of AgendaShift

While I didn’t get actively involved in Mike’s collaborative book-writing process (for a number of reasons, not least, I’m not really a practitioner any more), I did observe the process and I think he is on to something. Hence this article. In another recent conversation, Tony Christensen, DevOps lead at RBS, said the company’s goal had become to create a learning organisation, rather than transforming into some nirvanic state. True Nirvana, in this context at least, is about understanding the mechanisms available, and having the wherewithal to choose between them.

Extra-curricular: Plucking Different plays Gogol Bordello

In case it wasn’t already apparent, I’ve become a bit obsessive about packing as much in as possible before my sorry body packs in underneath me. To wit, in this song at least I find myself fronting a ukulele band as we deliver a new take on the gypsy punk song. File under: I won’t wear that shirt in the hot sun again. Also file under: no, I didn’t expect to be doing that either.

Thanks for reading, Jon
06-29 – Bulletin: The cognitive dissonance of customer centricity
Bulletin: The cognitive dissonance of customer centricity
“Do as I say, not as I do” - John Selden, c. 1654
“Do be do be do” - Baloo, 1967
Cognitive dissonance is a very human trait. Few, if any, could claim not to be guilty of it; in the main, we content ourselves with minor inconsistencies of behaviour, which we justify sufficiently to push out of the mind’s eye, forgotten before long. Similar dissonances apply in the working day of the technology writer, analyst or pundit, who invariably and repeatedly stumbles onto topics or framings that would be impossible to keep to, even if they were possible to achieve.
An obvious example is how technology is going to fix everything, at some point. Take any new or repackaged capability, start writing about it and before long, you will have an article or report that, if one just does what it says, will assure absolute success. That’s fair enough — after all, who wants to read something that says, “Umm, Artificial Intelligence. Well, if you want to do it, it probably won’t work for you, but why not give it a go, you could get lucky?” We have to be reasonably succinct, direct and dare I say it, prescriptive, otherwise we are not really helping, are we?
In general, we hope, we get it right. In my experience, we get it less right when we talk about things we don’t fully understand: how easy it is to miss some vital piece of information, which not only colours, but changes everything. I remember a few years ago, I was writing about a mobile solution for people working on building sites. In the process, I spoke to a couple of construction engineers and was very quickly put right: “You don’t understand,” said one, “Mobile phones are completely banned on site.” “What, completely?” “Yes, completely.”
Some mantras are almost impossible to get wrong, at least on the surface. For example: all enterprise IT should exist to deliver some kind of ‘business value’; the earlier you spot a problem, the easier it is to solve; and, of course, it’s all about the customer. I’m not picking this last one at random, as it has suddenly decided to be a significant element of a report I’m writing. The trouble with it is not that it is wrong (though like all absolutes, including this one, it can never be one hundred per cent true), but that we all know better.
Right now, the technology industry is at the eye of a storm of its own creation. Back in the day, it existed far apart from people, who would access data via green-screens (and who were given tech-subservient titles, such as data entry clerk, as a result). Today, technology is everywhere, infiltrating most aspects of our daily lives. But while we may be leaving the era of ‘computer says no’, we are a long way from a time when we, as people, call the shots.
The increasingly frequent examples of backlash against out-and-out data monetisation — as illustrated by California’s decision this week to bring in privacy laws — are symptoms of this. And, or indeed but, the strangest thing is that we all know that we are being held in thrall. All of us, from top CEOs to those living and working at life’s front lines, are aware. We are like the construction workers: we know the rules by which we live our lives, and we know when and where technology is failing to deliver.
Yet, still, we present technology, and business, like it can do no wrong, like it is enough to say that customers are an organisation’s most important asset, or that the customer experience is driving corporate strategy. These statements are not false in themselves, not generally, but they present an incomplete picture: more realistic is that, when we get things right, we find that customer-oriented measures increase.
This is not some idealistic view, far from it. Rather, it is the starting point for a hypothesis: if customer-centricity is as important as we make out, it should be possible to put a value on where we have come from, where we are now and where we want to get to with it; equally, we might be able to be more honest about where compromises have to be made, or indeed, what constitutes a blatant abuse of accepted social contracts.
It may just be the case that ‘delighting the customer’ really is the best way for most organisations to go about their business. If so, we can only make it true by accepting that we are a long way away from such an aspiration. And, what is more, we all know it.
No articles this week, as I have been deep in said report. I hope, instead, you will enjoy some feedback to a budget hotel owner.
Thanks for reading, Jon
July 2018
07-13 – On digital transformation, and when a useless term proves useful
On digital transformation, and when a useless term proves useful
I find myself drawn back into the term ‘digital transformation’ as I am starting up two projects on the topic right now (more about both very soon). As I’ve said before, the term is not very good. However, and as I am learning, its very existence is serving a purpose.
First off, however much organisations may want to look like they are emerging like a butterfly from a chrysalis, the reality is much more mundane. Given that even the smallest changes take a large amount of time, this should come as no surprise.
As anybody working in a big company knows, large-scale change just doesn’t happen. It can’t, any more than humans can transubstantiate and re-appear somewhere else. What can happen is either making existing things work better, or enabling new things to happen separate from the old.
In these contexts, digital transformation starts to sound a little bit like the allegorical lipstick on a pig. If it’s not wholesale change, is it really transformation as such? Well, no, but yes. The term may be inaccurate, but its purpose remains.
What the heck is he on about, I hear you say. Ah, I reply: based on the raft of interviews I have recently had the ‘pleasure’ to work through on the topic, I can confirm that at least some enterprises see it as important.
Why? Because conversations involving the term ‘digital transformation’ take a different tack to those about specific technologies. Even with disagreement about what it might actually mean, or whether it can be delivered upon, digital transformation is accepted as meaning business, not technology change.
As a result, the conversations lead to topics such as customer experience, business value, models and so on, rather than integration, scalability, operational management and other such more tech-y topics.
I’m not going to lie, I still don’t like it. But it is a boon that technology-related strategies and conversations can put the business, and/or its customers first. If it has to be called digital transformation in order to achieve this, I’m all for it.
Thanks for reading, Jon
P.S. No articles this week (did I say I was starting up two projects? :) )
07-20 – Bulletin 20 July 2018. On journeys to the cloud, and what to say when you have nothing to say?
Bulletin 20 July 2018. On journeys to the cloud, and what to say when you have nothing to say?
I’m all ears…
It’s not that big a secret why I dropped out of the analyst business for a while. Having spent over ten years being asked what I thought about everything and anything technological, one day I simply found I had run out of things to say. It wasn’t any big crisis, more a flummoxed shrug which came uncoincidentally at a time when everything, I mean everything, was going to “move to the cloud.”
We now know better, of course. A philosophical shift towards thinking about workloads rather than resources has been taking place for some time, based on an increasingly flexible architecture. The only reason you (still) have to think about hardware is because of its constraints, bottlenecks and work-arounds; take these away, either by having enormous availability or by clever orchestration, and the conversation can focus on the problem you are trying to solve, rather than what you use to solve it.
Should everything have just moved to the cloud, we would indeed now be in a world where IT infrastructure didn’t matter at all (hat tip: Nicholas Carr, who painted such a picture). For a raft of reasons however, nothing could be further from the truth. Technology is fragmenting and diversifying, just as you would expect anything to do when all constraints are removed: isn’t commoditisation the ultimate constrained environment? (It is also no coincidence that, as the conversation has moved more towards representing this increase in complexity, I have found myself re-joining it. But enough about me!)
So, what does one do when one has nothing to add? One shuts up, of course. Over the past few years I have spent a fair portion of my working hours ghost-writing for very smart, yet strangely less articulate members of the technological community. In doing so, I’ve learned an inordinate amount. Not only about technology and business, but also about listening.
Listening is hard. Not only because of the noise/signal ratio, but because of our individual perspectives, framings and indeed, fears and hubris: it can sometimes be the hardest thing in the world to pay attention to what people are telling you. At a recent event, a speaker was complaining to me about how stupid some of the questions were. I didn’t say anything, but I remain firmly in the belief that when someone says, “Can you explain better, I don’t understand,” that’s probably down to your inarticulacy rather than a fault of the audience.
Back in 1997, I wrote an article for my consulting colleagues about blockers to listening. As I’m a hoarder, I still have them so here they are:
- baggage - extraneous clutter in our own brains which distracts from the conversation in hand
- inner noise - the conversation sets off a train of thoughts which, though fascinating, prevent us from continuing to listen
- control - leaps of understanding about what the person is trying to say, missing his/her actual point entirely
- ping-pong - where a client’s point triggers a memory or an opinion, so we spend the next minutes looking for a suitable gap to express it
- display - where we use the conversation as a tool to express our own knowledge, ignoring the client’s subject matter and making him feel stupid in the bargain
- hidden agenda - where we ensure that the conversation achieves our own goals, forgetting to check that the client’s goals are satisfied.
We’ve all done them, me more than most probably. But over the past twenty years, I have at least learned that often, saying nothing is the best form of communication.
10 Reasons why Broadcom is buying CA Inc
Here’s an “article” for the week. I say “article” because I confess to have been tickled as to why a chip company might buy an enterprise software company. I couldn’t resist a bash at why that might be.
Extra-curricular: Society of Authors Annual Awards
The SoA is like one of those best-kept secrets: anyone who professes some kind of authorship can join for a nominal fee, make the most of its great legal and other services, and attend small events with luminaries of the writing world (Stephen Fry’s speech was worth the cost of entry). When seen as a collective, authors are not such a scary bunch but they generally share one characteristic: the ability to finish things. I will take this on board!
Thank you again for reading, as always.
Jon
07-28 – Bulletin 27 July 2018. Technology may be indifferent, but people are not
Bulletin 27 July 2018. Technology may be indifferent, but people are not
I’ve written before about the nature of technology to be neither good nor evil: it’s an easy thing to say, a bit like, “guns don’t kill people, people do.” Which is of course true, but guns make it easier. Whether they should be banned is also for people to decide: sometimes we see a certain facility as being too dangerous for public ownership, whereas we shrug and grudgingly accept others. “Cars don’t kill people, people do” is just as true, but only a minority is looking to ban cars on public safety grounds.
Sure, sure, heard it all before. But such justificatory denialism takes place at a much deeper level. One can watch Facebook’s share price plummet as it changes its practices, or indeed see similar companies go out of business, at the same time as accepting the platform for what it is. Disdain of social media may be merited, but at the same time, only a minority are switching it off. It’s a conundrum: if we really believe something is bad, why are we so slow to act?
There’s a reason I’m bringing this up. Our corporations are out to maximise the value they give to their shareholders: that’s a fact; so is the reality of social responsibility, green, pink or any other form of washing of the corporate image. So, on the first hand, there’s always going to be a tendency to put business profit over customer value: some take this to an extreme, but basically, if you don’t, you will go out of business. And on the second, by extrapolation, however much an organisation wants to do the right thing, it must first do the profitable thing.
We might grumble and rail about this, but we all know it can’t be any other way — for many, their jobs depend on it. So what, so what? So, we don’t really get over the hump of finding this a problem unless a real crisis hits: we go with the flow, based on the assumption that more good will happen than bad.
I have two bees buzzing in my bonnet about this. Today, I spoke to Ali Hadavizadeh, Program Manager at the Farm 491 AgriTech incubator — Ali is passionate about helping new companies innovate, in the name of sustainable and safe food production, and we concurred on how we as an industry and a population have been sleepwalking towards a crisis. Companies that look to maximise profit and deny environmental damage are rightly castigate-able, but we have a tendency to avoid paying too much attention to this…
… and meanwhile, retail organisations have valid reason to be testing all kinds of technology in the name of improving how they engage with customers. I’ve been involved in various recent studies which suggest, first, that the route to ultimate success is about genuine, long-term human loyalty as opposed to “what can we get out of whoever walks through the door”; and, second, that the continuing e-commerce wave distracts from this more idealistic view due to its tendency to cause a race to the bottom.
I’m not sure what the answer is, but I think it has a great deal to do with transparency. GDPR is certainly helping, as it forces organisations to say what they are doing with data (or indeed, change their practices, which is in part the cause of Facebook’s stock fall). It also links back to customer-centric thinking: the best route to profit is to give people what they value the most. One thing’s for sure: we all know what’s going on really, but in the majority of cases, we choose to ignore it as a collective. On our own heads be it, then, when things go wrong.
Apropos of which, here’s an article for this week.
Five questions for… the Thinaire platform
As I say in the article, if you want to innovate, you have to stand on the shoulders of anything you can find. In this case, Thinaire brings a bunch of retail-related capabilities to the table: it’s the platform, baby, but then it’s up to retailers, and indeed ourselves, to discern.
Extra-curricular: Never say never
It’s still Friday as I type this, but I am just back from playing a gig… I’ve just about come to terms with the fact I don’t need to pinch myself to prove it’s happening. More strangely, we seem to have picked up a sizeable audience in Myanmar and the Philippines: without prejudice to the above, we put a quick Facebook ad out to see what happened and were quite astonished by the level of interest. So who knows where that might take us! Meanwhile, here’s us playing Gogol Bordello at a local music festival. And why not.
August 2018
08-03 – Bulletin 3 August 2018. We are all data companies, and therefore analysts, now
Bulletin 3 August 2018. We are all data companies, and therefore analysts, now
It is perfectly natural for anyone to want to protect a domain they have spent so long building. All those skills, accumulated experience and expertise become part of who we are; take them away, and our life loses meaning (which, as Viktor Frankl will tell you, is a bad thing).
I am, of course, talking about the artificial, yet rapidly diminishing boundary between the analytical haves and have-nots. Case in point: industry analysts have built global businesses on the basis of being able to know more about what’s going on, or count more things, or simply have the time to do so, than anyone else. Such smarty-pants specialists exist in every industry, in every domain.
And their days are numbered. I’m not talking about the rise of Artificial Intelligence (AI). As a techie, and therefore someone you’d expect to say different, I remain to be convinced about the rise of the robots as in some way signalling the demise of everything else. I won’t reiterate old arguments, apart from to say that absolutes tend to architect their own demise.
Taking the technology-as-tool position, we need not speculate, as the truth is all around us. Consumerisation and augmentation, disintermediation and democratisation are all (over-blown, but nonetheless) terms describing the increasingly distributed nature of tech, and therefore of its impact. Put simply, it used to be for the few and will end up for the many.
Or put simpler still, anyone can run a survey these days. In a briefing this week, I found myself discussing a vendor’s research even as I wondered to myself whether I was out of a job. “Thanks for making me redundant,” I joked, cheerily. Then, I laughed a hollow laugh, put the phone down and sobbed hopeless tears.
I didn’t really, but you get the point. Meanwhile, marketing data companies are finding themselves overlapping (if not yet competing) with manufacturers, even as both software-as-a-service providers and membership organisations realise that behavioural insights are the most valuable thing they own.
Fact is, as lower levels of technology commoditise, we all journey up the data-driven pyramid of needs only to find that it wasn’t just us that had the same idea. So, will this leave us all competing over who’s got the best insights? I don’t believe so, in fact, I think the opposite will be true as we actualise data’s role to augment, rather than replace, what really matters in life.
Such as healthcare, and indeed, food production. To wit, here’s an article for this week.
Five questions for… Ali Hadavizadeh, Farm491
Coincidences are rife. I happen to live a stone’s throw from the UK’s only AgriTech incubator, at the same time as believing passionately in the need, and opportunity to help assure sustainable farming through technology innovation. Here’s an interview. Of course, this is another sword that can cut both ways (which is why I wrote last week’s newsletter).
In other news — Travel Forward conference 5-7 November 2018
I really must stop saying yes to things, but this one’s a bobby dazzler — I’m going to be Programme and Content Director for Travel Forward, the travel technology conference attached to World Travel Markets. We have a fabulous programme and some top speakers, and, well, who doesn’t want to see our travel experiences made better through tech? Watch this space for updates and feel free to share your own experiences!
As ever, thank you for reading, and all the feedback which is both appreciated and which helps steer this ship.
Cheers, Jon
08-10 – Bulletin 10 August 2018. Tactics trump strategy every time
Bulletin 10 August 2018. Tactics trump strategy every time
Business knows the answers. But can it stick to them?
I’ll keep this quick, I promise. I know, I always say that. But something struck me as I interviewed an old friend and colleague for a survey report I’m writing: we know what the answer is.
You know, that conversation you had, in the car or in the bar, about how to sort out the problem, take over the world or make lots of money. It wasn’t wrong then, and it isn’t wrong now, is it?
There’s the rub. Working out the answer is not the hard bit. Yeah, whatever, we know this, right? 1 percent inspiration, ninety-nine percent perspiration? Sure, heard it before?
But this is where it gets weird. Not only do we forget that we know what the answer was, we then carry on like we never knew in the first place.
To whit, a conversation I had with RBS’ Tony Christensen about learning organisations. It was always the right answer, just as agile, or customer-centric, or anything else might be.
To the extent that it gets dull to hear these terms trotted out. So, what’s getting in the way of, you know, just keeping them front of mind?
I think perhaps, that the default human tendency is tactical. Our reptilian cerebellum, left to its own devices, simply wants to eat, sleep and have pleasure. Every day we wake up and have to convince ourselves of a higher path.
So, in corporate culture we are not so much fighting against entropy, but our inability to delay gratification. Meanwhile, we hold entropy to account: put simply, “if you can’t explain to me how this will deliver clear long-term ROI, we’re not going to do it.” Followed by: “Now, lunch?”
This is not so much the sky-hook upon which all business failures can hang, as the plug-hole down which their potential will inevitably disappear.
In case you were wondering, this has a fantastic amount to do with technology. Any of the strategic topics or trends in discussion — insight-driven decision making, or digital transformation, or DevOps, or whatever — faces a constant battle against our all-too-tactical nature.
Meanwhile, we carry on doing things the way we’ve always done them, long after they pass their sell-by date, because habit trumps tactics. Food for thought.
Meanwhile, here’s an article for this week.
Seven lessons from writing the report, Scaling DevOps in the Enterprise
It’s never too early to move into a reflective mode. Here’s some thoughts about what I have learned, as I complete my DevOps report (due this month):
1. It’s not (just) about DevOps.
2. It is all about business value delivery.
3. Reality is the biggest bottleneck to DevOps.
4. Man, is there a crapload of DevOps vendors.
5. Cloud is cause, catalyst and now consequence of the DevOps stalemate.
6. Enterprises know where they want to end up, but are stymied.
7. Tech could start by turning some of that smartness onto itself.
Thanks again for reading, see you next week.
08-17 – Bulletin 17 August 2018: Why are we still talking about data integration?
Bulletin 17 August 2018: Why are we still talking about data integration?
New context, same challenges
I’ll keep this short (just as he says every week) but don’t you ever get that feeling of deja vu? I was having coffee with a head of digital technology from a major hotel chain this morning, when the conversation turned to the data integration challenges of digital transformation.
In case you haven’t had this conversation before, it goes something like this: legacy systems are a challenge; programmatic data access (e.g. via APIs) is a good help; but it’s all quite complex, isn’t it; and here’s a software vendor who is looking to solve the problems.
I cannot tell you how many times I’ve had that conversation. Indeed, right back when I first became an analyst in 1999, we would talk about Extract-Transform-Load, or component-based interfaces and web services; XML was going to change the world, and vendors like Constellar had it all sorted.
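If you have never sat through one of those conversations, here is roughly the shape of it in practice: a minimal extract-transform-load sketch in Python, where the file, field names and target are all hypothetical. The point is less the code than how recognisable the pattern still is.

```python
import csv
import json

# Extract: pull raw records from a legacy export (hypothetical CSV file).
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Transform: reshape legacy field names into the model the new system expects.
def transform(records):
    return [
        {
            "customer_id": r["CUST_NO"].strip(),
            "full_name": f'{r["FORENAME"].strip()} {r["SURNAME"].strip()}',
            "email": r["EMAIL"].strip().lower(),
        }
        for r in records
    ]

# Load: hand the cleaned records to the target (here, just a JSON file;
# in practice this would be an API call or a database insert).
def load(records, path):
    with open(path, "w") as f:
        json.dump(records, f, indent=2)

if __name__ == "__main__":
    load(transform(extract("legacy_customers.csv")), "customers.json")
```

Swap the CSV for an API and the JSON file for a data lake, and it is essentially the same conversation we were having in 1999.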
It does beg the question: have we really progressed at all? The answer is, yes, of course, we have, but we keep making things even more complicated, in the hope that, at some point, we’ll just supersede all that old stuff. What can possibly go wrong?
Onward, upward, and here’s an article from this week about the UK Government’s Making Tax Digital initiative, because digital’s going to make everything better, isn’t it?
Thanks for reading and all the best, Jon
08-24 – Bulletin 24 August 2018 - We are porcupines, in a hindsight-oriented world
Bulletin 24 August 2018 - We are porcupines, in a hindsight-oriented world
We are under attack. We spend our lives in deliberate, naive ignorance of the thousands of dangers we face, in case we scare ourselves rigid, thereby rendering ourselves completely, and counterproductively, useless.
At the same time, we are constantly calculating risk, like tennis players watching the trajectories of multiple balls and somehow managing to swipe them away before they hit. As a race, we have proved ourselves immensely good at this, by the fact that we are still here.
Which makes it all the stranger when we are faced with a new cyber-related issue. I use the term “cyber-related” as it’s not strictly security, or privacy, that is always involved. Case in point: the continued attempts by social media giants to get their houses in order.
When something new happens, we act like porcupines on the (super)highways of the Information Age, somehow confident that the protections we have developed over aeons will continue to serve us. In the case of political interference and behavioural manipulation however, our in-built mechanisms are clearly inadequate.
From a vulnerability perspective, what took place is (and continues to be) child’s play: identify, through a process of repeated testing, what is most likely to get a reaction from a person, then do that thing. However smart we think we are, our inability to do nothing when provoked has been our undoing.
(And, if I’m being too general in the above, let me be clearer: it’s the “like” or the “retweet” button that we just can’t resist clicking).
Over the years, I’ve talked about security risk being more akin to permeability than any single big nasty. Bad things continue to happen, like waves buffeting the harbour wall: that is their nature. We can keep our guard across areas we understand (not clicking on spam email for example).
It appears, however, that we can’t do the same against areas we don’t (yet) know to be bad. A new attack surface, when it appears, does so largely unprotected. For the past three decades and probably longer, it has been thus.
Which is where it gets strange. Simply put, we don’t have law-making mechanisms that take this into account. We fall down with horror, pick ourselves up and carry on with our lives, as we look to how existing laws and compliance frameworks might deal with this ‘new’ situation, all the while never dealing with the root cause.
And thus we continue to create locks for stable doors, long after the notion of a stable has been superseded several times over. I’m not sure whose interests it serves, beyond anyone that wants to profit from the discrepancy. But, on we go.
Foreword to Smart Shift: From Kibish to Culture
No article this week (I’m waiting for feedback) but in other, apropos news, a couple of years ago I wrote a book about how technology was changing how we need to think. It was never published, but has been languishing on my hard drive. Ironic that I didn’t want to start it as it kept going out of date; as things now stand, it was pre-Trump, pre-Cambridge Analytica and pre-GDPR so is already well past its best before.
At the same time, as I recognised how quickly things change, across the writing it morphed into a history of technology’s impact. I’ve decided to put the whole thing online, both for posterity and potentially, as a foundation for some future work. I’ll be doing this over time but for now you can read the foreword, here.
08-31 – Bulletin 31 August 2018. Where the heck did GDPR go?
Bulletin 31 August 2018. Where the heck did GDPR go?
Moving from ‘the’ thing to ‘a’ thing…
If anyone wants to see the inner workings of the tech industry writ large, look no further than GDPR. No, I’m not talking about the need for good data, etc, but more about how much of a difference marketing dollars make.
One minute, it was the topic on the tip of every organisation’s tongue; the next, it just wasn’t. It just vanished, from the press, from web sites, from the tech dialogue. It’s not just me thinking this (beware the anecdotal evidence); Google Trends shows it down to 10% of its May-time peak.
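That figure is easy enough to sanity-check for yourself: Google Trends lets you download the interest-over-time series as a CSV, and the rest is a few lines of pandas. A rough sketch, assuming a hypothetical export file and allowing for the preamble rows the download usually carries:

```python
import pandas as pd

# Hypothetical "interest over time" export from the Google Trends UI.
# The file typically has a short preamble before the header row; adjust
# skiprows if your export differs.
df = pd.read_csv("gdpr_trends.csv", skiprows=2)

interest = df.iloc[:, 1]   # second column holds the 0-100 interest score
latest = interest.iloc[-1]
peak = interest.max()

print(f"Latest interest is {latest / peak:.0%} of the peak")
```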
Are we to surmise that it’s a done deal, that every organisation now has a watertight policy in place, that chief compliance officers are being made redundant? Of course not: GDPR is still there, right where the DPA was before it.
No; rather, it no longer has the fat lens of the marketing budget behind it. The system we have works like this: pick a topic, talk about it at your events and across your content streams, assign a PR agency to engage with journalists, and lo and behold, it becomes the most important thing in the world.
Particularly if your competition does the same. Tech companies, consulting and law firms all know the system works, which is why they do it. Of course, you need a valid reason to bring it up: for a provider, this is most likely to be that you can make money by talking about it, or that you might lose money if you don’t.
Suffice to say that some vendors I’m speaking to, even those big ones, don’t want to talk about GDPR anymore. Some still do, because it suits their needs: they’re not wrong to do so. If you’re a hammer manufacturer, it’s a good idea to talk about nails.
At the same time, we punters need to be clear on one thing: a topic of discussion should not be seen as a priority just because it is being discussed. It’s what turned me off from cloud in the first place: nothing wrong with it as ‘a’ thing, but it was never ‘the’ thing.
On a similar note, it’s why I smile when I see complex topics being treated as simple, even when everyone knows they are not. Such as DevOps, for example. With this in mind, here’s an article for the week.
Five steps to delivering DevOps at scale
DevOps is one of those fantastically vague technology trends: it’s not a product or service, it’s not really even a methodology… but at the same time, it is possible to get it wrong. In this article, I distil down five things I learned from Puppet’s Nigel Kersten, in a recent webinar. Here to help, I hope!
In other news… something novel
No, it’s not Paganini, but something else. Now at 60K words. Watch this space.
This is the 32nd edition of this newsletter, which certainly feels like quite a thing. Thank you for sticking with it.
All the best, Jon
September 2018
09-07 – Bulletin 7 September 2018. A computer that says yes, who wants one?
Bulletin 7 September 2018. A computer that says yes, who wants one?
I have been doing a lot of interviews recently with a group I will loosely call ‘digital leaders’, that is, people responsible for making technology make a transformative difference (as opposed to just meeting a need). I know, that already sounds like it’s vanishing up its own rear, but they are genuinely making, or have made, such changes to their businesses. And this observer has been learning a lot from them.
As discussed on a previous occasion, the ill-understood term digital transformation does have a place, as it moves the conversation from under, to above the bonnet (or hood) of the car (or automobile). The more I think about it (and I hold my hand up as someone who is working through this very process), this is about reframing our understanding of technology as something that enables, as opposed to something that responds.
Now, then. This is tricky stuff for me of all people to write about, particularly given my reputation for waffle. “Nice stuff, Jon, but can we make it a bit more crunchy/specific/fact-based/etc?” said just about everyone I’ve ever worked with, and rightly so. Something that enables vs. something that responds? Or, put more bluntly, what on earth is he on about this time?
To make it a bit more crunchy/specific, I’m going to borrow from a call I had today about how our use of technology is moving on from systems of record, surrounded by processes. We are asked for information in a certain way and in a certain order, because that’s what the computer tells us it needs. And, largely, we are so overwhelmed with the wonder and magic of it all, we go along with it, be it filling in our tax returns, applying for a mortgage or booking a holiday. We accept, like the digital serfs that we are, pulling technology-powered turnips out of the frozen ground with unspoken gratitude.
But this is not how things are going to be. Rather, we will be allowing data to flow to where it is needed, with interfaces that enable us to do the things we want to do. I don’t need to say this as some kind of futurist, as it’s already happening: those upstart online companies have got to where they are by using this one, simple trick, of allowing the user, not the computer, to define the process, moving the tech into a subordinate role. Which tech is perfectly happy to occupy, if it is programmed to do so.
I’m not sure what else to add, other than to acknowledge that we are nowhere near this becoming the norm. As an example, I mentioned tax — some work I did last year involved interviewing government officials, who saw tax-as-you-go as a viable alternative to current (data entry based) systems. We haven’t even begun to understand just how profound a difference it will make to our lives, when this shift takes place, in tax, in retail, in finance, in healthcare: even ‘modern’ tools like Facebook will appear linear, clunky and old-fashioned as a result. (Oh, wait, they already do).
It’s going to happen, either over a period of time or in an explosion of change. Sure, there will be governance issues, privacy challenges, problems of misuse and the (continuing) potential for global destabilisation. All of which is exactly why we need to talk about it now. A storm is brewing: however much we rely on “computers that say no” at the moment, we are already moving away from the place where they are a burden rather than an enabler. And we don’t have to look very far: as William Gibson once noted, the future is already here, it’s just not evenly distributed. And, for those who want it, it’s there for the taking.
09-16 – Bulletin 14 September. On mobile apps, engagement and human nature
Bulletin 14 September. On mobile apps, engagement and human nature
When vertical becomes horizontal
How quickly we forget. Or not that quickly, in the case of one large vendor, who (as of last year) was still looking at how it could somehow stave off the threat of consumerisation. That’s where employees have mobile phones and other paraphernalia, in some cases better than corporate kit, and so they start setting expecta… what, you mean, you knew already?
The world has moved on, faster than you can say “bring your own device strategy.” Even as I chatted today, with the head of a big department at a major consumer-facing organisation, I realised how old-hat my phrasing had become. Ostensibly, the conversation was about mobile apps, but I was quickly put to rights.
Not very long ago, this was how we articulated things: “You need an app.” Of course, we all knew that the phrase was a consequence of how things stood — nobody needs an app as an end in itself, but having an app is a reflection of how well an organisation understands its user base. Simply put, if everyone is on a mobile device, best to adopt the interface du jour.
Trouble is, that’s no longer true (if it ever was, even if transient). We’ve moved from not having mobile devices at all, to them becoming the sharp end of customer engagement, to… to a point where they are recognised as just one interface among others, including face to face, phone, web site and whatever else comes along.
Which brings me to the point: it was never about the device. More, as was pointed out, it’s about delivering a service at the point of need: if the best way to reach someone is via a phone, or a retina display, or a billboard, or a customer service rep, so be it. The smarter you can get at understanding the notion of engagement, the more engagement you will have.
Which is what it’s all about. A few years ago, when I became bored of talking about technology for its own sake, I turned my attention to the human. Either, I thought, tech will make us something we are not, or it will make us something more than we are, or it will augment our lives in some way. Whatever it does, I decided, the latter category — augmentation — is the most likely.
In other words, and at least in the short term, the goal is to consider the all-too-human end before the means (for better or worse: “We are only… human.”). When I think about technology’s impact, in my head I like to position it against how we behaved when we lived in straw-hutted villages, as some still do. However much you add or take away, augment or diminish, it’s all about people.
I was about to “go off on one” about the future, but we should remain grounded in both the present, and the past. We are seeing so much change, but I am yet to see any deviation from what we might call human nature. While this might come as a disappointment, it is perhaps fundamental.
09-21 – Bulletin 21 September 2018. The first law of commoditisation
Bulletin 21 September 2018. The first law of commoditisation
Technology is still blooming hard.
Most careers, for better or worse, start in actually doing things, progress through telling people to do them, and end with deciding how they should be done. I know a few people who, having followed such a course, decided management wasn’t for them and went back to a hands-on role, a move which is either called a mid-life crisis or common sense, depending on your perspective.
For a techie, it can be easy to lose touch, to forget the whys and wherefores of what makes all that clever digital stuff work. We are told it is simple, and very often it is: the world of RESTful interfaces (a.k.a. exchanging information between programs in neatly formatted plain text), cloud-based services and so on have taken away a great deal of the pain.
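To put a little flesh on that parenthesis, here is a minimal sketch of a RESTful exchange using nothing but Python’s standard library; the endpoint and field names are hypothetical, but this is the “neatly formatted plain text” in question.

```python
import json
import urllib.request

# Hypothetical RESTful endpoint: fetch a resource and get JSON back.
url = "https://api.example.com/v1/customers/42"
request = urllib.request.Request(url, headers={"Accept": "application/json"})

with urllib.request.urlopen(request) as response:
    customer = json.loads(response.read().decode("utf-8"))

# The "neatly formatted plain text": just a dictionary of named fields.
print(customer.get("full_name"), customer.get("email"))
```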
When things go wrong however, they can be just as awful as ever. Why? Largely because there’s too many possible options. Older operating system versions. Deprecated interfaces. Ancient libraries to retain compatibility with.
We can talk about a lack of skills, but the smarts required to understand such a morass are based more on some Sherlockian attention to detail than specific expertise. Problem solving is detective work, with root causes sometimes a long way from symptoms.
Case in point: today I managed to get a small PHP package running on my web site. Why wasn’t it working, I hear you ask: well, I say. First, after much hair-pulling and message passing with the help desk, I email to say how it seems strange that the PHP version isn’t changing when I flick the option in the control panel.
Ah, they say, that’s because you are pointing at the wrong server. We migrated you a while back, but (and this part was silent) neglected to tell you. Suddenly I have an explanation as to why nothing I did (about getting a certain library installed) made any difference.
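In hindsight, the sanity check is simple enough: ask where the domain actually resolves to, and what the responding server says about itself, before trusting anything in a control panel. A rough sketch, with a hypothetical hostname standing in for mine:

```python
import socket
import urllib.request

host = "www.example.com"   # hypothetical: put your own domain here

# Where does the name actually resolve to?
print("Resolves to:", socket.gethostbyname(host))

# What does the responding server say about itself? X-Powered-By often
# reports the PHP version, if the host chooses to expose it.
with urllib.request.urlopen(f"https://{host}/") as response:
    for header in ("Server", "X-Powered-By"):
        print(f"{header}: {response.headers.get(header)}")
```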
This wasn’t the end of the story (I won’t bore you with the extra security features that broke everything, nor the absence of a directory for session information) but, well, it was an interesting reminder of what we might call the first law of commoditisation: nothing is actually commoditised; all that we’ve done is stick a layer over the top, a facade if you will.
So, work above this layer and all will be well. But woe betide should you delve beneath, either deliberately or because you fall down some technological fissure. Far from everything becoming simpler, it continues to become more complex. It’s enough to drive a systems engineer to drink, and speaking of which:
Smart Shift: There is truth in wine
As mentioned last week, I shall be uploading my book to the web, section by section. This week, a story of sensors, drones and data articulated through the medium of wine. I’ve also added the foreword. Any and all feedback welcome: either this will become a living document or (more likely) I will learn from it and move on. Which is already a good thing.
Strategy & Technical Considerations for Successful Enterprise DevOps
Finally, finally, finally, my report on enterprise DevOps has been released. I have learned a great deal, not least that we are still at the starting blocks — not least because of all the complexity mentioned above. If you don’t want to read the words, I recommend that you check out the picture. Meanwhile, here are some of the articles I wrote along the way.
Is Enterprise DevOps an Oxymoron?
The Five Cs of DevOps at Scale
Seven lessons from writing the report, Scaling DevOps in the Enterprise
Five steps to delivering DevOps at scale
Could DevOps exist without cloud models?
Extra-curricular: The Day of Gloaming
While the manuscript of my novel is with a reader, I thought I’d write a short story. The tendril-laced premise literally came to me in a dream. Enjoy!
Finally, every week I get feedback from a handful of people, positive and observational, which I look to feed into my thinking. Thank you for reading, and keep it coming!
Jon
09-28 – Bulletin 28 September 2018. What are analysts analysing, anyway?
Bulletin 28 September 2018. What are analysts analysing, anyway?
Differentiating between philosophy and technology
Honour is understandable between thieves. After all, if I start stealing from you, what’s to stop you stealing back? And, similarly, few analysts go out on a limb to question or correct their peers. They might question back, and then, all hell would break loose. Of course it happens, on occasion: most often a smaller analyst firm will call out one of the industry behemoths, in the knowledge that, frankly, nobody is listening that hard anyway. It’s like attacking HMS Victory with a pea shooter.
I’ve done it myself, on occasion, more by virtue of being unable to keep my mouth shut than any deeper motive. Over a decade ago, when IDC announced to the world that we would run out of storage, I called BS: the very notion was impossible (and showed a deep misunderstanding of resource management). We haven’t run out of any such thing, for the record. Nonetheless I do feel vaguely embarrassed at having spoken up. Just like that time at EMC, and that time at… oh well, we are where we are.
At the same time, I wonder if we do enough cross-scrutiny. The trouble with IT industry analysis is, well, where to start? Much of it comes from losing a sense of objectivity. Technology products and services generally exist to solve a problem: the better we can define the problem, the easier it is to create a solution. For all the Three-Letter Acronyms and terms we churn out, therefore, each one offers a touchstone, a rallying call. We saw it with ERP, then CRM; with ORBs and ESBs, with SDN and NFV. What they mean matters less than the fact they create a space.
Sometimes, however, this process goes wrong, most often when a trend emerges from the front lines rather than being triggered or catalysed by providers. We saw this with open source and SOA, then Cloud and BYOD; more recently we’re seeing it with DevOps. The models analysts use to define a space fall short, as does their philosophical basis: “What’s the market for cloud?” is one example of how traditional analyst thinking comes unstuck (answer: if you’re asking that question, you really don’t get what’s going on).
This does lead to some fascinating disconnects. It’s the analyst’s job to be the trusted third party, to make the complex simple and help guide decisions. Yesterday, I was in a meeting with a pretty big computer company, talking about a topic currently close to my heart. “That position aligns with our thinking, but we’re not hearing any other analysts talk about it like that,” I was told. Again, the topic is irrelevant (as is the fact I know several analysts that could help out). More telling was the fact that the analyst community as a whole wasn’t helping.
For sure, I, the big company and a few other people I know could all be completely wrong, but I don’t think that this is the case. Rather, in this case, the topic (OK, it was DevOps) is seen as an end-point, and any products associated with it are a means to that end. Nothing could be further from the truth: for sure, it’s a good place to get to, but it is merely a point on a bigger journey. Let me put it this way: you won’t become any better at delivering innovation quickly by implementing DevOps tooling. That would be like buying some breathing apparatus and expecting to become an expert diver overnight.
Of course, it’s never a good thing to buy some tech and believe it’ll change things, but in many cases, it does get a result (remember getting your first mobile phone?). And often, tech does need a rallying call, to help explain why it’s a good idea; it’s also a lot easier to talk about something if it has a name. However, just because the industry works best with bandwagons, we shouldn’t assume everything needs one; in some cases, it can be counterproductive.
The bottom line is, where technology exists to support a philosophy, we should act very differently to where a new way of working falls naturally out of buying a particular piece of tech. Sure, there’s a spectrum, but we should be able to differentiate between the two.
Smart Shift: Brainstorming the future
The next section in Smart Shift is now up. It covers Robert Boyle and the Royal Society, unicorns and VCs, smart belts and hype cycles. Feedback welcome, as always!
October 2018
10-05 – Bulletin 5 October 2018. The future isn’t going to happen, and this is why
Bulletin 5 October 2018. The future isn’t going to happen, and this is why
One of my disappointments about working in IT is that I wasn’t there at the beginning. No, I don’t mean offering tea to Babbage and Lovelace, though that would have been quite the coup. I’m thinking more about the early days of silicon, when things really started to heat up.
Having mentioned Babbage, his situation does illustrate a point which persists to this present day: the Information Age has been defined as much by what isn’t possible, as what is. I’ve previously talked about threshold theory, in that some things happen simply because they become possible, where before they weren’t.
In Babbage’s case, while his plans for a (second) calculating machine were valid, he couldn’t afford to build it — the amount of work required (and therefore cost) to engineer a machine of such complexity was beyond his fundraising ability. It’s been built, subsequently, and it works: you can see it in the Science Museum.
Similarly, I remember speaking to Augustin Huret, just after his (software-based) algorithmic inference capability had been acquired by consulting firm BearingPoint. To the point: his algorithms had originally been devised by his father, in the 1970s, but no machine was powerful enough to run them. When Huret Jr came on the scene, running them was still too expensive. Today, it costs a few hundred quid a pop.
My point isn’t about threshold theory; it is, however, about the fact that we already get much of the maths, we’re just waiting around for infrastructure that can support it. Not a dig, just an observation. To extrapolate, yes, we will be able to apply such algorithms more broadly in the future. But, unless someone invents a new branch of maths, we will also largely be constrained by them.
What I’m fumbling with is my strange lack of conviction about the singularity, SkyNet, the notion that we will all be creatures of (potentially diabetic) leisure as robots do everything for us, and so on. There’s something missing in the equation, like that flow diagram which incorporates “insert something magical here.”
At the moment, AI is operating in two dimensions. The first is deep: for a well-bounded domain, such as voice, image, or other expected-pattern recognition, we can expect linear improvements in capability, perhaps to a point where it will be difficult to discern whether we are being listened to by a computer, or by a human. For simple, code-able instructions, this sounds quite the thing, if that’s your preferred form of communication.
Meanwhile, we’re looking to go broad - “Give me a data set and offer me insights about it.” Algorithms can offer insights in one of two ways: either they work on the basis of anomalies, looking for unexpected messages in bottles across an ocean of flotsam; or they require seeding with some kind of domain knowledge, context or hypothesis which can then be tested. Some companies (I was talking to Google yesterday) are getting very good at this stuff.
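(For the technically inclined: the ‘anomalies’ half of that is less mysterious than it sounds. Below is a minimal sketch of my own — the numbers and the threshold are invented for illustration — that simply flags whichever values sit a long way from the rest of a data set. Real systems use far richer models, but the principle is much the same.)

```python
# A toy sketch of "insights via anomalies": flag values that sit far from
# the rest of the data set, using nothing cleverer than a z-score.
# Purely illustrative; the data and threshold are made up.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

daily_bookings = [102, 98, 105, 99, 101, 97, 350, 103]  # invented numbers
print(find_anomalies(daily_bookings))  # -> [350]
```

The hypothesis-seeded half simply swaps that “far from the mean” test for whatever question the domain expert actually wants answered.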
And yet, as I say, something’s missing from the equation. I don’t know what it is, but I am starting to understand why it is missing. Simply put, on one side we have mathematics, algorithms, patterns and rules; and on the other, biology, hormones, irrationality, caring, identity. What algorithms can’t do is be a bit dumb or flighty, go the extra mile or wait around long after it feels pointless. Nor would they want to, if they had a concept of wanting in the first place.
I’m saying this as I think it’s a really important element of what we will see in years to come. Computers may be able to generate mood music that pushes the right buttons: all algorithms have to do is A/B test every combination and tweak their variables as they home in on the right ones. In other words, they can do what computers can do, sometimes with world-changing consequences.
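(Again for the curious, that A/B-and-tweak loop can be sketched in a dozen lines. This is a toy of my own making — the click rates are invented — showing the “mostly exploit, occasionally explore” pattern by which an algorithm homes in on the better-performing variant.)

```python
import random

# Toy "A/B test and tweak" loop: mostly serve the better-performing variant,
# occasionally try the other one. The click probabilities are invented.
random.seed(1)
true_click_rate = {"A": 0.04, "B": 0.06}   # unknown to the algorithm
shown = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}

def observed_rate(variant):
    return clicks[variant] / shown[variant] if shown[variant] else 0.0

for _ in range(10_000):
    if random.random() < 0.1:                       # explore 10% of the time
        variant = random.choice(["A", "B"])
    else:                                           # otherwise exploit the best so far
        variant = max(["A", "B"], key=observed_rate)
    shown[variant] += 1
    if random.random() < true_click_rate[variant]:  # simulate a user response
        clicks[variant] += 1

print("B was shown", shown["B"], "times out of 10,000")
```

Which is all it is: counting and nudging, done very fast.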
All of this to reinforce that we may be heading towards an augmented world, one in which mundane and mechanical tasks and decisions are automated, and we have extra information to work with. But the notion that we are heading anywhere more futuristic than that remains highly unlikely, in my opinion. To wit: either I am wrong, in which case we need to start creating legislation around a post-singularity, artificially intelligent world, or I am right, in which case we need to start planning around notions of algorithmic augmentation.
Even if we do arrive at some post-singularity world at some point, the chances are that we will have already spent several decades working through this far less exciting yet equally risky scenario: we should, therefore, plan for it. This means not being distracted by media-friendly notions such as ‘robot rights’, nor dwelling (as we do) on current issues such as privacy (as it is currently presented). Complacency is not an option: the world of 5-10 years hence needs a whole new set of laws and norms, based on what algorithmic augmentation brings.
Here’s an article for this week.
Travellers know what they want: all we have to do is listen to them
With my programme-director-for-travel-forward hat on, I’m always amazed at how easily we switch from one psychology to another. The ill-fated Stanford Prison Experiment is one case in point, but we can look closer to home — not least how we create vs how we use technology. In this article, I propose the idea of listening to our travelling selves, rather than trying to second-guess the notional needs of others. I know, crazy, right?
What is the smart shift? And the incredible shrinking transistor
In the final section of Smart Shift’s first chapter, I summarise the book’s purpose — “as the transcontinental train of change hurtles on, is there any handrail that we can grab? The answer, and indeed the premise for this book, is yes.” As it didn’t seem fair to leave you on quite such a cliffhanger, I’ve included the first section of the next chapter as well. Which, not coincidentally, covers Babbage. But not Ancient Greece: for that, you’ll have to wait until next week.
10-13 – Bulletin 12 October 2018. Innovating fast and slow
Bulletin 12 October 2018. Innovating fast and slow
I can’t remember who originally said it, but it’s true that we have a two-track industry: sometimes change comes very quickly, and sometimes, it takes a lot longer. There are multiple dimensions to this, not least our inability to spot a good idea from a duff one.
Google Glass, for example, or more broadly 3D TV, were touted as game changers. The former was quite likely an idea before its time, and the latter, well, perhaps we simply didn’t need it. That didn’t prevent multiple global corporations pouring literally billions into the concept, however.
And meanwhile, we have the success stories. Apple did not invent the tablet computer, but boy, did that Jobs chappy know how to drive home a point. He was right: previous incarnations of similar products simply didn’t cut the mustard.
Today, Apple products still largely command a premium, even though other companies have caught up and, in many cases, surpassed their capabilities. Apple broke the (Wintel) mould, set the bar and freed up a lot of innovation-led thinking.
All the same, Apple happened to be in the right place, with the right ideas, at the right time. Last week I wrote how maths had been waiting for infrastructure to catch up; in this game, it can be more important to catch a wave than to ‘re-invent’ the surfboard.
Individual innovations happen at a moment in time because they can: right now, there’s a queue of great ideas biding their time, founders setting out their stalls and hoping they don’t run out of money before the wave hits. If they do, don’t fret, hopefully the VCs will back the next great idea…
Meanwhile, many industries appear considerably behind the curve. In fact, however, they both are and aren’t: in travel, for example, of course we will one day end up with some kind of micro-charging mechanism based on what we do, rather than what we book. The fact this doesn’t exist yet is because it can’t, not yet.
My old colleague Martin used to tell the story of Michelangelo: when asked how he carved the Statue of David, he replied, “I simply removed all the stone that wasn’t David.” We would do well to think about the future in much the same way: take away the challenges, and let it unfold naturally.
Smart Shift: The world at your fingertips
No articles this week, but here’s a book section. Starting with Darius the Great and ending with Peter Gabriel, this latest extract from Smart Shift covers the explosion of networking and communications.
And finally, if there’s one thing this newsletter seems to have proven, it’s that I am unable to write something short. Nonetheless, this is the fortieth I have written, somehow. Not quite sure how that happened, but here we are.
As ever, thanks for reading.
Best, Jon
10-22 – Bulletin 19 October 2018. On technology, the nature of work and long haul flights
Bulletin 19 October 2018. On technology, the nature of work and long haul flights
Augmenting all kinds of intelligence
By the time this lands in your inbox, I will have been on and off an aeroplane, having been to a conference (GitHub Universe) in San Francisco. I’m glad I went, but I’m also reminded of what put me off long-distance travel a few years back — you can pretty much write off a week.
But wait. How can I be glad I went, if at the same time I’m writing off a week? I didn’t deliver any reports, for sure. At the same time, I learned, I engaged, I advised a bit, all reasons why I do this job. I was not alone: 1,500 people came from multiple countries to do the same.
There’s an easy point to make here, which is that we still need physical presence to get certain things done, at a certain level. Frustrating as it may be for video conferencing providers, you can’t beat a face to face meeting. We know this; more important is what it means for the future nature of work.
Cutting to the chase, we’re told many jobs will disappear due to AI/automation/disintermediation/cloud/<insert buzzword here>. We shouldn’t be surprised about this — whole businesses are disappearing, taking their jobs with them.
At the same time, whole new businesses are appearing and with them, new types of work. Data scientists, knowledge workers, programmers and engineers are exponentially in demand, even as transactional tasks are superseded or automated away.
But more than this. A fear could be that only ‘smart’ people have a place in this brave new world: achieve post-degree-level status or find yourself consigned to the railway arches. A future for the educated few, and not for the broader populace.
But then, we have the fact that I, and millions of others, are travelling huge distances in order to get the job done. Human interaction cannot be automated away, even if it can be made more efficient. Which means people will have to be, and stay good at, well, being people.
It has been written that nine different kinds of intelligence exist. Each of these can be augmented in some way through technology, but a mix of personalities will continue to be what makes the world go round. I have a plane ticket to prove it.
With the above in mind, here’s a couple of articles for the week.
The best practice game changer that is GitHub Actions
First, a view from the conference. I’m always wary of seeing any new release of a product as a ‘game changer’ but a couple of factors play in the favour of this one: first, that 31,000,000 developers have been given the ability to build development workflows; and second, that there’s no particular limit on what these workflows can look like. I have a feeling demand may go right up to the level of supply.
The third journey: the travel industry may not be transforming, but it is certainly transitioning
And second, a topical one, on the nature of travel. I am loving speaking to travel providers across the board about how their organisations, and the industry as a whole, are changing. “The real innovation is coming from those thinking about what people want to do, as opposed to where they want to stay or how they want to travel.”
10-26 – Bulletin 26 October 2018. The world beyond the bandwagon
Bulletin 26 October 2018. The world beyond the bandwagon
It seems to be a human truth that we need touchstones, things that people can rally around. The tech industry is no exception, indeed, it could be argued that industry analysts have ‘touchstone lifecycle management’ baked into their business models. Gartner presents its hype cycle like something that it observes, but in reality the company’s magic quadrants (and Forrester’s waves, and so on) are a significant factor in characterising, therefore making acceptable and confirming the existence of, new technology areas.
Once this has been done, once the bandwagon comes shimmering into life through sheer force of rhetoric, everyone and their dog will jump on board. It doesn’t really matter whether the theme, trend or capability is absolutely right: those vendors whose products fit its definitions will crow, and others will claim a level of differentiation (cf Geoffrey Moore. “Unlike other vendors of <insert technology here>, the Acme solution enables…”).
Meanwhile, analyst firms large and small then have something to advise upon. “It’s not as simple as that” offers a reasonable starting point for a lucrative career in explaining what the heck the industry is on about this time. And PR companies and journalists have a new, interesting topic of discussion to replace the stale bandwagons of yesterday, a new new thing to supersede all those old new things. It is what it is.
As they are embraced and enhanced, some touchstones gain an almost transcendent state. Best practices and training courses emerge, buzzwords take form and are adopted, people become experts in the field. But wait. This isn’t taking place from a standing start, prior to which these were just fields. For something to become the next big thing, it needs to have already achieved some kind of critical mass.
Sometimes a vendor will attempt to ‘create a space’ but this takes time and money, and success is not guaranteed. More likely is that sufficient organisations have already adopted a version of the capability under discussion, and find themselves suddenly thrust into a technology leadership position. To take a specific example, we saw it when all those users of network event management systems discovered they had been doing something called IT Service Management… which then became Business Service Management.
It can be a little befuddling to find yourself a leader in the field when last week you were just getting on with the job. Even more so when all those home-grown systems, practices and processes, which worked as far as they went, become not quite as good as the brand new gold standard.
For example, if organisations developing software find their methods misaligned, in name or deed, with the ‘standards’ imposed by the (say) ‘Agile Industrial Complex’ (hat-tip: Martin Fowler), they can hit a challenge. I remember working on a report years ago, which drew the scorn of at least one industry luminary for suggesting any way could exist other than the recommended way (even though our research showed otherwise). My favourite comment (I paraphrase) was, “Who even is this guy? He doesn’t come to any of our forums!”
The bottom line is, we are working in an infinitely complex and constantly changing industry. No absolutes exist, but every now and then we do arrive at definitions we can all agree upon, even for a short while. We should embrace these for what they are, not as a final answer but a moment of respite, of calm in the tumult, of shared direction and navigation. It’s everyone’s job to recognise the role of such constructs as a means to ease, not add to the pain.
November 2018
11-02 – Bulletin 2 November. There is nothing to hype but hype itself
Bulletin 2 November. There is nothing to hype but hype itself
It’s a funny old world, in which the topics of discussion are set for us and we talk about them, whether we like it or not. Last week I pondered the role of the analyst as a creator of terms, and therefore bandwagons. In doing so, however, we can starve oxygen from areas of equal merit, simply because there isn’t enough to go around.
What defines whether an area gets onto the podium, and is therefore deserving of sponsorship? The analogy is entirely appropriate, in this race to get ideas past the post. Competition is stiff: I remember a discussion with my old colleague Dale Vile many years ago, where he pointed out how tens, if not hundreds, of potential game-changers are vying for position in the CIO’s budget.
I’m reminded of another conversation, this time with Dale Nix, who was at the time CIO for Forte Posthouse hotels. I was in his office, talking about the latest developments in IT but at the same time, hoping he would sign up to Bloor’s IT-Director.com subscription service. “Look at this,” he said, pushing a copy of Management Today across his desk. In it was an article, about CRM or similar. “When can we have one of these?” his boss had scrawled in the margin.
It’s one of the reasons I took some time out from this analyst lark for a while. It wasn’t wholly, as I have suggested, that I became fed up of having an opinion; rather, I became frustrated with only having opinions about things that seemed to matter to the technology vendors I spoke to, to the detriment of other areas that were clearly, blindingly, having a major impact.
Social media was one, as part of the ‘wave of consumerisation’ which has changed the way we think about technology today. Hype? Maybe, but then it’s difficult to talk about technology at all without adding fuel to its various fires.
Case in point: here’s an article for this week.
Is Travel the Blockchain Beachhead?
It is fascinating how the number of people saying Blockchain is pointless seems to exceed the number saying that it has a purpose. You’d think that would kill it stone dead, but no, organisations keep thinking they have worked out a good use for the massively distributed ledger concept. I can’t see the fuss, nor the anti-fuss: I wager it will find good use in precisely the areas where it is useful. I know, right? And one of those areas may well be travel.
In other news: Travel Forward is go!
After several months of preparation, the Travel Forward conference hits Excel on Monday and Tuesday. In the build-up, I’ve loved speaking to senior execs at some of our best-known travel companies: speaking to real organisations is an excellent way of putting the hype in context. If you’re going to be in the area, do come say hi to your friendly programme director!
11-09 – Bulletin 9 November 2018. Transform or die? is a red herring: the future will be physical
Bulletin 9 November 2018. Transform or die? is a red herring: the future will be physical
We already know the end game
Oh what a circus, oh what a show. My mind is cluttered by watching the travel and hospitality industry go to town, filling the whole of Excel as it did. This was my first time seeing such a big event come together from behind the scenes — I have never seen so many chop saws, as some of the biggest stands I have ever seen were built literally from scratch.
My role within World Travel Market, as it is known, was to run a technology-for-travel conference programme, which (like all the best shows) somehow, magically, all came together. It is equally fascinating to watch an industry in its element: there’s more progress than you might think, but less than the players might like.
Travel has its digital bogeymen, the AirBnBs and Booking.coms that everyone detests even as they try to emulate them. Detests, really? Fair enough, that’s probably too strong a word. Cloud-first companies are like the cool kids that join school in the third year, coming from nowhere and somehow capturing the adolescent flag. It’s not so much hate, as confused envy.
At the same time (and despite the best efforts of consulting firms to imply everyone is at the start of the journey unless they’re already at the end), most organisations are already taking a punt. Most have apps, most buy in to the notion of customer experience, most are dabbling in various forms of agile development or are looking to get on top of the analytics capabilities they invested in years ago.
The point is, it is plain silly to suggest that they need to ‘understand’, or ‘get it’, or have any other form of epiphany. I’ve read endless papers that suggest our organisations need top-down buy-in, or stakeholder engagement, or whatever. In the real world, the chances are this already exists, at some level; yet still, it is not enough to go up against those upstarts who are digital by default.
And it is precisely this which is being played: a screw to be turned, a feeling of inadequacy that remains a constant, however much effort is being put in. It appears to come from a hard reality — that digital change can’t take place while physical resources also need to be managed. And it’s a lie.
We’re on the cusp of something different. Amazon is opening physical, if highly automated, stores. Uber and others are investing in vehicles. Why? Two reasons. First, once the algorithms commoditise, software-first organisations need to look beyond them to differentiate. And second, because humans like working and being with humans.
And travel will be at the forefront of this. Is it always necessary to have a person working through the options for a particular trip? No, but many are prepared to pay a premium for it. And as for the feeling of welcome… to extrapolate, I’m reminded of a recent “Game of Thrones” tour in Dubrovnik. Of course it could have taken place with an iPad or an AR headset, but, yeah, a guy in a suit of armour was a lot more fun.
Traditional organisations already know they are on to a good thing, even as the rug is being pulled out by startups that don’t have to worry about the physical — yet. But they will. For many, it’s a case of hanging in there, building the future at the same time as not losing the assets created deep in the past (which is akin to not selling off school playing fields for housing).
The future will be physical, and it belongs to those with the staying power to keep hold of the past at the same time as delivering on the potential of what technology brings.
11-16 – Bulletin 16 November 2018. Marketing is the new porn
Bulletin 16 November 2018. Marketing is the new porn
You can’t beat an attention-grabbing headline, can you? But in this case I think it is appropriate. Back in the early days of the Web, so went the principle, most of the great innovations came from the pornography industry: first to e-commerce, first to video and first to fleecing punters of every penny they had whilst ensuring a steady stream of addicts.
I paraphrase of course, and I confess to not speaking from experience. Equally lacking is my inside knowledge of the previous holder of the technological crown, the military, from whence of course came the Internet Protocol itself. By the time I came onto the scene, personal computers had opened the door to a new breed of happy-go-lucky hacker whose only interest was typing in lines of code from magazines.
How naive and halcyon those days were, for any school kid, back in the day… but that’s a subject for another bulletin. Where was I? Ah yes, marketing. To introduce the topic I feel another digression coming on, around clichéd anecdotes. I reckon I have about ten that I keep trotting out, whether I’m writing for myself or others…
For example, if I am going to write about computer viruses, I will probably reference John Brunner’s Shockwave Rider. Here’s one from 2013 but they go back much, much further — probably to 1999 when I first started writing about such things. If security is the topic, I might also reference the quote “never put down to malice what can be ascribed to stupidity.”
And meanwhile, when talking about various aspects of commerce, I might quote the apocryphal question asked of a criminal: “Why do you rob banks?” “Because that’s where the money is,” he is reputed to reply.
And so, to marketing, and the current obsession with ensuring that we give customers what they want — this latter all too often, still, a euphemism for the aforementioned fleecing of punters of every penny. I’m not going to turn this into a rant: I’ve railed against the horrors of ‘monetisation of data’ before, and you’d have to be crossing the Sahara backwards on a camel with a bag on your head not to have had some personal experience.
Nonetheless, the fact that we all shoulder the burden of being algorithmically marketed to, even as we (and certainly many of the people reading this) are working for organisations doing the deed, bears some scrutiny. I’m reminded of speaking to a director of a charity about the practice of ‘chugging’ (charity mugging, another personal bugbear). “But it works,” he said, with a what-else-are-we-supposed-to-do expression.
Indeed. Not only is the use of technology to reach inside our wallets and purses the most obviously tappable opportunity; also, in a world where margins are tight and a race to the bottom exists across retail and other commerce, it offers a potential lifeline. Not to mention all those digital-first startups that seem to have whipped the lion’s share of custom from under the noses of traditional merchants, providers and everyone else.
So, all attention is on the use of technology for marketing. Digital transformation is all about, you guessed it, selling more things to more people. Or at least, orienting your organisation so that understanding buying behaviours becomes a major influence. Pretty much every new technology is first seen in terms of what it can do to enhance customer relationships, i.e. do better targeting. Case in point: the low-hanging fruit for AI, for all its potential, is around recommendation engines.
Meanwhile, it’s not without irony that ‘customer experience’ is a big thing at the moment: CX is one of those strange topics which only exists due to its inherent absence. “You mean, we should try to understand the customer better and deliver better as a result?” said no company ever, apart from all those who had completely lost touch with their raison d’être. Which, wait, is an awful lot of them.
I’m not downhearted. Attention will move on, from direct targeting to differentiation; I have no proof but I like to believe that the hold of ‘brands’ over individuals, the product version of our celebrity culture, becomes more tenuous every day. We can wear Gucci spectacles, for example, but they will be made by one of two companies, to the same standards: only the logo will change. Of course I could be wrong, we are as sheep after all, but I hope not.
On that cheery note, here’s the latest section of Smart Shift.
A platform for the web of everything
Who remembers Prince Charles talking about “grey goo”? An interview with Graeme Hackland, top bloke and CIO at the Williams Formula 1 team, took place a few years ago now but remains illustration enough of the law of thresholds, which has taken sensors from chunky things on aeroplanes to tiny devices that litter our transportation and other systems like dust.
That’s all for this week. Still queuing those articles up and (extra curricular) have been progressing both novel and musical, watch this space!
Cheers, Jon
11-23 – Bulletin 23 November 2018: You can’t program serendipity
Bulletin 23 November 2018: You can’t program serendipity
I’ll be heading off to AWS Re:Invent on Sunday. As well as thinking once again about the paradox of having to go half way round the world to meet tech industry people face to face, I’m reminded of the serendipities of doing so, such as meeting multiple people called Dave.
The very nature, and indeed joy, of serendipity is that it follows no algorithm, yet we can still improve our odds. For example, if I stay at home, I will be far less likely to bump into someone on a station platform who might offer me an opportunity, or make an introduction, or, well, whatever!
Strangely, this goes a long way to explaining the AWS business model. I’m not referring to all those HQ-in-Luxembourg shenanigans but to the nature of creating an all-encompassing platform. I genuinely don’t think the AWS powers that be know what it is for, as that isn’t the job; rather, the goal is to ensure that whatever people are using right now is incorporated.
This goes some way to explaining the astonishing, and continued announcements of new features, functions and packages supported by the AWS platform. The company has done a great job of catching little waves, not differentiating or caring about whether they are important but allowing them to happen, making them available. Let serendipity decide could have been the underlying mantra…
…until more recently. I’ve noticed the rhetoric around cloud providers evolve, not only from its original “everything to the cloud” towards “everything to the cloud but the stuff you don’t want in the cloud”, but also towards “which cloud should I use for what?” As providers mature, they are offering more evaluate-able services, for example around artificial intelligence capabilities and IoT models.
The result is that genuine reasons are emerging to choose one over another, that is beyond “we have everything you might need” kinds of arguments. It has been difficult to keep up with it all, to be frank, but as the debate moves up a level or two, it will get both easier to understand and therefore harder for providers to differentiate.
To put it in concrete terms (and to go up even higher in the stack), “We have great virtual machines, fab storage and every open source package you might need,” is a very different proposition to, “We have the best solution for healthcare/retail/<insert industry here>.” Right now we are somewhere between the two.
Ultimately, cloud providers have been deriving value from that old chestnut of disintermediation, a.k.a. giving people what they want more easily than the traditional alternative. All well and good, one might say, as it has got them to where they are; now this source of differentiation is being used up and as a result of inevitable market evolution, providers are looking to differentiate in other ways.
We continue to live out what has been termed the platform economy, which has been based on operating build-it-and-they-will-come models at a most strategic level. The future will continue to lie in building on platforms, but as lower levels commoditise, so will battles be fought in terms of the solutions, not the platforms, on offer. Proactive serendipity may have had its day.
In other news, here’s an article for this week.
On Value Stream Management in DevOps, and Seeing Problems as Solutions
A frequent habit in tech is to describe something that isn’t working in a way that suggests an answer: “Data Leakage Protection” is one of the more obvious. In business circles meanwhile, the whole notion of efficiency relates to the fact that much of what we call work is unproductive and pointless. I know, I know. So, Value Stream Management is both an answer and a set of questions — what’s not working, and how can it be fixed? When applied to DevOps, it’s also an indicator that it doesn’t deliver pots of gold without a bit more forethought.
Extra-curricular: Super-Awesome — the Musical continues apace
Sometimes, things move from a nice idea to a “blimey, this might actually happen” state almost by stealth. I started writing a musical on a whim in 2010, and completed a first draft earlier this year: suddenly I have collaborators, songs and, well, if you’ve never heard something you’ve written sung by a professional singer, I highly recommend it. Now planning a reading early next year, followed by a workshop and then, perhaps a Kickstarter but a long way to go before that happens! Keep watching this space…
That’s all for this week, thanks for reading!
Cheers, Jon
11-27 – Bulletin: The arrogance of startups
Bulletin: The arrogance of startups
Let me start by saying that I admire anyone who wants to set up on their own.
Trouble is, some ideas just aren’t that good. Which begs the question: what does differentiate the good ones from the rest?
The trouble is, you’ve got to be visionary and all that, but there’s a fine line between that and being an idiot.
December 2018
12-01 – Bulletin 1 December 2018. We don’t see things as they are, we see them as we are
Bulletin 1 December 2018. We don’t see things as they are, we see them as we are
“Why do simple when you can do complicated” - A Breton proverb
It’s a quote from Anaïs Nin, I believe. As well as a lyric from that popular beat combo Marillion, in a song which deliberately pulled together a series of proverbs, platitudes and quotes, and set them to a country dancing back beat. But I digress.
A webcast I was recently involved in hosting incorporated one of my pet hates, namely a conversation around “the market for” a certain technology. Last week I talked about characterising problems as solutions: it’s also possible to characterise solutions (and therefore problems) as markets.
You know the thing, and if you don’t, it goes something like this: “The technology area is called NNT (New New Thing). Just to put it into context, Gartner estimates the size of the market for NNT to be $3.2 billion…”
I’m not calling out Gartner here, but the company (or IDC, Forrester or a number of others) is frequently quoted. You can pretty much guarantee that nobody would ask how the figure was derived, even if Gartner would tell them. Which they won’t.
Markets thrive on figures, so they are a necessary element of doing business, even if perceived by some to be flawed. I’m not sure they are, any more than the fact that we are all in the land of the blind, a wilderness of future-facing mirrors in which any estimate is better than none.
But that’s not my point, or at least, not the point I wanted to make. Which is: what is a technology vendor doing talking about market sizes, however calculated, when the audience is nothing to do with people who care about markets?
For the record, I’m not calling out this specific vendor either: it’s a bad habit of the industry. I suppose it could be justified for reasons of comparison: it kind of says, “this area matters.” But so does, “We spoke to 3,000 large companies and they all think NNT is awesome.”
So why does it wind me up, get my goat or otherwise cause a ripple in my otherwise flat calm? The simpler answer might be what I’ve already said, that it is in some way irrelevant or inappropriate… but no, I’m not very good at simple answers. Perhaps I’m part-Breton.
More to the point is how the framing might skew behaviours. By seeing a solution (or indeed a problem) as a market, the goal becomes less about resolution of a challenge, and more about achieving a sale. Of course, one necessitates the other, but the cart should not come before the horse.
I’m a great believer that technology provision, done right, should manifest as a win-win: nothing makes me happier (okay, a pint on a Friday night makes me happier, but bear with me) than hearing a customer say about a product, “This thing is brilliant, couldn’t do without it.”
In this scenario, everyone gains: the vendor makes money, the purchaser does whatever it makes possible, and all can smile. Make it about market sizes however, and the win-win is no longer part of the equation.
No articles this week as I have been run from pillar to post at AWS’ annual event in Las Vegas (yes, I should have posted this before I got on the plane). Definitely two articles next week, not least my thoughts on cloud in general, and the conference in particular. Watch this space!
All the best, Jon
12-07 – Bulletin 7 December. The enterprise as a platform
Bulletin 7 December. The enterprise as a platform
Why do complicated when you can do simple?
Since I quoted the Breton proverb ‘Why do simple when you can do complicated’ a few weeks ago, it’s been turning round and round in my head, as it seems to pretty much set the scene for all that prevents technology from working. I remember a conversation with an analyst a few years ago, where he likened trying to change the enterprise environment to mining. I paraphrase, but his analogy referenced the deepest mineshaft in the world as being roughly a kilometre deep, compared to the 12,700km-diameter planet we occupy. So, he said, while any effort to change might feel like it’s doing a great deal, it’s equally only scratching the surface.
This is one of the reasons, if I may, that big consulting firms are a bit of a con. Not that they mean to be, but, honestly, who can name a single large company that has re-invented itself as a slick, genuinely agile business? All these terms like digital transformation are very nice, and they certainly look good when wrapped in some futuristic stock image but, frankly, it’s never going to happen. Which is one of the reasons my money is on everything-as-a-platform.
Back in the day, when I was an OO consultant, I learned that the thing is less important than the interface: once you have posted an API call, you don’t care what happens behind the scenes as long as you get the response you need, in the time you need it. We’ve been through various waves of this, from right back at Yourdon and Constantine’s 1975 work on structured design (maximise cohesion/minimise coupling), through (indeed) OO, to component-based development, web services, RESTful interfaces and the (ahem) rest.
The point is, “the enterprise” as we know it can also be accessed as a notional platform of services. Back in the day, I had to deliver some UML training to the British Library but on the first day, I was told how the organisation had an embargo on any new IT spend: essentially, all activity had to be on maintaining existing systems. The five-day course became an exercise in working out what BL’s users wanted to do, then mapping that onto what ’services’ BL had available: it was all a facade, behind which was a set of monolithic systems. But, as I was told several years later when I bumped into the head of IT at a Microsoft event, the idea of seeing existing systems as a platform… had worked.
That’s not to say it’s a panacea, or magic bullet or whatever, we all know how there is no such thing (and indeed, file “Technology X is not a silver bullet” alongside “Tech Y is dead, long live tech Y” and other rinse-and-repeat articles that do the rounds about, well, every new trend). And indeed, whatever you do, don’t let any such notion fall into the hands of the enterprise architects, who will map the heck out of it and end up with service catalogues you can prop a car up with. But seeing existing, joyously legacy technologies through the lens of an interface means you can let them be for a little longer, putting the effort into what you can build on them rather than trying to change them wholesale.
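To make the point concrete, a facade of this sort can be surprisingly modest. Here’s a throwaway sketch — the class and system names are entirely invented, and nothing to do with the British Library’s actual systems — showing a thin ‘platform’ interface in front of two untouched monoliths:

```python
# A minimal facade sketch: expose the services people actually want,
# and hide the legacy plumbing behind them. All names are invented.
class LegacyCatalogueSystem:
    def raw_lookup(self, ref: str) -> dict:
        return {"ref": ref, "title": "An Example Title", "location": "Stack 42"}

class LegacyLoansSystem:
    def create_loan_record(self, ref: str, reader_id: str) -> str:
        return f"LOAN-{reader_id}-{ref}"

class LibraryPlatform:
    """The 'enterprise as a platform': a thin interface over untouched monoliths."""
    def __init__(self):
        self._catalogue = LegacyCatalogueSystem()
        self._loans = LegacyLoansSystem()

    def find_item(self, ref: str) -> dict:
        return self._catalogue.raw_lookup(ref)

    def request_item(self, ref: str, reader_id: str) -> str:
        item = self.find_item(ref)                    # callers never see the monoliths
        return self._loans.create_loan_record(item["ref"], reader_id)

platform = LibraryPlatform()
print(platform.request_item("BL-001", "READER-9"))    # -> LOAN-READER-9-BL-001
```

The callers see the two or three calls they care about; the monoliths carry on, unloved and unchanged, behind them.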
Just a thought. And speaking of simple, here’s an article for this week.
Five Questions for: Melissa Kramer of Live UTI Free
As I note in the pre-amble, not every innovation needs to be a smorgasbord of buzzy technological terms, nor should it be: we can all learn from this healthcare example in which access to good information can go a long way towards better diagnosis and treatment of this common condition.
That’s all for this week, until next time! Jon
12-14 – Bulletin 14 December 2018. Who keeps changing the terminology?
Bulletin 14 December 2018. Who keeps changing the terminology?
What’s in a name?
TLAs seem to appear out of nowhere, arriving fully formed into mainstream discussion: Value Stream Management is just the latest in a series. To understand the process behind the creation of a new term, here’s what happens, as a semblance of steps:
First, technology evolves, or lurches forward, depending on your point of view. Innovation tends to follow a path of least resistance, in which organisations without so much baggage (often startups) grab hold of a techno-combo to deliver something that either wasn’t feasible before, or would have been, if people had got their act together.
Second, everyone starts doing whatever the technology enables. So, for example, that cloud computing thing, or in layperson’s terms, “using a virtual machine running on someone else’s hardware.” In a matter of months, the new technology area goes from looking like an exception, to something with potential to be a norm.
Third, the analysts start to get a whiff of it… Industry analysts are before all else, trend spotters — they spend time talking to people about what businesses and consumers are doing with technology, and so have a good vantage point to see when a theme is becoming a trend.
…then fourth, they give it a name. What happens next is, in some meeting room in some office block, a bunch of analyst-y types express their fascination about what is going on, and agree to write a report about it. Which needs a snappy title. The name could come from ‘the thing’ itself, or if none exists, the analysts will make one up.
Fifth, the report will look to ‘define a space’. IT industry analysts don’t think in terms of solutions to problems: if the answer is “change the way you do things,” they’re not so interested. But if software or hardware is/can be involved, suddenly things are very interesting as this means you can create a ranking of whoever provides said software.
Sixth, technology providers orient themselves around the new name. Gandalf (the wise, then the white) knew the power behind a name, and so do IT companies. Vendors cited in the report feather their nests with it, while those missed out rail against pay-to-play industry corruption; and meanwhile, a bunch of others with similar products will claim to also do ‘the thing’.
Seventh, industry experts spend the next period helping companies with ‘the thing’. Once a bandwagon has been created, the cycle plays out with the large majority of organisations trying to work out if they need whatever it is being talked about. Consulting firms will tell them they can’t do without it; analysts will say, “well, if you’re going to do it, here’s what to think about” and so on.
And round we go. From this analyst’s perspective, I am still taken by surprise by the apparently sudden appearance of new terms and mechanisms, with most people I come across giving the impression that they knew about it all along (for fear of looking dumb). Generally, then, I go through a wave of panic and inadequacy before realising (a) I could understand what it was about after all, and (b) the implementation challenges were, and remain, the same as ever. Perhaps one day I will find I really have missed something enormous but thus far, thus good.
Here’s an article for this week.
Five Questions For… Seong Park at MongoDB
I’m currently spending my time thinking about processes and roles, hence my increasing focus on Value Stream Management and hence also, this conversation with VP of Product Marketing, Seong Park, about the relationship between (essentially) a database company and the developer community.
Cheers and all the best, Jon
12-21 – Bulletin 21 December 2018. The loneliness of the cybersecurity professional
Bulletin 21 December 2018. The loneliness of the cybersecurity professional
Where the bad things are
I do feel sorry for anyone that works in cybersecurity. They’re like the people who thought bicycle helmets were a good idea before anyone else thought they were cool, or like anyone in health and safety, or quality, or any other discipline that (from the outside) looks like it’s trying to get in the way.
Worst case, security professionals are lumped into that humourless group that wants to prevent people, businesses, whoever, from getting on with their lives. Like, you know, traffic wardens. Unlike traffic wardens, though, it’s the security guy who takes the hit when things go wrong. As I say, it’s a thankless task.
It’s the security professional’s job to predict what might go wrong, and therefore create an opportunity to do something about it in advance. What a task, when everything you say is a hair’s breadth from doom-mongering. It’s one of the reasons I advocate visibility as a precept: that is, tell the board what the risk is, and then let them decide whether to play safe, or fast and loose.
Here’s the idea: in business leadership, one of the main jobs is balancing risk — in general this is measured financially, although business leaders are no strangers to the concept of saying “what the heck” and just doing something anyway, based on gut feel (indeed, this increasingly feels like how the West is run, but I digress).
So, forewarned is forearmed. Trouble is, in many cases we just don’t know what the potential consequences are. When writing about data privacy in recent years, I have talked about aggregation and peripheral risk — that is, what if a data sample makes it look like you were in the vicinity of a crime?
Such risks still exist. However and unfortunately, I failed to consider the possibility of using harvested behavioural data to target incite-ful ads that have had an unexpectedly strong influence on our democratic processes; nor did I predict the use of such techniques by foreign powers. I wasn’t alone.
Our inability to spot such world-changing consequences of our technology use further undermines the credibility of the crystal-ball-gazing enforcers we call cybersecurity experts. This does lead to models such as cyber-recovery planning — that is, “plan for failure, as it’s all going to go wrong anyway” — but it can also feed a “why bother, as it’s all going to go wrong anyway” attitude.
Security pros are between the rock of an intransigent, uncaring business, and the hard place of failing to spot some future cataclysm. And yes, I do feel sorry for them.
On a more cheery note, here’s an article for this week.
AWS Re:Invent 2018 Reflects an Industry Coming of Age
Technology is at an interesting juncture, as we move from a wave of fragmentation and explosion to one of standardisation and simplification. The former was caused by the joint forces of cloud infrastructure and open source software, and the latter is happening because just using either is no longer a differentiator in itself. That’s my take anyway, with consequences for even the biggest cloud players such as AWS.
That’s all for this week. It just remains for me to wish all of my readers a very happy Christmas (don’t eat too much plum pudding), and see you next week. Thanks for reading!
Jon
12-28 – Bulletin 28 December 2018. What ho 2019: predictions of a bygone age
Bulletin 28 December 2018. What ho 2019: predictions of a bygone age
The more that things change…
Why are bricks the size they are? Frankly, I don’t know, but I bet it is down to a number of factors which had to be weighed up one against another. Not least ease of laying, production quality and, probably, aesthetics. The result is a reasonably standard set of dimensions which can be seen the world over.
In a similar way, many of the things we do with technology work best when they are at a certain scale. Code modularity, for example, balances internal and external complexity, probability of dependency, functional specialisation and generalisation. The result is similar, whether you are working with old-school programming functions or all-new microservices.
The same notions apply for business processes, user stories, data models and, probably, any type of abstraction. We could call it Goldilocks’ First Law in that it feels just right, balancing the needs of cohesion and coupling. It can be equally straightforward to get wrong, as I have found out across various software audits and consulting assignments.
What has this got to do with predictions, I hear you say? It’s a good question, not least in that I didn’t mean this bulletin to head in the direction it has. The plan was to make some general point about right-ness, then use that to set a few expectations for the coming year.
Having derailed myself (prediction zero: this won’t be the last time), I could nonetheless continue regardless. But heck, in for a penny. Instead, here are a few ancient truths dressed up as analytical wisdom:
1. Something that wasn’t quite possible before becomes possible. Possibly because it is cheaper to do, or processing/networking/storage passes a threshold. In this camp are various applications of AI, IoT and other data-intensive use cases.
2. Something gets standardised and everyone adopts it. Standards often happen because everything is using a certain thing anyway, or it passes an adoption threshold of some sort. Suddenly we go into this doublethink stage, where people act like they’d always been doing it. Cue: Kubernetes and microservices.
3. A consensus is reached that if something is going to happen, it needs to be managed. Generally this will be because that new new technology has proved itself in pilot studies but crashed and burned when adoption went enterprise-wide. From the failures of early adopters come frameworks and tools that were always necessary, in (ahem) hindsight. P.S. Hello, Value Stream Management.
4. Something that everyone said was pointless becomes both successful and mundane. You know, that new kind of tech it seems that everyone is banging on about, yet most seem to say it’s over-hyped nonsense? Well, whaddyaknow, there was a use for it after all, but it’s not actually that interesting. Those who said it was cool are now, frustratedly, trying to say, “I told you so,” but, frankly, nobody cares. And Blockchain.
5. Something which looked like it has done its time proves to still be useful. While many applications end up in the elephants’ graveyard, the capabilities that they are built of tend to last a bit, or a lot, longer. Cue yet another article (I should know, I’ve written plenty) titled, “Tech XYZ is dead: long live tech XYZ,” which points out the flaws in suggesting the demise of, well, anything. So, no, serverless models will not wash away everything that has come before them, sorry.
There we have it: now we can all write a few predictions. Or alternatively, we can finish the roast, have a final, wistful look back at 2018 and wish each other all the best for the new year, whatever may come to pass. Merry Christmas, and may health, happiness and perhaps a little value-based positivity come your way.
This has been my 52nd newsletter, which means I lasted the year without running out of things to talk about. Hurrah, huzzah and thank you all for reading this far, and for this long.
All the best, Jon
P.S. A few more thoughts from me about the nature of prediction happen to be the next section of Smart Shift, so here it is. Not sure what happened to the formatting, but scroll down and it’s there.
2019
Posts from 2019.
January 2019
01-04 – Bulletin 4 January 2019. A culture of change, or a change of culture?
Bulletin 4 January 2019. A culture of change, or a change of culture?
The more that things… oh never mind.
It’s happened again. There I am, on a call with a well-recognised technology company in its space, with a solution to a problem faced by numerous organisations. We’re talking through the challenges it solves, how simple it is to deploy, the benefits it brings… and then I ask, perhaps naively, a question about what else needs to be in place in order for it to, well, work.
“It does require a change of culture,” says the spokesperson, to my complete lack of surprise but immediate thought of, yes, what was I thinking? Of course it does — require a change of culture, that is. And of course, that is the hardest thing of all.
We shouldn’t blame ourselves. Humans are creatures of habit, and in these complex, ever-changing modern times, life can be more about coping strategies than any level of progression. We do the jobs we have always done, not because we want to but because there barely seems to be time to think about doing them any other way.
And, when the going gets tougher, we try to take (back?) more control. I remember how, as an IT manager in the early Nineties, I decided it was a good idea to log out anybody who had left their workstation switched on over lunch. That’ll show them, I thought, and never mind if they lost unsaved work, at least we were more secure.
In hindsight, I was both increasing business risk (which I think I’ve written about before), and reacting in a knee-jerk manner to the fact I was struggling to manage everything in front of me. Many of the things I did in that period were beneficial: setting standards, implementing processes and so on, but logging people out wasn’t one of them.
I’ve also written about just how complex businesses can be. You don’t have to work for a massive company for it to be (seemingly) overburdened with conflicting information, multiple ways of doing things, minor disagreements, politics and fire fighting. But any one of these things can slow any efforts to change to a standstill.
Which is where cognitive dissonance and bias come in. I have not experienced childbirth but I can only assume, given just how painful it is reputed to be, that our capacity for denial is second to none — otherwise the one-child policy would need no enforcement.
A similar mechanism may be at play for any individual who has worked through the trauma of organisational change. We can come out with platitudes and mantras about communication and collaboration, but any facilitator who has experienced getting people to actually talk to each other will know just how much of a wonder it can be.
And so, be it data-driven decision making, or agile development, or continuous learning, or digital-first, or any other term that consulting firms spout, each can be hamstrung from the outset by a lack of ability to actually change.
The strange thing is, that’s not how it’s presented. What we should be doing is leading with this fundamental cultural aspect. “Can your organisation change?” we should ask. “No? Well, don’t bother reading further, don’t install any new software or engage anyone until you feel you can. Then, once you’re confident, we shall talk.”
Makes perfect sense, yet again and again, we start from the point of view, nay forlorn hope that this time, things might be different. We read the white papers, we engage the consultants and we find, yet again, that we can’t be like Jimmy or Jill down the road, you know, that person who seems to be able to do everything.
And, meanwhile, organisations that are able to change keep on doing so, and wonder what all the fuss is about. At least until they reach a certain point: having believed for so long that they would not be susceptible, they take their eye off the ball. At which point, they become much like any other. And round we go again.
No articles this week, as normal service is still being resumed. A very happy new year to you, and thanks for reading.
Jon
01-12 – Bulletin 11 January 2019. The Nature of Incremental Legacy
Bulletin 11 January 2019. The Nature of Incremental Legacy
Of Mice and Trains
“Innovation,” I once said, “is like a mouse running on the roof of a moving train.” Suffice to say that (as often happens, but I have learned to deal with it) my remark was met with a blank look. “You see,” I started, and then realised that, like all good miners, I needed to know when to stop digging.
Until now. You see, there’s always more someone can do. If someone comes up with a great idea to, say, completely change how we approach booking overnight stays, then before long someone else will offer a service to help with the booking process, or offer a post-stay clean, or, well, you get the picture.
And, as the original service commoditises, the bolt-on services become the differentiator. Or, to go back to the scenario, as the train becomes the rails, the mouse becomes the train. The cycle continues, with new mice ready to sprint along the top of the carriage, at a relatively faster speed.
I really should stop digging before the analogy loses any semblance of credibility (How can the mouse eat? What if it is blown off by a sudden gust of wind? Where is the train going anyway? Etc). This incremental nature drives how one wave of innovation can seed another, and of course feeds the agile, “be prepared to change” thinking.
Meanwhile, another, related hypothesis that I have been bandying around over the decades is that nothing ever really goes away. We have this idea that technology acts as a tsunami, sweeping away everything that goes before it… but in truth, old tech tends to stick around. You know, that stuff we call ‘legacy’.
An interesting, yet ultimately pointless debate whimpers on, around how so much time is wasted managing existing technology estates, leaving insufficient time for innovation (essentially, how can you simultaneously be a mouse and a train driver?). Pointless because, sure, it would be good to have more time… but what we like to write off as ‘legacy’ is actually better termed ‘reality’.
Once we have created things, we have to look after them, live in them, keep them well-maintained and efficient to operate. For sure, we may be able to manage just as well, if not better, with less stuff: a good clear-out leaves room to think, even if it feels like a waste of time overall. We need the things we have…
… even if we get someone else to manage them. Two other debates spring from this: first, the nature of outsourcing in a business, in terms of how to give to others things that they are better at, so we can get on with what we are good at. Staff canteens, for example, or indeed, cloud-based infrastructure.
The second debate involves what we want to innovate on, and how to afford the cost of innovation. Existing infrastructure (whatever it is) will have a cost of maintenance and improvement: we can get this down, but we can’t do away with it. Meanwhile, innovation also has a cost, which is more around how much effort we want to allocate towards it.
The bottom line is that these two costs do not have to be linked: indeed, they should not, as doing so suggests that one can in some way impact the other. Rather than an IT budget which robs Peter the train driver to pay Paul the mouse, the two need separate philosophies. Breaking the link could be the single most important move any organisation makes on its path to innovation.
01-20 – Bulletin 18 January 2019. When is a skills gap not a skills gap?
Bulletin 18 January 2019. When is a skills gap not a skills gap?
The only time I have ever heard the term ‘lemma’ used in anger was in my second year at university, in my (I think) T22 lectures around the theories of computation. I can’t remember the name of the lecturer but I do remember admiring his large, bushy beard… sadly it was not enough to distract me from the fact that I rarely had a clue what he was talking about. Another word I remember him using was ‘intuitively’, generally placed between two completely incomprehensible statements. Perhaps I should have concentrated more: I do remember thinking that, if only I could understand what it was he was saying, I might be better off.
Nonetheless, the notion of lemmas stuck; as did the minor linguistic epiphany about a dilemma being the juxtaposition of two opposing ideas. Every time I hear the word dilemma, I proudly tell myself about this wonderful feat of etymological prowess. I did try to tell others, but it fell on strangely deaf ears. I know, right? To the point: this entire digression came to me when I was thinking about one example of a di-lemma (even if it is not necessarily a dilemma. I will quickly move on).
Technology, we are told, is going to put a bunch of people out of a job. I won’t google for a link right now, but articles on this are legion (to the extent that I felt pressed to write a counterpoint, also somewhere out there in the ether). The principle is that automation does away with both manual and intellectual labour: following the usual, attention-grabbing headline, most articles suggest that people will (have to) find new work to do. My many beefs with this perspective are irrelevant, but could be summarised by noting that the same writers were probably wrong about the death of books as well.
Meanwhile, however, there seems to be no halt to the need for skills. Alongside cultural issues preventing change, a lack of knowledge and expertise in <insert technology area here> is generally stated as a top-three reason slowing down progress. Now, I’m not being so trite as to suggest that all them poor blue- and white-collar workers can ‘simply’ become programmers. However it does strike me as interesting that the hive mind can hold both perspectives in its virtual head without noting any connection or conflict between them. And I do wonder whether this need is as wrong-headed as the previous one.
How so? An interesting thought experiment is to look back on how we might view the now, from the perspective of a few decades’ time. Some might suggest that we will have achieved the Singularity by then, which I doubt for a baker’s dozen of other reasons. But whether we have or not, the chances are that today’s world will be marked as being pretty primitive compared to what’s coming. Not only will the platforms we depend upon have become much smarter and self-managing, but also, the ways we use technology will be more mature and sophisticated.
Example: a Facebook friend pointed out how the “ten-year challenge” was interesting not only in how people were more savvy about the potential risks to privacy (“You’re just enabling facial recognition AI to get better trained, guv”), but also, how the topic could be written about and understood by a broad audience. We will look back on this period as akin to frontierlands, ruled by lawlessness and corruption as we discovered just how powerful technology could be. And then, we emerged from a state of naive acceptance of abuse, to a place where it was blindingly obvious what was right and wrong.
In other words, both technology and we will become more savvy. At the moment we’re still in a state with tech where we talk about ‘involving the business’ even though ‘the business’ is already more than involved. And there’s a larger point: many of the skills we think we need are to do with coping with the technological mess we currently live within. My hypothesis is that technology will become both more capable and less intrusive, doing away with the need for (say) social media “experts” and allowing us to get on with tasks that are a bit more rewarding. Putting alternative scenarios such as Skynet aside, that may mean we find the skills we require are the same ones we always needed.
01-25 – Bulletin 25 January 2019: On the insight, opinion and defensibility of industry analysis
Bulletin 25 January 2019: On the insight, opinion and defensibility of industry analysis
Two analysts walk into a bar…
It’s sometimes difficult to distinguish between an industry analyst and someone you might meet down the pub. Not just because analysts do tend to enjoy the conversations, surroundings and other accoutrements of that great, British watering hole (though always in moderation, naturally). But also, it might be said that some of the opinions, recommendations and other pronouncements could just as easily come from the old soak at the end of the bar as from someone who makes a living from so-called ‘insights’.
Before anyone takes affront at this clearly goading statement, I’ll make a comment of my own: that I am guilty as charged. I can claim that, while I have in-depth knowledge about some things, I have on more than one occasion been asked about something I know less about. What to do? Through experience I’ve learned to back up most views with a data point or two (“Make it defensible,” as my old colleague Dale might say), but I haven’t always adhered to this rule.
Despite this clear need to put fact before, during and after opinion, a second point is that we, or at least I, don’t always know why we/I know things. Analysts have the luxury of spending an awful lot of time looking into what’s going on and as a result, can start to get a smell, a frisson or an uneasy feeling about something that is going on. Add that to having seen a certain pattern taking place in the past and, well, before you know it you’re keynote speaking and putting ‘futurist’ on your twitter handle.
Okay, I haven’t done the futurist thing. But case in point is where we are with technology platforms right now. ‘Platform’ is a great term as it means everything and nothing, defined more by what people do with it than what it is. The past couple of decades have seen open source, cloud, mobile, agile processes and a whole bunch of other things driving ‘the platform’, enabling success just by using them — pretty much every one of them there startup companies has benefited in this way, even as older companies have been slow to follow.
That’s just a thing, a well-documented one at that (indeed, you can read several books about it. Chances are you won’t, nobody does, they just buy them and read the first chapter. But that’s another story). But back to my point: my tummy is telling me that the Pareto principle is done with, and the point on the law of diminishing returns curve is kicking in, where it is no longer enough to just do the thing. Symptoms abound, from Amazon opening (robotic) shops to Uber testing out (robotic) cars.
Can I prove it? No, of course I can’t, not with hard evidence — we’re talking about things that haven’t quite happened. But, like the bloke in the pub, I have been talking about it with peers, testing it, finding out whether it holds, ahem, water. Maybe with a splash of lime. Evidence will emerge, one way or another, at which point your normal analytical service will resume.
Extra-curricular: Super-Awesome - The Musical
Nothing to see here other than the fact we are moving to a ‘reading’ stage. Watch this space and let me know if you want more information.
Thanks for reading and all the best, Jon
February 2019
02-01 – Bulletin 1 February 2019. On Terminology Refresh from OO to Microservices
Bulletin 1 February 2019. On Terminology Refresh from OO to Microservices
Old Wine In New Skins?
And then, I read a recent interview which talked about Rapid Application Development, Object Orientation, 3-Tier Architecture and Test-Driven Development. Of these only the latter appeared as a term this side of Y2K, and no doubt its origins date from before then. And, as I told the author and interviewee, “having been steeped in the DevOps and platform-based world for the last couple of years, it’s quite refreshing to read something which uses (let’s say) more ‘mature’ terminology.”
There’s a strange phenomenon going on in everything around DevOps, CI/CD, microservices, Kubernetes etc, which is that it is framed as something new at the same time as re-inventing the wheel — the challenge is to work out what really is new; in this case it’s probably mostly about dynamic orchestration into scalable cloud-based architectures using a modular approach.
Earlier this week I gave a talk on all things digital, not least the notion of Fail Fast which, done right, can trace its origins to Barry Boehm’s spirals of the Eighties. Meanwhile, architectural practices around Kubernetes/containers/microservices etc hark back to Yourdon and Constantine’s principles from 1975, as well as prior work around Modula etc.
Meanwhile the current “Shift Left” and “By Design” rhetoric (two routes up the same, smaller peak) aligns with Test-Driven Development, which fits with best practices around writing unit and integration tests I was taught in the Nineties (not that many were that disciplined, in my experience).
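For anyone who never lived through it, here is a minimal sketch of that test-first discipline. It is a toy of my own invention (the helper and its behaviour are hypothetical, nothing from the interview): the tests are written first, fail, and then drive the code that satisfies them.

    import unittest

    def normalise_postcode(raw: str) -> str:
        """Hypothetical helper: tidy up a UK postcode string."""
        return " ".join(raw.strip().upper().split())

    class TestNormalisePostcode(unittest.TestCase):
        # In TDD these tests exist before the function above, and fail
        # until it is written to satisfy them.
        def test_strips_and_uppercases(self):
            self.assertEqual(normalise_postcode("  gl7 1aa "), "GL7 1AA")

        def test_collapses_internal_whitespace(self):
            self.assertEqual(normalise_postcode("gl7   1aa"), "GL7 1AA")

    if __name__ == "__main__":
        unittest.main()

Swap the terminology and the tooling, and it is much the same exercise we were being taught back then.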
Two thoughts on this. First that it’s unnecessary to rub anyone’s nose in it, more that such principles clearly still apply. But second, this isn’t just about doing that “I’ve seen this all before” thing, because this isn’t a simple case of re-inventing the terminological wheel. Rather, it’s important to spend a bit of time to work out what has changed in the current context; then, them old principles can be applied back.
For example, when Moore’s law plus the “industry standard” x86 architecture made it viable to run multiple emulated machines on a single physical one, that changed some things (we call it virtualisation) but didn’t affect basic notions of modularity, nor how to keep on top of complexity.
Meanwhile however, this latter point is why I tend to give short shrift to people who talk about ‘serverless’, ‘no-ops’ or ‘functions as a service’ in terms of doing away with everything that went before: the economic arguments about “one day you’ll just write lines of code and the environment will take care of the rest” make sweeping assumptions about architectural complexity, modularity, testability and so on.
It’s not that we need hard boundaries around our code (which of course don’t exist anyway, as per Turing, everything is a Turing Machine, the rest is about choice of representation) but more that we need theoretical constructs that make it easier to control and validate the software-based complexity that we create. Objects, modules, microservices, virtual network functions, it doesn’t matter what we call them but they all fit into that goldilocksian category of not too big, not too small.
The bottom line is that the BS comes in two flavours, “new and improved” and “it’s all the same”: both are wrong. I could go on — I’ve been watching with interest activity around Kanban, that lean/agile change management approach, and thinking about how it overlaps with DevOps and other philosophies (the answer is, by quite a lot). Definition, it would appear, is largely in the eye of the beholder: perhaps that should be the subject for another newsletter.
Thanks for reading, Jon
02-10 – Bulletin 8 February 2019. On the Internet of Things, and when is a market not a market?
Bulletin 8 February 2019. On the Internet of Things, and when is a market not a market?
Don’t question the number
One thing I’ve noticed in this topsy-turvy world of estimates and opinions is just how diverse they can be. A cursory glance at Internet of Things market size predictions from 2015, 16, 17 and 18 (hat-tip to Louis Columbus for rounding everything up so neatly every year) shows two themes. First, how it is difficult to identify a baseline (are we talking consumer IoT or Industrial IoT, is the latter the same or different to IoT in manufacturing/utilities, etc etc).
But second, how wildly different the figures actually are. “Gartner predicts 6.4B connected things will be in use worldwide in 2016, up 30% from 2015, and will reach 20.8 billion by 2020,” said one 2016 article. But wait, a year earlier, we heard that “The number of connected devices is projected to grow from 22.9B in 2016 to 50.1B by 2020.”
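As a back-of-the-envelope check (my arithmetic, not the analysts’), the growth rates implied by those two quotes aren’t even close, never mind the baselines:

    def implied_cagr(start_billions: float, end_billions: float, years: int) -> float:
        # Compound annual growth rate implied by a start count and an end count.
        return (end_billions / start_billions) ** (1 / years) - 1

    # The two forecasts quoted above, both spanning 2016 to 2020.
    print(f"6.4B to 20.8B: {implied_cagr(6.4, 20.8, 4):.0%} a year")
    print(f"22.9B to 50.1B: {implied_cagr(22.9, 50.1, 4):.0%} a year")

Roughly 34% a year versus roughly 22% a year, from starting points that differ by a factor of more than three: whatever these two forecasts are measuring, it isn’t the same thing.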
Now, this is not the moment to pitch Gartner against the World Economic Forum: no doubt both would say, “read the small print”, in which case we could go into some kind of “my definition is better than your definition” posturing. I’m not sure why such comparisons are rarely, if ever, made; perhaps we fear some kind of response, or (worse) perhaps nobody really cares anyway.
What? Am I really saying that such huge efforts of market sizing fall upon deaf ears? Not precisely; equally I know just how many times I’ve sat through a presentation with some huge figure attached to a market area, only to think to myself, “I guess that’s quite big, then.” Sizing a market defines a market; once the exercise has taken place (and the market is defined), the size becomes less relevant.
Which is why I don’t think anyone ever really questions the figures. Once we have a market, or an industry, or a solution space or whatever you want to call it, once it has a life of its own, in some ways it defines itself as a self-fulfilling prophecy, or simply as something nobody ever got sacked for talking about. In this case, probably the most important consequence is for chip and networking manufacturers, as they justify their strategies to investors and indeed, analysts — which makes for a strange feedback loop.
Which is interesting in this case, because it may not actually exist. Eh? Aren’t I shooting myself in the foot, given that I’ve written reports on the topic? Let’s think about this. First, computers have two choices, either to sit outside of the rest of the world, or to be in some way embedded: that isn’t new. Second, sensor-based monitoring and remote control have existed since the invention of the thermostat, or RADAR, or (insert your preference here): not new again.
So can we really talk about a ‘market’ based on the continuation of either of these themes? Back in the day, when I was spending more time on this stuff, I argued for a closed-loop model, i.e. if you can’t sense, analyse, decide and influence a connected device, you probably haven’t got IoT. The rest is just more sensors, which might be worth measuring in itself but it possibly wouldn’t command the kinds of figures ($520B, according to Bain last year) to make for a ‘market’.
Like it or not, the industry needs ‘markets’. We may not agree with them, in which case we can write things that disagree (and in a strange way, endorse the conversation). They may outlive their usefulness, if they ever really existed at all (cf current rhetoric around cloud/hybrid/multi-cloud; I’m also reminded of predictions around ‘smart phones’). At least we can all take solace in the fact that the market for markets is growing at a CAGR of 30% Y/Y, and looks like it will continue at least unto 2030. Whatever that means.
More Smart Shift: The information fire hydrant
The next big section in Smart Shift covers how software builds upon the massively powerful computer architectures we see today. First off (following a digression into the golden age of steam, with a mention of my old colleague and ferry master Rob Bamforth), I look at the vast volumes of information we have been generating.
That’s all for this week, thanks for reading!
Cheers, Jon
02-15 – Bulletin Friday 15 February 2019. Looking beyond the bill of virtual rights
Bulletin Friday 15 February 2019. Looking beyond the bill of virtual rights
Complicity in the Information Age?
Life is full of contradictions, and we are all big bags of wind and cognitive dissonance. Okay, that’s not fair, I can’t speak for everyone: I am a big bag of wind and cognitive dissonance, and the rest of you can make your own minds up.
Sometimes our^H^H^Hmy contradictory behaviours are very clear, to the point of hypocrisy: coffee shops cover several favourites of mine, such as the way that we carefully get our cards stamped even having just paid what must be marked up at ten times its material cost; we stand in line for an astonishing amount of time to watch someone froth some milk; and then there’s the callously hipster-ised decor against the environmental impact of coffee cups. But use coffee shops we do, or I know I do.
And don’t start me off about flexitarianism. But yes, behavioural economics — a.k.a. “working out what we are actually going to do, as opposed to what we think we should or would do” — is a fascinating area.
Looking back, the bulk of this newsletter over the past year or so has been focused on the contradictions, imbalances and accepted behaviours of how we use technology in the corporate world. It’s a two-edged sword, it’s influenced by things we all understand but only discuss down the pub, and so on. That’s fine, as far as it goes, and I thank every one of you who reads it and offers the occasional piece of feedback.
Meanwhile, however, technology is having a pretty big impact on society as a whole. It’s not hard to argue that social media has changed the nature of democracy; the algorithmic nature of marketing sails close to the wind; whole new forms of addiction, fraud and other ills have emerged which could not exist without technology to support them.
The overall position is that technology is ‘just’ a tool, which delivers more benefit than it costs. I often use the example of the iron age, which enabled the creation of both ploughshares and swords: would we rather not have either, even if we could have avoided our own ability to discover the smelting process? Thinking even more deeply, would we rather not be innovative at all?
These latter questions are perhaps the most fundamental. We can’t stop progress, one might say; but more than that, we can’t stop the main cause of progress: human inventiveness. What we can do is recognise the ramifications of applying consequent innovations into a chaotic, complex and already-corrupt global culture. To do so thinking it’ll all be alright is more than naive, it is foolhardy. Yet foolhardy we are, living the digital dream even as we watch its worst consequences play out.
I have long said that GDPR is a law two decades behind its time, solving the problems of yesteryear. Don’t get me wrong, the fact we now have corporations being held to account for how they deal with our data is a good thing. But it is not the only area of technological impact. Lawmaking is incredibly slow, meaning that an obviously wrong act such as ‘upskirting’ requires a new piece of legislation, taking a legal cycle to implement, rather than being framed as a broader breach of privacy.
The result is an opportunity cost, as we focus on individual situations rather than looking at, and dealing with, technology’s broader cultural impact. This goes beyond governance by design (as advocated by Smart Shift): I believe we lack what might be perceived as a collective conscience, or ethical framework, to which our particular society can align, and from which governance frameworks can emerge. Contradictions and cognitive dissonances such as the illiberalism of trolling or public shaming, for which no legislation currently exists, should be more than the subjects of journalistic and pub-related debate.
And indeed, we shouldn’t let our natural tendencies towards cognitive dissonance make us shy away from what has already unfolded. Or at least, I shouldn’t — I can’t speak for anyone else. I’m telling you the plot but at the end of Smart Shift I advocated for a declaration of virtual rights, which set out what individuals can expect from the information age (I was not the first to do so, though I did get in before Tim Berners-Lee). If Western society is to continue to function, we can, and should, and need to go further than that, to understand our virtual responsibilities, what is acceptable and what is not, and then align to such principles in our lawmaking. I remain optimistic that we can do so, but to succeed, we first have to start. Not sure what comes next from me, other than a will and a focus.
Smart Shift: The information fire hydrant
Okay, I’ve already spoilt the plot but you’ll have to wait for the denouement. In this section I draw on conscientious objectors and the Tower of Babel to explore the volumes of data we have been creating, and indeed my favourite ever analyst faux pas — IDC’s notion that the world would run out of computer storage. Oh and ‘Hadooponomics’.
02-22 – Bulletin 22 February 2019. Fake letters from cyberspace. Oh and AI.
Bulletin 22 February 2019. Fake letters from cyberspace. Oh and AI.
Okay, I admit it, I have failed. In the first ever of these bulletins I let hubris get the better of me and suggested they might reflect Alistair Cooke’s Letters from America, which would report on the happenings of the day and add a little observation… whereas I have tended to lead with observation, and occasionally throw in a note of happening. It’s a fair cop.
As I flick through my weekly browsing history, I find a number of themes emerge. First, that people using computers still seem to be beset with the same old problems. One of my friends’ hard drives died without them taking a backup, while another was struggling to rescue data from a defunct Macbook. Which makes me wonder just how much we have advanced.
Speaking of advancing, we’re not that different to how we were according to a Register article entitled “Secret mic in Nest gear wasn’t supposed to be a secret, says Google, we just forgot to tell anyone.” So, yes, a Nest device incorporated a microphone. As the article notes, “A decade ago, Google “mistakenly” put Wi-Fi sniffing into its Google Street View cars, which slurped data from people’s home networks as the cars drove past. The “accident” had been patented earlier.”
I’ve also been reading a fair amount of world news. I confess to having become one of those people who opens his news app every morning, just in case something cataclysmic has happened… but usually finding that it’s pretty much the same minor (relative to full scale war), yet notable sequence of steps on the road to catastrophe we seem to be on. Not with a bang but a whimper, eh? I remain optimistic about the future, but pretty flummoxed about the present.
Flummoxed, indeed, by the nature of the discourse about how technology and society are interacting right now. We’ve somehow arrived at a point where a singular group of voices can have a massive impact; it’s happening on all sides of the political spectrum and beyond. Some see it as the arrival of online mob rule (and I learned a new word: ochlocracy) but that doesn’t seem to capture it somehow.
I’m not going to talk much about this stuff, but given how the nature of discourse has changed, it clearly has a bearing. To this outsider it appears that many online articles available today are written by people with an agenda who (when coupled with the kinds of echo chambers we have created) go through a period of self-reinforcement until what emerges is an evangelist. Personally, I feel very strongly about my uncertainty faced with the complexity of life, and I don’t want it undermined by having to take a position.
Funnily enough this reminds me of way, way back, when I was a programmer for Philips and was reading up about how much we were abusing our food chain — specifically factory farming and hormones applied to meat. I chose (and I still choose) to avoid such food products; back in the day, I was questioned about this to such an extent, “Why don’t you just become vegetarian then?” that in the end, and for over a decade, I did succumb to this strange kind of inverse peer pressure. It’s perhaps not that different to the take-a-position-if-we-want-to-or-not environment we find ourselves in today.
The result is that it is never enough to take anything that is written at face value. To whit, whilst googling about echo chambers, I found something called the Echo Chamber Club, which had the intention (as far as I can make out) of providing a forum for discussing controversial topics. The ECC wrapped this time last year, as its editor Alice Thwaite was doing her masters. She’s still writing — here’s a piece on mutual recognition which, as far as I can tell, is about building bridges and finding common ground.
Which seems like a good place to start. I also picked up on some threads by Jason Gots about the nature of radicalism, in this case on the left but I don’t believe the principle is party political. And meanwhile, I was pointed to Frank Furedi’s article about modern anti-Semitism. Furedi doesn’t mince words about his ‘radical democrat’ position, which if I’m not mistaken tends more to the right.
So, what have I learned? Somewhere in all of that is tech, in its role as amplifier and instigator uncontrolled and unharnessed. As a final read, I can recommend CB Insights’ article “How AI Will Go Out Of Control According To 52 Experts” — which does a good job of bringing together multiple positions so that the reader can boil it all down and make their own mind up. Or maybe just cherry-pick the views that reflect their own: it was ever thus.
Believe it or not, I am trying to make sense of it all, but the answers as yet elude me.
Smart Shift: An algorithm without a cause
Interviewing Augustin Huret was one of the most surprising experiences of my career, not least because the interview was punctuated by him taking me from La Defense to the Eurostar terminal at (what felt like) high speed. You know that scene in the Matrix, right? Anyway, Augustin’s claim to fame was to bring his father’s machine learning algorithms to market, through both enhancing them and simply being there when the cost of compute fell to a low enough point to make them viable. So, what if we could process any quantity of data in so-called ‘real time’? This section takes a look.
Thanks for reading as ever, Jon
March 2019
03-01 – Bulletin 1 March 2019. Amazon Dash and the Art of Self-fulfilling Prophecy
Bulletin 1 March 2019. Amazon Dash and the Art of Self-fulfilling Prophecy
It’s been four long years since Amazon first launched its Dash button… and yes, if you put a comma after ‘launched’, this does work to the tune of Prince’s Nothing Compares 2 U but that’s not important right now. More relevant is the fact that the button has ceased to be, has shuffled off this mortal coil. It is no more.
Of course some will say that it was never going to work, even as Amazon cites apparently plausible reasons for its demise. To be frank, neither position is on particularly solid ground. Probably the German ruling earlier this month tipped the balance against the plucky devices (bans are like that), but most likely is that the company took a punt on something, and then decided to focus on other things.
Like Alexa, for example. Or indeed, any of the thousands of initiatives the company has underway. Fail fast is all about deciding what to cull, so that (like any good gardening project) others might thrive. This “suck it and see” approach is pretty much what defines most of the innovation we see today.
Strangely however, we still operate using mental models that pretend otherwise. When Dash was launched, it was going to be a definite thing, no ifs or buts. And indeed, two years ago it was still being presented as a success story. This makes sense: nobody’s going to buy something that is just being put out there in case someone might like it.
No, sir. Any new product has to be presented as the best thing since sliced bread - or indeed, if Indiegogo has anything to do with it, “Sliced Bread — Reinvented”. A fait accompli, a game changer, a paradigm shift. Which is really, really interesting as it plays with our brains. We can either look into whether the new thing really can work, or trust our instincts, or indeed, just go along with it.
Few people have time to do the former. In Dash’s case, this would involve having a deep knowledge of the fast-moving consumer goods supply chain, right down to what products fitted the “aw, dang, I’ve just run out and I need more in 24 hours” model. My guess is, not many, but that’s me trusting my instincts so what would I know (as James Governor once remarked, “I don’t buy it, so nor should you”).
The third option is what any purveyor of a new product or service hopes: that we all say, yeah, looks legit, let’s have a go. While this may sound glib, our entire industry is constructed on the basis of enticing early adopters, creating a sufficient head of steam such that, when initial goodwill runs out, enough momentum exists to carry the new thing through the rough times before others pick it up.
The model, characterised by Geoffrey Moore in Crossing the Chasm, has been honed to the point of pseudo-science but interestingly, it relies on people believing it is true whilst ignoring the possibility that, just perhaps, people might not want the thing that is being presented to them. This matters less than whether a successful launch has been achieved.
I’m at a risk of losing myself on this thread, covering as it does the triumph of belief systems over reality, so I will try to round things off. We, as an industry, have created a dream factory: follow a certain set of steps if you want to change how people think, and therefore get them to adopt your latest innovation. But more than this, we have come to believe that the dream factory is something real.
We’ve created notions such as Early Adopters — ostensibly the more advanced thinkers, the more agile, people who get where the puck is going to be. At the same time, this group are the magpies, easily distracted lovers of shiny things, who will potentially adopt whether or not the something is actually useful. Particularly, if I may, if it looks good on a CV.
Back to Dash, I don’t believe it ever had to succeed, no more than any other initiative. There’s no climb-down, no heads will roll, that was never the point. That’s not how success happens and we all know it. Yet, the next time Amazon, or anyone else makes an announcement, it will be presented in such a way that ignores this fact to the point of denial, fooling nobody and everyone, just like last time.
Smart Shift: Opening the barn doors - open data, commodity code
This section opens with the 2010 earthquake in Haiti and looks at the nature of open data and source. “While neither software nor data asked to be open, they each have reasons to be so,” it says — how interesting it is, to revisit this in the light of more recent privacy abuses.
Thanks for reading! Jon
03-10 – Bulletin 8 March 2019. New, accidental cloud native empires are emerging
Bulletin 8 March 2019. New, accidental cloud native empires are emerging
I had an interesting call on Friday, a review which broached the concept of imposter syndrome. Which, given that I was speaking to some experts in this particular field, was quite something to behold. “We’re worried that we haven’t got anything new to say,” they said, to my astonishment. “That’s OK,” I told them: “You can just employ some of the standard tricks industry analysts use to look clever when they haven’t a sausage.”
I say this without embarrassment — who in a position of technology industry influence hasn’t been asked about a topic they only have a vague idea about, only to give (without a pause) some apparently sage response? Ah, just me then. Now I’m embarrassed.
To whit, that whole cloud native thing. I’ve struggled with this topic, not least because there appear to be two types of people in the world: those that get it absolutely, and those who don’t get it at all but would like to. If you’re a technology-based software startup, you will probably fall into the former camp, and if you are what we might term a ‘traditional enterprise’ (i.e. everyone else) you may well represent the latter.
Given just how much ink has been dispensed in the name of, “Hey, you, yeah, you old company! You need to transform! Be like Uber! Or you will fail!”, this communications barrier deserves a bit of scrutiny. As an industry we have our fair share of buzzwords, so one also needs to cut through that. And I am sure there’s plenty of imposter syndrome flying around, and indeed, people making stuff up.
So, what have I learned so far about this gap? My current thinking is that provenance is everything. Whereas traditional organisations may well have existed before IT was even a thing, a lot of the new breed coalesced around a (hopefully) good, technology-based idea.
If they are software or service vendors, many germinate in a petri dish such as GitHub, developing a freemium offering, then potentially a pro service on top. This explains why there are so many of the blighters: it’s a bit like a version of The Sorcerer’s Apprentice, where every notion can be turned into software. This has created a cloud-native ecosystem which has never done things any other way, and therefore cannot perceive doing so.
Such companies, and indeed people have been happily engaging with each other through good times and bad, trying to make their way. When someone more enterprise-y comes along, the latter are made very welcome even if they are a bit slow. And all this traditionalist group has to do is “be cloud-native” and it can reap all the same rewards. Come on in, the water’s warm!
Like everything in life, however, this is just a phase. The bright sparky young startups have come into existence because they could, at this moment in time, but the kinds of challenges they are having to deal with are less and less about the functionality, and more and more about the service. Equally, we are moving towards a place where standardisation and governance are going to become more of a thing.
The consequence is that startups are inevitably going to become more corporate, even as more corporate organisations start to pick them off. We’re moving out of the brainstorming, fan-out stage and into a fan-in. Already we’re seeing acquisitions and the uncertainty that comes from them, and we are on the brink of seeing leaders emerge.
Interesting times. It’s highly likely we shall see some of the current legions of software startups rise to the top — possibly no more than a handful. I’m reminded of the early noughties, when companies like CA, IBM and BMC were buying just about everybody… it took another couple of years before they started to retire all but a handful of the technologies they had in their portfolio.
So, what comes next? A small number of very rich people, and a bunch of others that will go play golf for a while before it all goes round again. Or maybe I’m completely wrong… but I don’t think the current situation is sustainable. In a few years’ time we’ll all look back at when cloud native companies were still independent: “Now look at them, they’ve gone all corporate,” we’ll say. Still, it was fun while it lasted.
03-17 – Bulletin 15 March 2019. On Rubik’s Cubes
Bulletin 15 March 2019. On Rubik’s Cubes
Innovation is spatial, and success abhors a vacuum. Sorry, I just had to write that down because, as I was thinking about what to write this week, it suddenly came to me and made perfect sense. Now all I have to work out is what the heck I was on about.
What an interesting industry to work in. Back in the day, I chose to do a computational science degree largely because I found it interesting, but with no real clue what it was all about. If someone had said to me that everything was going to depend on computers in a few decades, I might have paid more attention to my studies and less to the art of pool playing. Ah, hindsight. I might also have bought a few shares.
Back in the day, though, the future was by no means clear. A trend towards ‘open systems’ was happening; personal computers were finding their place; but the big old tech companies still held a great deal of sway. I could list them but most have gone. And meanwhile, every single possible new direction was presented as a game-changing paradigm shift.
It was, and continues to be, difficult to separate the signal from the noise, the correct predictions from the elephant’s graveyard of possibility. On the upside, I have come to realise that neither matters as much as what I shall characterise as the spatial nature of innovation. Aaaaand, here we go.
Why does a good idea happen? Sometimes it is because it is so brilliant in itself, that it gains instant recognition — like the Rubik’s cube, for example. In tech, such examples are the absolute exception; more likely are examples that grow with a kind of groundswell, as the solution responds to a previously unresolved need.
Most of the “game-changing paradigm shifts” that we see fall into this category, from RESTful interfaces to tablet computers, cloud computing to social media. The notion of a groundswell has been documented before. However I’ve not seen, heard or read anything about the nature of the new spaces that we create with innovation, and how what some see as success is actually more of a land-grab.
I think Jeff Bezos and Mark Zuckerberg get this, which is why they have looked to take as much as they could, at any cost, before anyone thinks to impose boundaries. For Bezos, it’s all business, driving his “Day 1” philosophy, a strategy which still looks like it has a way to run. For Zuckerberg meanwhile, it has been about playing fast and loose with privacy. He’s been getting it in the neck recently, but so what? His empire (and fortune) has already been built. He expanded to fill a space, and (in the short term at least) is unlikely to be pushed out.
I could have named other names, but pretty much every big cheese in the industry has achieved success by following this approach. Under Steve Jobs, Apple created an exclusivity-based space but did not succeed at its social networking efforts, which has meant it can now bang the privacy drum. Meanwhile, Cambridge Analytica’s rapid rise and almost immediate fall followed the same tactic.
I know, I’m in danger of stating some self-evident principles of marketing, but there’s something deeper going on than just defining a market segment, pushing it through ad campaigns and thought leadership, getting an analyst firm to name it and then running with it until the world gets bored. That’s not the same as a new innovation leading to the creation of a part-technological, part-cultural void which is hungry to be filled.
In terms of predictions, let’s take Artificial Intelligence, which is currently experiencing precisely the stuff listed in the previous paragraph. However, we are yet to see behavioural changes caused through some mainstream adoption of AI: these will create a new space, which some, currently unnamed organisation will fill. I’m guessing the same will happen with a still-to-be-developed use case of Blockchain.
This makes things less about being in the right place at the right time, and more about being able to recognise a vacuum and do something with it. Sure, it’s a good business tip — “Go where the dark matter’s going to be” or somesuch — but it’s also a fundamental missing piece from our lawmaking and governance. As long as we miss this, our national and international rules, codes of conduct and general conscience will be constantly behind the curve.
Smart Shift: Thanks for the (community) memory
This week’s excerpt is all about peace, love and the starting point for much of what we see as the ‘open’ aspects of technology. “While this debate will run and run, the dialogue between innovation and community looks like it will continue into the future.” So, sadly, will the darker uses of technology: while this bulletin does not directly cover events such as the devastating and evil shootings in New Zealand, I am racking my brain as to how it could be directed towards the light.
03-23 – Bulletin 23 March 2019. Cybersecurity vs complexity
Bulletin 23 March 2019. Cybersecurity vs complexity
You can’t manage the universe molecule by molecule
It’s an old adage, that it’s easier to break something than to build it. Or maybe I just made that up, but it rings true (and I have many years of experience with lego to back it up). Case in point: a cybersecurity breach.
I could cite Facebook’s recent admission (following a leak) about passwords being stored ‘in clear’, but that’s just one in a long series which serve to demonstrate that, however hard you try to get everything right, all it takes is to get one thing wrong and the house of cards comes tumbling down.
Cybersecurity has changed, over the years, but this fundamental principle has not. Once upon a time, we relied upon ‘fortress’-style approaches such as defence in depth — protect the perimeter and the insides will look after themselves — but these were plagued with insider threats and admin errors, challenges caused by configuration issues and flaky software. And, indeed, complexity: it was repeatedly proven impossible to close all the doors, lock all the windows and seal all the plumbing.
I know we relied upon such approaches, as I was repeatedly asked to give presentations about why the model no longer worked. Pictures of Jericho, of walls coming (a-)tumbling down were my go-to content asset, together with some boxes joined together with lines of course, and a curve which tails off toward the top. You had to have one of those.
Then, we arrived at the place where there are no walls, where you might as well be wandering naked in the desert for all the protection traditional system security might give you. It was about at this point that I was asked to write a book about security architecture, which somehow needed to straddle the perimeter-based old world, and the unbounded new. It kind of succeeded, though whether it struck the right balance or just sat on the fence, I am not sure.
In this wall-less world, complexity is king: it has moved from an “entropy of the universe will get you in the end” status to becoming the norm. Back in the day, a starting point for securing everything was to know everything you had (or at least be able to wrap something around it). This is now impossible, not only because you can’t manage everything, but because everything goes all the way down.
Case in point: I’m reminded of Intel’s Spectre and Meltdown issues, during which it was revealed that today’s processors have a processor within, running its own customised operating system (reportedly a MINIX derivative); get to that and you have the keys to the kingdom. Technology really has become like Stephen Hawking’s Turtles All The Way Down anecdote, with each level of turtles being subject to the same level of vulnerability.
Which leads to a bit of a challenge as, after all, you can’t just give up. Even if it only takes a small thing to mess up everything you have done, and it is impossible to keep on top of everything you have, and even if you could, you couldn’t hold all the detail on it anyway. Some are advocating for the cybersecurity equivalent of early warning systems, which may catch when something is about to happen; this goes hand in hand with recovery processes (i.e. the least you can do is have a plan in place for when all goes wrong).
Others talk about security by design, which is about putting security features into stuff from the outset, rather than trying to bolt them on later. This sounds great, but (and it’s a big but) doesn’t take into account the interactions between stuff. Back in my programming days, the hardest bugs to solve were the ones that fell between two stools — an uninitialised or mistyped variable over here would cause a problem when passed over there. In today’s world, where anything could talk to anything, the potential for such weaknesses is obviously (not wanting to overstate) “quite big”.
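To make the “between two stools” point concrete, here is a toy sketch of my own (invented names, no real system implied): each function looks reasonable in isolation, and the bug only exists where they meet.

    def read_timeout_setting(config: dict) -> str:
        # This side returns the raw configuration value, which happens to be a string.
        return config.get("timeout_seconds", "30")

    def wait_for_response(timeout_seconds: int) -> None:
        # That side assumes an int; given a string, "30" * 2 is "3030", not 60.
        effective = timeout_seconds * 2  # allow for one retry
        print(f"waiting up to {effective} seconds")

    if __name__ == "__main__":
        config = {"timeout_seconds": "30"}
        wait_for_response(read_timeout_setting(config))  # prints 3030, not 60

Neither piece is ‘wrong’ on its own, which is exactly why designing security (or correctness) into the pieces doesn’t automatically design it into the whole.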
Is there an answer? I think there is, in that one can look after one’s own sheep before worrying about the rest. I can (well, I can’t, but one can) create a loosely scoped, anything-goes, technology-based innovation, or I (bear with me) can create something similar, but within which I have been very careful about all the pieces that make it up. That’s a kind of architectural approach, at least in terms of level, though ultimately it doesn’t matter what the architecture is; what matters more is whether I (quiet at the back) am in control.
In other words, even in this astonishingly complex world, keeping it simple could be key to keeping it secure.
Thanks for reading, Jon
Smart Shift: The return of the platform
If you want a poorly thought through and trite literary reference, look no further than this chapter’s nod to (author) Thomas Hardy’s Return Of The Native. But I digress. The platform economy is as much a story of corporate power as any altruistic tale of innovation. It was ever thus.
03-29 – Bulletin: 4G vs fixed in the office
Bulletin: 4G vs fixed in the office
We are each the elephant in the room
I’m not going to lie, I’m a person of routine. Despite being able to live an essentially freelance lifestyle, I still get up on a weekday morning at a certain time, I go to an office in town (yes, I have an office in town) by 9.00am and I get on with my working day.
I would also, if I could, subject myself to the vagaries of being subservient to work — at the end of the day, I would struggle to leave on time — were it not for the fact that I share transport, and would therefore find myself short of a lift.
Based on three decades of working with others, I don’t think I’m unique on either count: what’s fascinating is how our need for consistency often plays against our ability to ignore when we are being taken for a ride (or indeed not, if I miss mine).
We are stuck in certain models. To whit: the office I work in, which belongs to a marketing agency, has a broadband line of, well, let’s call it sporadic quality. And this despite being in the middle of a town: while my home village now has two fibre options, Cirencester’s main street has none. I know, right?
More interesting (for the purposes of this bulletin) than this clear failure of service provision is the fact that we have been going along with it despite the availability of alternatives. For the record, the 4G signal in my second-floor office is of both consistent quality and high bandwidth.
I have never had a problem with the mobile data signal, indeed, I have used it when the broadband has been down. So, why have I never thought of, you know, ‘just’ switching over and going fully mobile? It’s a question I, and the agency owner, have been asking ourselves.
To his credit, Adrian (for that is his name) is currently testing a 4G hub from EE: the prices have dropped to tens of pounds per month, for hundreds of gigabytes, so it would be rude not to. At the same time, we have agreed between us, it makes the “but it’s not a fixed line!” part of us squirm.
Which is pretty crazy. Whole swathes of the world, particularly what us decadent westerners refer to as ‘developing’, rely entirely on mobile rather than fixed lines. Possibly one of the most disruptive innovations still to happen, that of 5G-powered IoT, is (well, obviously) based around mobile.
And yet, still, part of me feels ‘mobile’ is in some way less adequate than ‘fixed’, even though I am here, day after day dealing with the consequences of dodgy broadband. To repeat, I don’t think it’s just me: indeed, I think this psychological block is behind much technological inertia.
The point is that it’s real: one can (by all means) tell me that I’m dumb for being so stuck in my ways, but that doesn’t actually achieve much. There’s an enterprise link to this, in that so many of our corporate decisions, or indeed, lack of them, are going to be psychological.
What to do? I’m not absolutely sure, beyond recognising that people aren’t good at change. If companies accepted this from the outset, they could save a fortune in change consultants, a.k.a. people brought in at vast expense when things turn out to be harder than initially believed. Just a thought.
Smart Shift: Society is dead, right?
Enough about technology: Smart Shift moves its focus onto how tech has impacted what we do (spoiler alert: the following section will be about who we are). First, we cover the technological breakthrough that begat all others: the woodcut.
Thanks for reading! Jon
April 2019
04-07 – Bulletin 5 April 2019: On visibility vs observability, collaboration and tree hugging
Bulletin 5 April 2019: On visibility vs observability, collaboration and tree hugging
Natural Language Killers.
I’m just back from São Paulo, where I was hosting a travel technology conference. Don’t ask me how I get into these things: I sometimes feel like Mr Benn, the cheery chap from the children’s series who walks through a magic door and into his next adventure.
But I digress. Knowing the value of such things, I gave my introduction in Portuguese; I also had my presentation slides translated (thank you Vanessa, Rodriguo, Paula and team). Which led to a number of discussions about the meaning of words.
I try not to use jargon, I really do; when I find myself doing so, I endeavour to explain it. But sometimes, what I assume is a standard phrase, widely adopted, turns out not to be the case. Take, for example, ‘ecosystem’ — which in Brazil is as likely to have ecological, as business connotations.
The phrase concerned was: “Embrace the ecosystem.” Makes perfect sense, right, particularly in this networked, open, collaborative age? For sure, in principle, but it could just as easily read “Hug a tree,” which wasn’t quite what I was trying to get across.
That was only part of the issue, as the words weren’t directly replaceable. It wasn’t just the jargon but assumptions about context — if someone wasn’t aware that a collaborative network of suppliers existed, for example, should they really be advised to embrace it?
In the end, I went for “Actively engage in open partnerships” (or indeed, “Esteja aberto em suas parcerias”, roughly “be open in your partnerships”). I was aiming for a spirit of proactivity, an openness to new forms of collaboration. All good, I hope it came across. Even if one participant did say, “You could have just used ecosystem.”
All of which is a long-winded introduction to a word I have only recently come across: that of ‘observability’. As I have written before, the world of cloud-native startups has emerged separate from that of enterprise IT: it is natural, therefore, that it has its own terminology and indeed, jargon.
It is also natural that such turns of phrase should be used in the belief that they are in some way normal or well understood. I paraphrase, but “organisations need observability” was pretty much the gist of what was said. Sounded legit, I thought, hoping that I would understand more as the conversation went on.
And indeed I did. It is a thing, that in the world of cloud-based, distributed applications, it can be very hard to know which bit of which microservice is doing what. So, if the whole application is running slow, the challenge is knowing where the problem is.
Observability, then, is about knowing what is going on, where, in a cloud-native application. There’s a neat explanation here, which uses Twitter as an example. The article goes on to say how observability is more than just monitoring, which is all well and good. But.
And there is a but. There’s a hidden assumption in the middle, that the challenge is unique to these new-fangled, cloud-based applications. For sure, they are pretty amazing — the fact I can get a live feed from a globally accessible messaging platform is incredible.
Yet old-fangled applications can also be complex and highly distributed, requiring notions of, well, observability. Trouble is, they don’t use the term; they don’t even frame the problem in the same way.
In large-scale enterprise applications, the approach is more around “end-to-end service management.” That is, the pieces making up the delivery of a service should be traceable, from user interface right down to the server hardware, so that problems can be solved when things go wrong.
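To make the idea concrete, here is a minimal, hypothetical sketch in plain Python (no particular vendor’s tooling, and with invented service names) of the sort of instrumentation both camps are reaching for: each request carries a correlation ID, every piece of work emits a timed ‘span’, and the slow microservice shows up in the output.

```python
import json
import time
import uuid
from contextlib import contextmanager

@contextmanager
def span(name, trace_id):
    # Emit a structured record of which piece did what, and how long it took
    start = time.time()
    try:
        yield
    finally:
        print(json.dumps({
            "trace_id": trace_id,  # the same ID follows the request across services
            "span": name,
            "duration_ms": round((time.time() - start) * 1000, 1),
        }))

def handle_request():
    trace_id = str(uuid.uuid4())  # one ID per user request, end to end
    with span("api-gateway", trace_id):
        with span("basket-service", trace_id):
            time.sleep(0.05)  # stand-in for real work
        with span("pricing-service", trace_id):
            time.sleep(0.2)   # the slow bit shows up clearly in the output

handle_request()
```

Whether you call the result observability or end-to-end service management, the mechanics are much the same: correlate, time, and record each piece of the journey.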
Clearly, massive overlaps exist between the two notions. But they illustrate how difficult dialogues between cloud-native and traditional enterprise groups can be. Each might think (as I did, I confess, a little bit, with ‘ecosystem’), “Surely they’ll understand what I mean, it’s obvious, right?”
A long time ago, when I co-wrote ‘The Technology Garden’, we established a core tenet of IT delivery as “Establish a Common Language.” At the time we were talking about common terminology between the different parts of the business and IT; the same applies to our bifurcated technology industry.
Enough said, for fear I might introduce some jargon. Perish the thought.
Smart Shift: The empty headquarters
In this section, we look at how business is moving from the physical and tangible, to the virtual and invisible. Which sounds a bit like a song. Perhaps it is.
Until next week, Jon
04-12 – Bulletin 12 April 2019. Alexa, please explain “Explainability in AI”
Bulletin 12 April 2019. Alexa, please explain “Explainability in AI”
A funny thing happened to me as I hosted a webinar last week. Grosso modo (as they say in French), we were talking about the complexities of artificial intelligence when, jokingly, I suggested that we could ask a bot to explain it. “Alexa, explain AI,” said a panellist. We chuckled; moments later, a message popped up on my iPad, from a listener: “You just made my Alexa sit up!”
Of course, I was unable to resist this clear opportunity. “Alexa, play Breakfast in America by Supertramp,” I said. “Yep! Playing out now - good choice,” came the response. Achievement unlocked — controlling someone’s playlist via live internet TV. I know, right? Behind my typically calm, sober facade, I was giggling like a school kid.
I shall try to avoid the distraction of ‘going off on one’ about the potential security risks of such functionality (“Alexa, open all the doors!” shouted through the letterbox) — though it does remind me of the time when computer hardware used to be delivered with all the doors left open, and it was up to the administrator to close them — turn off rsh, anonymous FTP and so on. No doubt these pesky devices will need the same.
Distraction mostly avoided. Back on the AI track, and indeed explaining things, several recent discussions have turned to the need for AI to explain itself. Following some client work last year, I’ve been following the rise of ‘explainability’ in AI: witness the emergence of a new piece of jargon that feels just like a normal word, at least to people who have used it for a while (and cf ‘observability’ from last week’s bulletin).
Explainability in AI, or indeed XAI (You want more jargon? We got it!), has quickly become a mainstream topic due to repeated examples of potential bias in its results. I say ‘potential’ not through any unconscious bias of my own — though I have plenty — but because we lack the information we need to know which elements are actual bias, vs which are not.
Explainability goes much further than justifying why a particular conclusion was reached: it enables us to identify flaws in both inference and training. Inference is largely the domain of the algorithm (basically, we can use a sharp or a blunt instrument to cut the data we have), whilst training is about the data set.
So, yes, guess what? If you train up your image recognition on white faces, you will be less able to identify non-white features. But without explainability, you won’t know which features were indistinguishable due to algorithmic bias in inference; which were down to a lack of suitable training data; and which were ‘simply’ because the software was crap.
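To make that slightly less abstract, here is a deliberately crude, hypothetical sketch (plain Python, entirely made-up numbers) of one of the first questions explainability asks: do the errors concentrate in the groups the training data under-represents, or are they spread evenly, which would point at the algorithm, or indeed the software, instead?

```python
from collections import Counter, defaultdict

# Hypothetical evaluation records: (group, was the face correctly identified?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

# How many training examples each group had (made-up numbers)
training_counts = Counter({"group_a": 9000, "group_b": 400})

errors, totals = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}, training examples {training_counts[group]}")
    # High errors where training data is sparse point at the data set;
    # high errors despite plentiful data point at the algorithm (or the software).
```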
The debate in this area is complex: for example, do we need to understand how an algorithm reached its conclusions before we can trust it, as this article suggests? I’m not convinced, as I’m more concerned with why than how; I also think there’s a spectrum of usage models between complete denial of the outputs of any algorithm, and wholesale, automated adoption of what it says.
In addition, we need to be careful not to treat explainability with the same level of hype as we seem to apply to AI as a whole. If one starts from the point of view that we are approaching the singularity (in which AI is as smart as the human brain), then we have reason to panic… but we are decades away from that.
Don’t get me wrong: examples like the algorithm-driven targeting of political ads on Facebook are enough to give us cause for concern. Now I think of it, the Alexa point is linked — in that we allowed our first iterations of AI to just work, but as ever, the poor nephew of governance is having to catch up, running behind the cart of cabbages like Oliver Twist.
As I write this, I notice that explainability may have to be written into US law: critics are suggesting that such a move could “limit the benefits” of AI. I don’t believe this is the case: if we can work out ways to discover amazing things about the data that we hold, we should also be able to say how our algorithms reached the conclusions. Or indeed, offer a legal dispensation for an algorithm which demonstrates clear benefit without bias, for example in a healthcare scenario.
Such a position would leave humans in ultimate control. Whatever our foibles (and indeed, biases), that has to be a good place to start.
Smart Shift: Music, books and the Man
In this week’s excerpt from Smart Shift, we look at the nature of streaming services, and their impact on music, video and book publishing. The bottom line: artists are earning more than ever, but there are more of them than ever.
Feedback welcome, as always.
Thanks for reading! Jon
04-21 – Bulletin 19 April 2019. Let’s talk about IT security… oh, let’s not
Bulletin 19 April 2019. Let’s talk about IT security… oh, let’s not
The art of changing conversations
I’m not going to lie. It’s not the 19th of April, it is indeed the 21st and I’m sitting in the garden in the shade. Where better to ponder the world of tech: I am reminded of the time I took myself off from an event in Nice, sat on the beach for a while and finished a report. “Of all the things to do on a beach in Nice, you chose to write a report,” someone said to me. My reply: “Yes, but of all the places to write a report…”
To business, and so to conversations. Markets are conversations, that’s what the Cluetrain luminaries told us (oh, I miss those optimistic, even if abjectly naive times); or in the parlance of sales, people buy from people. Within dialogues, words have alchemy: what we call charisma is often an artfully turned phrase, delivered with confidence.
The sometimes fast-moving world of tech offers a fascinating petri dish for the power of conversation. Get it right and you can perhaps change the world, that’s the theory. In reality, our verbal interactions more often hinder than help, as we trip each other up, debate irrelevancies or indeed, stymie progress by not saying anything at all.
Nowhere is this more clearly demonstrated than in the IT security game. Nobody wants to pay for security, apart, that is, from a minority of paranoid types who see scary stuff everywhere (it doesn’t help that it really is there, if you look for it). No surprise that many of this group end up working as IT security professionals.
Meanwhile, the rest of us want to continue our naively optimistic journeys, on the basis that bad things only happen to other people. I’ve written before about how the best time to get money allocated to security is the day after a breach. We literally wait for the horse to bolt before we buy locks for the door.
All of which makes the job of security marketing just that little bit harder than other areas of tech. There’s no direct business benefit, however hard we bang the “you can take more calculated risks if you are better secured” drum; equally, it’s difficult to make security cool and aspirational, like (say) cycle helmet design; so security vendors are forced to resort to other conversation-changing tactics.
At one end of the process we have, “Let’s get the analysts and press to say how important it is.” I’m not saying that tech security firms are better at PR than other vendors, but in my experience they certainly have more appreciation of the need to be talked about in the right way.
Or indeed, the wrong way, as any way is better than no way. On the topic of breaches, one company’s disastrous security story offers cautionary gold dust, the foolishness of one (Talk Talk pops into my head) to be hollered from the rooftops in the hope that others might listen. In the absence of a real breach, why not create a near-miss scenario to generate coverage — such as handing out “Music CDs” outside London stations which, when played, deliver a “You’ve been pwned” message.
This need continues into the “internal sale,” that is, the part of the conversation where someone working for a company needs to convince his peers, or boss, or boss’s boss, to allocate the funds. I’m not a betting man but I’m guessing more use is made of security-related magic quadrants (that Gartner tool for saying, “Look, mate, you’d be stupid if you didn’t get some of this — and here’s a shortlist of companies that can do it”) than in other areas.
Each of the above requires constant rejuvenation. Right now we talk more about cybersecurity than IT security: I stuck with the latter because I’m a curmudgeon, who is also worried he might be misusing a term that he doesn’t understand as well as the old one. Nonetheless I understand the benefits of giving industry segments and terms a refresh, if it helps to keep them being talked about.
All of this to make a conversation happen when nobody wants to speak. Thinking back to my own experience of post-breach security budgeting, a common question from my Finance Director was, “Do we have to have it?”: if I couldn’t answer with absolute certainty or charismatic guile (I was never very good at either), then no money would come.
It was the case then and is still the case now, which is a significant factor in my view that we need to build security into products rather than add it on later: simply put, we won’t do the latter, which makes the former all the more important. In the meantime, our marketing representatives will continue to use every tactic in the book to make a dialogue happen, in the knowledge that otherwise, it simply wouldn’t. It was ever thus.
That’s all for this week. Happy Sunday, and thanks for reading.
All the best, Jon
04-28 – Bulletin 26 April 2019. On transformation, word processing and Jacuzzis
Bulletin 26 April 2019. On transformation, word processing and Jacuzzis
There’s a moment in the seminal, moving and poignant film Trading Places, which for some reason pops into my mind every time something emerges as new, improved, reinvented, reimagined or otherwise presented as untouched and serene from the tumultuous storms of innovation. The moment comes as Eddie Murphy’s character Billy Ray tries out a Jacuzzi bath for the first time: “When I was growing up, if we wanted a Jacuzzi, we had to fart in the tub,” he says, to the discomfort of Randolph and Mortimer Duke.
To whit, pair programming, which is not such a novel concept these days (Kent Beck’s Extreme Programming approach is 23 years old this year). When I started as a programmer a decade before, we simply did not have enough computers to go around, so Mike and I worked together on a shared workstation, debugging code and solving problems with a fair amount of banter and just a soupçon of astonishing competitiveness. The result: we got quite a lot done.
Fast forward to today and we see similar concepts being repackaged as discoveries: I don’t have a problem with that, as each generation needs its own epiphanies. More of an issue is how we seem intent on creating, in the first place, the very situations that need such moments of joyous, simplificatory release in response.
To whit: word processing and textual information exchange. This should be a problem space which, once solved, doesn’t need re-solving; but as such, it acts as a pH test for the ebb and flow of technology as a whole. Word processing was one of the first tasks to which computers were set: a logical merger between the typewriter and the ability to store and retrieve data. It was, and should still be, that simple. And yet.
And yet. When I was put in charge of the hardware and software being used for a complex software project, one of the first conundrums I hit was the licensing of the Interleaf desktop publishing package. Interleaf was comprehensive, powerful and de facto for even the simplest of memos; it was also expensive, with the result that people spent periods waiting for a license before they could get on with their working lives.
This was, of course, insane. I advocated (or at least tried to advocate) the use of text editors for memo creation; I even created templates with a textual representation of the company logo. Strangely, people preferred to wait, prioritising form over function. I should add that these were days before email: the notion of the ‘memo’ still had its place.
The point is, I’m not sure we have progressed that far. In today’s world, we face a plethora of choices as to platform, tool and feature; at the same time, each is looking to add a variety of bells and whistles to avoid being outflanked by the competition. Alongside Office 365 in all of its online/offline versions, we have Google’s toolset; elsewhere Apple and a variety of other providers bring their own, not to mention the staggering number of online platforms — Wordpress, Medium and so on.
From a writer’s perspective, each is like trying to listen to some quiet jazz in a shopping mall: any attempt to get words down on a page is bombarded, creative signals drowned in noise. Or the words themselves become locked into some provider’s private universe, with who knows what terms and conditions. Lost completely are the most obvious notions of “write something down, share it more broadly, be able to find it later.”
Without turning this into a rant, I think such realities characterise where we actually are: technology remains a complex, rapidly changing jungle of possibility, and we are a long way from arriving at a point where we can get on with the job. This is neither good nor bad; it is, however, worth putting it back to back with the fact companies, and people, are told they must change if they want to keep up. Perhaps it is not they, but technology, that actually needs to get its act together.
Smart Shift: (Over)sharing is caring
In not unrelated news and weaving in the Mormon Church and the Coastline Paradox, this week’s Smart Shift looks at how we have been creating large quantities of data about ourselves, and handing it over to online corporations. Go us.
May 2019
05-05 – Bulletin 3 May 2019. Security vs risk, and monkeys writing Shakespeare
Bulletin 3 May 2019. Security vs risk, and monkeys writing Shakespeare
Show me the money
Before I continue, I was saddened to learn this week of the death of Cyril Freedman in 2017. I doubt you will have heard of Cyril: I worked for him a few years back, as technical lead at a medical startup he co-founded. I remember him saying how the first company he started involved a table and chair, a telephone, a notebook and pencil. I remind myself of this frequently, particularly when technology seems only to be adding complexity to an otherwise simple situation.
I’d also like to say thank you to all those who comment on my bulletins. If you are interested, I send it out to over 300 people weekly, a third of whom read it. It’s a nice number of people I generally know and rate, and generally the feedback is “Keep ‘em coming,” which is pretty much the reason I do. I did receive a couple of specific responses to the 19 April newsletter, about IT security:
Ian: “Actually its a risk process. And it is a continual process in some organisations. And it’s tested, frequently. Because of the day zero issue nothing is foolproof, but a lot of people are not as complacent as they once were.”
Graeme: “Whilst technology built with security considered up front would be welcomed, I suspect it stifles innovation and so we’ll always rely on the patching and 3rd parties to provide security.”
On the first point, yes, thank you Ian, that’s reasonable: all sweeping statements are doomed, including this one. Having worked for both government and financial industry clients, it would be folly to suggest that organisations don’t ‘get’ risk. It’s a difficult thing to measure: if you ask someone whether they care about security, they would of course say yes but then we have a case of, “don’t watch the mouth, watch the feet.”
And meanwhile, the point illustrates why I’m a huge advocate of communicating security risks in business terms, to the board. If one says, for example, “We need mechanisms to prevent phishing attacks,” the obvious answer is “Why?”; however, if one says, “3 of our nearest competitors have suffered financial losses of <insert figure here> due to phishing attacks,” the decision then comes down to the board as to whether they see the risk as acceptable. All businesses take risks, so it becomes a question of adding IT security risks to the pot (a.k.a. risk register) in the clearest way possible.
Which brings me to the second point about security stifling innovation, which to be fair, is also about risk. Innovation cannot exist in a vacuum… well, it can, but taken to extremes we end up with analogies about monkeys typing Shakespeare. It makes a modicum of sense for our innovation efforts to be directed, focused and measured, delivering on both efficiency and effectiveness criteria; it also makes sense that we want to remove whatever might slow innovation, for example overbearing security tools and practices, at the same time as maximising the quality of the outputs such that we don’t just add costs down the line.
As my old friend, mentor and value management guru Roger once told me, ultimately it will all be down to money, or at least financial measures. Which makes sense: so the question becomes whether we try to do things right first time, versus accepting a longer term risk (and potential cost) as a way of making something happen at all. Sometimes (business) success is about audacity, but this needs to be balanced against failure caused by out-and-out recklessness.
All of that, and I never got round to the introduction I meant to use for this week so I am going to shoe-horn it in at the end. It is, at least, about money: perhaps one of the most fun things I have done in recent years was a presentation I gave for Hyland Software, in which I re-enacted the famous scene from Jerry Maguire, darting back and forth across the stage to replicate the conversation between the Tom Cruise and Cuba Gooding Jr. characters. Whether the audience was enthralled or bemused I have never found out, but I loved doing it… perhaps not everything can be measured in monetary terms.
Smart Shift: The Dark Arts
Confession: I must have quoted this paragraph from John Brunner a dozen times in white papers and reports, through the years. This week’s section is (funnily enough) all about security, from Stuxnet to Snowden. “The only war going on is one for the soul of the Internet.” But can it be won?
Thanks for reading, Jon
05-10 – Bulletin 10 May 2019. Forget Pareto: let’s talk about why digital transformation is a lie.
Bulletin 10 May 2019. Forget Pareto: let’s talk about why digital transformation is a lie.
Putting the horse before the cart
Epiphanies can come from the strangest of moments. There I was, half way through drafting a bulletin and I suddenly realised, no, it’s all wrong. Or it’s all right. Or whichever, but it’s going to be OK. Shame really, as I was quite enjoying writing about the Pareto principle and how, from a consulting perspective, it never needs to be proven — faced with backlogs of challenges and options, prioritisation will always be the right thing to do.
But then I recalled preparing a presentation for O’Reilly a couple of weeks ago, and remembered that I had planned to let my mind be blown. How so, I hear you say. Well, I say, I hope you are sitting comfortably. Then I will begin. Spoiler alert: it’s about the untruth that lies at the heart of digital transformation.
So, here we go. You know how people are always looking for cool analogies to explain why it is important to change? Often, they will use the example of the saddlewright, or indeed the carriagemaker, that went out of business because they failed to recognise the arrival of the motorcar. Ah, says the presenter, you don’t want to be like that saddlemaker, do you, not in these disruptive times? You want to move to where the puck is going to be, rather than carrying on as though change wasn’t happening, right?
Right. But wait. That was over a hundred years ago. So, might I ask, what is it with people suggesting that change, or dare I say transformation, was something new, and/or purely technology driven? Yes, I get that the combustion engine is technological, but so was the steam train. How far back do we need to go, before things generally stayed the same from one generation to the next?
Now, perhaps I am making too big a deal about this. I would be happy to take that, if it weren’t the case that the entire consulting (and to be fair, technology) industry is making a very big deal out of this. Transform or die, we are told. Digital Darwinism is the name of the game, and only those who evolve quickly will survive.
So, here’s the clever bit. All those who say change is important are correct; and yes, it is catalysed by technology right now. But the need and propensity to be agile, lean, dynamic is not a technology-centric idea; rather it comes from the business needing to be more competitive, to differentiate, to remove inefficiencies and waste, and to respond to the evolving needs of customers. Ah but, you say, these are all being influenced by technology! Ha, got you! Yes, indeed. But the need for the business to adapt exists in its own right, independently of the symbiosis between what we do and the tools we use.
Not convinced? As evidence for the defence, I present Tom Peters and Robert H. Waterman, Jr.’s 1982 book In Search of Excellence, which collates the principles used by successful organisations. First on the list is “a bias for action”, driving change into the organisation from the top. It doesn’t matter what the catalyst for change may be — regulation, geography, demographics: what matters more is the ability to react to it. And incidentally, the second principle is to be “close to the customer”. It’s like they invented the principles of startup success, who knew.
I’m not just trying to make some philosophical point here; rather, it offers a better starting point for setting business strategy. Digital transformation mantras start in the middle, suggesting that it’s worth looking at how technology can create new opportunities; at the same time (and as I’ve written before), it leads to discussions of how real progress requires both agility from the top and cultural change across the board. However, to accept the reality that it was always about being dynamic puts the (beautifully saddled) horse before the cart: dynamic organisations are better able to adopt, and adapt to, digital technologies, or anything else that might come their way.
The manifestations of this switch in thinking are legion. I’m hearing repeated examples of how even advanced technology adoption is resulting in the same old issues emerging: examples such as silo-ed data lakes, or the need to rationalise the multiplicity of DevOps processes, both come from a place where people are trying to do the same thing over and over again, rather than thinking adaptively at a senior level. Dynamic thinking organisations can not only react; they are in a constant state of reaction, able to pivot when need be even if it means making some pretty dramatic decisions. For some reason I’m reminded of Microsoft, first when it embraced the Web, and second when it embraced the cloud.
I think I’ll stop there, point made and all that. If I had one wish, it’s that we stopped talking about digital transformation, or any other form of sudden shift; and rather, help businesses recognise that the key to success remains the same as it always was, which is to make, and act on decisions. Not all organisations are good at this, for sure. But if I wanted to spend my consulting dollars, it would be on mentoring the organisation on how to become better able to change (whatever was the technological or other mantra of the day).
Becoming agile is so much more important than becoming digital, with a final caveat: as long as the wider world adopts these short-term transformational views, it is worth playing along with them. Sure, tell the markets you are transforming digitally; but if you want your business to be around in ten years, make sure agility is leading, not following.
Smart Shift: The Two-Edged Sword
Speaking of change being nothing new… “When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole.” Thus spake Nikola Tesla in 1926, closing off this chapter about what technology has brought us, for better or worse.
Thanks for reading!
All the best, Jon
P.S. Thank you to those lovely readers who told me that you are looking at this with images blocked, so you are not showing on my stats. And thank you to you for reading as well :)
05-19 – Bulletin 17 May 2019. On personal epiphanies and Schrödinger’s AI
Bulletin 17 May 2019. On personal epiphanies and Schrödinger’s AI
There must be a law somewhere which states that epiphanies have to be personal. That would explain the feeling that businesses never really learn the basics of, say, agility or how to build software right: each generation has to work it out for themselves, possibly through a-ha moments following periods of trauma.
What’s strange is how we don’t really acknowledge that to be true. We do talk in terms of organisational maturity (there’s a four-stage model for everything), but we don’t recognise that new people can take things back a step or two, as they haven’t learned what the more ‘mature’ might see as the basics.
As such, we normalise where we are, and take for granted that others might share our understanding. Case in point is artificial intelligence, which (as is very apparent) is currently seeing a surge of interest. Or is ‘it’, given how AI means different things to different people? I’ve been fortunate enough to talk to quite a few folks over recent months, who tend to fall into certain camps.
For the uninitiated, AI tends to refer to software algorithms that follow a two-step process: first is to ‘learn’ some rules from a large set of data (e.g. what a cat looks like) and the second is to ‘infer’ information from new data (e.g. “That’s highly likely to be a cat”). The way these algorithms work is, the more data they get, the better they can infer stuff.
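If it helps to see that loop written down, here is a minimal, hypothetical sketch, assuming scikit-learn is installed and using some entirely made-up cat data; it is an illustration of the learn-then-infer pattern, not anyone’s production system.

```python
# A minimal sketch of the learn-then-infer loop, assuming scikit-learn is available.
from sklearn.linear_model import LogisticRegression

# Step 1: 'learn' from labelled examples; each row is [has_whiskers, barks], 1 = cat, 0 = not cat
X_train = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
y_train = [1, 1, 0, 0, 0]
model = LogisticRegression().fit(X_train, y_train)

# Step 2: 'infer' from new data; the answer is a probability, not a certainty
new_animal = [[1, 0]]  # has whiskers, doesn't bark
print(model.predict_proba(new_animal))  # i.e. "that's highly likely to be a cat"
```

Feed it more (and better) examples and the probabilities improve; that is all the ‘learning’ amounts to.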
So, no, it’s not really intelligence per se, but it’s still pretty handy. Academics have been doing it for years, using various forms of algorithm (such as neural networks): in general the goal is to improve how such algorithms work, oh, and write papers about it all. This thinking exists both inside academia and in business, as that is where AI expertise is most likely to come from.
Outside of academic types, there’s a couple of epiphany-type situations that emerge. The first is how organisations are moving from discrete AI projects to a notion that AI operates like a service, to be applied as and when useful. I’ve seen this a-ha! shift in consulting firms, I’ve seen it manifest as the issue of ‘silos of AI’, and I’ve seen it discussed in terms of how DevOps principles can be applied to AI applications.
A second sudden realisation comes when one starts thinking about whether one can have confidence in the results of a given AI application. AI is both probabilistic (again, “That’s highly likely to be a cat”) and vague — algorithms don’t necessarily know why they have reached a certain conclusion. So-called ‘explainability’ (yes, there’s a term for it) comes in response to a very real issue: it’s one thing to mis-identify a cat; it’s another to, say, rule out women from job searches.
On the point of dodgy results, there’s various reasons why it might happen: learning data quality/quantity, algorithm effectiveness or other factors might all play a part. Without explainability, it’s difficult to know the cause. But anyway: the result is an ohmahgawd moment, when all that squeaky clean AI looks like any other bit of flawed software. It’s like that moment after a purchase when you realise, yeah, it’s just a car. Whilst epiphanising, one might even say, “it’s just garbage in, garbage out, innit?”
Clearly (from my anecdotal experience), to the initiated, AI’s potential bias is a very real and challenging issue, and explainability offers at least part of the response. To those not so far down the track it is a bit like car accidents or (cf last week) security breaches, in that they only happen to other people, are not as important as doing the clever stuff, etcetera.
Perhaps the bottom line is a choice: either we can impose a need to think about bias and explainability through (legal) governance, or we can adopt a wait-and-see approach in which people work it out, and have epiphanies, by themselves. Given the possible results of the latter approach (amounting to misdiagnosis for undefinable reasons), I personally would prefer the former.
Thanks for reading, Jon
P.S. More Smart Shift next week.
05-27 – Bulletin 24 May 2019. Technology platforms and all that
Bulletin 24 May 2019. Technology platforms and all that
Terminological topics
Is software development an engineering discipline? It’s certainly nice to think of it in that way. Indeed, it’s a topic I covered in the first article I ever wrote, back in 1994 — titled Craft or Science? Software Engineering Evolution. “This article presents how far software development has come, explaining how, although software engineering is not yet a truly “engineered” process, it is certainly en route to becoming so,” I said. And indeed, it remains en route.
As it exists in this nether land, at one end highly mathematical and at the other, decidedly abstract, the world of software lends itself to a variety of ill-defined, yet still valid concepts. For an extreme example of this I remember an old colleague, Martin, talking to a client about how to measure a successful user experience. “It needs to be, you know, hm,” he said, delivering a very positive “hm”. I know, right? Completely vague. Yet “hm” became a touchstone for the rest of the conversation.
And so we find ourselves bandying around words such as ‘experience’, or ‘architecture’ or whatever, none of which can ever have a concrete definition because they are not being built on a grammatically complete foundation. Speaking of which, we have ‘platform’, at once a word that makes perfect sense and one which is too generic to mean anything at all. It’s been around for a while: at the risk of digressing (heavens, no!) it was one of my first job titles.
As a pre-amble, something you may find if you live in a foreign country for any length of time is that you start losing grasp of your mother tongue. After we had spent a year or so in France, we would find ourselves speaking Frenglish, or worse, a mode of speech which would make no sense in either language. I’m racking my brain to think of an example but trust me, it caused hours of fun.
When I found myself responsible for the software development environment (hardware and software), I was given a title in French (Responsable, Gestion de Moyens Logiciels) and asked to come up with my own in English. Naturally, the title Platform Support Manager seemed to make perfect sense, even though it didn’t make any sense at all. Sure, I was supporting the platform… no, I wasn’t, that makes it sound like I was carrying some wooden structure on my back. But I was supporting the plate-forme of tools.
Platforms have become more popular: we are, today, in the ‘platform economy’ in which companies can rise from nowhere and scale rapidly based on cloud-based infrastructure and open source. Just a handful of years ago, the term was used slightly differently, referring to companies who provided all the software elements corporations might need. I remember, back then, having a ‘debate’ about whether Microsoft was a platform company; I was told it wasn’t. Microsoft was certainly trying not to be, at the time. But it just made sense, to me, to think about it in that way.
More recently I’ve been thinking about how to break the terminological deadlock around ‘digital transformation’ (and apologies for banging on about it as I get my thinking straight). I’m starting to believe the answer lies once again in the notion of platforms, but this time in terms of how enterprises think about their IT infrastructures. I could say more, but I appear to have run out of space and besides, it probably deserves a blog of its own, without the clutter of reminiscence.
So, for once, I will stop. Thanks for reading!
Jon
P.S. You know I said “More Smart Shift”? I lied, sorry! There’s always next week…
05-31 – Bulletin 31 May 2019. The importance of being dumb
Bulletin 31 May 2019. The importance of being dumb
Let me tell you a secret
I’ve been working for 32 years this year. It seems hardly possible: not only do I wonder where the time has gone, I also wonder if anything I have to say can still be relevant. But don’t worry, I have a cunning plan. Simple in execution, planning and indeed, thought.
Across the years and particularly in recent times, I’ve learned a technique that has served, and is still serving, me very well. In a nutshell: act dumb. I can’t even claim it was my idea. The original concept came from Denzel Washington’s lawyer character in Philadelphia, as he would ask, “Explain this to me like I was a three-year-old.”
Doing so has several advantages. The first is that it puts the onus onto the person explaining things, without being threatening. The second, without a doubt, is that it is a great way of covering up that I really don’t understand what the person is saying. I first learned this when I came back from my period of hiatus as an analyst, where for a good while it appeared that the world had indeed moved on. Infrastructure had become hypercomposable, software was epic-driven and microservices had replaced any notion of stacks.
Perhaps, I wondered, all the issues had been resolved, that IT and the business (and indeed, IT and IT) were having a party, oh, and the lion had lain down with the lamb. At the same time, I had a nagging feeling that no progress had been made at all.
The notion of saying “I don’t understand” came quite early in the process of re-acclimatising myself with technology. I tried it first as an admission of defeat, then (when the world didn’t collapse around me) I tried it again. It helped that I was largely ghost-writing, so I didn’t have to keep up any pretence of expertise.
After a while, it became something I did, and I still do, every time. I’ve found it is more than just a handy trick. In fact, it puts me into the mindset of someone who really doesn’t know what the great and the good are talking about. Which is more people than one might think.
There’s two things you can do wrong as an analyst (that is, on top of the astonishing arrogance, travel demands, high fees and all that). The first is to believe that because something is being talked about, everyone understands it already. In this industry, full of 1984-esque doubleplusgood newspeak, nothing could be further from the truth.
It’s not just the nouns but the philosophical stance. In my current research (watch this space), I have found that a number of vendors have dropped their own terminology in favour of what is being termed Value Stream Management. The trouble is, Google searches (and indeed, their own web sites) often still use the old ‘positioning’; and meanwhile, we’ll always have the “I’m VSM and so’s my wife” charlatans.
Industry analysts act as a proxy to decision makers who don’t have time to work out what the heck is going on. This means, if analysts can’t make sense of something, the people they represent don’t stand a cat’s chance. And let’s not beat about the bush: obfuscation is a useful marketing technique.
I did say two things, didn’t I? The second, then, is to make the assumption that because something has been talked about for a while, everyone probably already understands it, and indeed is already doing it. Cloud, for example, may be ten years old (at least in the shape of infrastructure as a service) but some organisations may still be dipping their toes in and are looking to understand the basics.
The need for clarity is a lesson that can’t be learned too often. Very recently, I was asked to explain DevOps by a non-technical colleague. As I did so I realised just how much jargon I needed to unwrap and dispense with. The whole notion of DevOps assumes you have already existed as a Dev and come up against the challenges of Ops, for a start; it also takes for granted concepts of agile software development, none of which are intuitive (if they were, everyone would do it by default).
So, yes, there is always room to act dumb. Or perhaps, not to feel dumb when asking the world of tech to explain itself. There’s enough complexity in the world already, without creating more.
Smart Shift: The Quantified Identity
Finally, the next chapter in the series, covering self-surveillance. Who knew big brother was each and every one of us? Oh and a little riff on beacons (the old sort) and Voyager.
Thanks for reading!
Jon
June 2019
06-08 – Bulletin 7 June 2019. The Art of Anecdotal Evidence
Bulletin 7 June 2019. The Art of Anecdotal Evidence
Pies, damn pies and charts
As just about everyone I know knows, I’m in a band. I never meant to be, it just kind of evolved, out of a ukulele-based collective that friend and fellow podcaster (see below) Simon co-founded. The “never meant to be” element is relevant, as I find I’m learning new things, not least how to sing in front of large audiences, but also the notion of an audience of one.
Performing is by its nature a little bit (okay, a lot) narcissistic: delivering artistic and creative things into a vacuum is no fun for anybody. At the same time, it is a complete lottery: sometimes you get the adrenaline rush of a huge, animated crowd, and sometimes you get what a friend refers to as a ‘paid rehearsal’ in which nobody is paying much attention.
At times like that, I have found, sometimes just one person will be actually enjoying what the band is doing… at which point, in my head at least, they become the audience. I can think of a gig we played in Swindon a couple of years ago, where one chap was nodding, singing and enjoying every moment. I have no idea who he was, never seen him since but for two hours, he was who I was singing and playing to.
And that was enough, no, more than that: it’s a real privilege to offer the gift of a tune or two, and to have an appreciative smile in return. And… and now I find myself wondering how to relate this point to technology. It all made so much sense half an hour ago, when I started writing this thing.
In my line of business, over the past decades I have spent quite a lot of my working life answering questions for people. I subscribe to the Einstein principle — I might not always know the answer, but I probably know someone who does so I can find out (and often, the person asking may just need a bit of help to answer their own question).
I have also spent my days surveying, researching and writing about it all. And making plenty of mistakes along the way, and hopefully learning from them. One mistake is, of course, to take anecdotal evidence as something quantitative: a problem exacerbated by the social echo chambers we inhabit.
I’ve been seeing this a lot recently, as I have consorted with the kids from the cooler end of the tech spectrum — those we might call ‘cloud native’. For this group, notions of collaboration, iteration, of trying something and getting it out there, are the norm, to the extent that they may well be surprised to find that others don’t follow that straightforward and productive path.
Meanwhile, I still liaise with many people who work in what might loosely be called ‘enterprise IT’. In this group, linear thinking is the most likely approach to succeed and it can be very hard to convince people otherwise. I remember a corridor conversation at a large government department, where I singularly failed to convince a colleague why waterfall approaches weren’t always going to be the best option.
Perhaps never the twain will meet, perhaps not; trouble is, if you look at IT trends as a whole, you will end up with an amalgam of both, or a view from one side or another. It is almost impossible to get a high-level picture of how, in many large organisations, the mind might be willing but the body is weak; or how individuals may have the right idea, even if their teams and groups are still lagging.
Which brings me to the audience of one. Again and again, I see research telling organisations what they should be like, rather than engaging with them individually and determining what might work for them. It’s the dark secret of consulting: the ‘thought leadership’ may tell decision makers to “disrupt or die” but the day to day activities of change are much more mundane.
What can we get from this? That quantitative research will always be wrong? Not so fast. Research offers a useful starting point to have a conversation, to engage on points of similarity, to help someone get a view on the broader context of their industry or domain. At the same time, it should almost always be rejected in favour of finding out what the specific organisation is actually like.
Smart Shift: Augmentation in progress
In this week’s section I bring in Wim Wenders’ film ‘Until the End of the World’ to introduce the topic of Augmented Reality. “What we learn from such films, in general, is that the fight remains between good and evil, between acts of immense inner strength to overcome situations of utter peril.” It was ever thus.
Extra-curricular: Getting Away With It Episode 7 - Post-Lechlade Festival Blues
Speaking of bands, podcasts and indeed, small audiences, Simon and I actually managed to bank another episode of our GAWI (as we have come to call it) podcast. It includes mention of headlining in a beer tent, barfing waltzers and strange phrases, Huawei and Google, scanning for skeletons in concrete, tech giants breaking industries… and a poem from Simon.
06-14 – Bulletin 14 June 2019. Why do we assume the Internet will ‘just work’?
Bulletin 14 June 2019. Why do we assume the Internet will ‘just work’?
The connectivity disconnect.
I was recording a webinar yesterday. For the uninitiated, this is the closest I will ever get to being on telly: an hour of sitting around a table chatting in front of cameras, to be broadcast to the world. Well, narrowcast: the audience tends to number hundreds rather than thousands, but heck, it feels like what telly might be like. They sometimes even have makeup.
It’s amazing what can be done these days, with a few HD cameras, skilled operators and a network connection. What’s that you say, a network connection? Yes, you know, that bit that absolutely has to be there, otherwise nothing can work. I know, obvious, right? Less obvious is why we tend to take the network so utterly for granted.
The Internet is a clever beast. It’s built on the back of protocols that assume the worst: data packets can be re-sent if not received, or routed in multiple directions in case of failure. Like Kevin Costner’s futuristic (yet ill-fated) postman, the message has to get through. Yet the way we have configured things, we generally connect across a single wire — making for a single point of failure.
To whit, the webinar had to be postponed (for only minutes, in the end) as a ‘network problem’ was resolved: no connection, no webinar. I’m not saying the world needs redundant (as an aside, which idiot decided that the same word would mean both extra and superfluous?) networks; rather, I can pretty much guarantee that nobody gave the network a moment’s thought up until that point.
After all, it should just work, shouldn’t it? That certainly seems to be the thinking of the many metro-Californian types I seem to speak to, who present tools such as Google Docs as a straightforward alternative to desktop office applications. I don’t hate Google Docs. However… (and yes, this sounds like a moment where I now reveal my prejudices)… however, I would like it a lot more, if it worked in environments where network connections were sporadic.
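By way of illustration, here is a tiny, hypothetical sketch (plain Python standard library only, invented URL) of the defensive plumbing such tools would need everywhere if they took sporadic connectivity seriously: assume nothing, retry with a backing-off delay, and have an answer ready for when the network simply isn’t there.

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=4, base_delay=1.0):
    """Try to fetch a URL, backing off between attempts; return None if the network is absent."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, 8s...
    return None  # the caller must cope: queue changes locally, warn the user, and so on

document = fetch_with_retries("https://example.com/shared-document")
if document is None:
    print("No connection: working from the local copy.")
```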
Such assumptions pervade the thinking of a large number of people for whom infrastructure is ‘just’ a platform (a.k.a. anyone who works in software, sorry guys). The network operates quite differently to other infrastructure elements: in that golden triangle of processing, storage and communications, the relationship between the first two and outcomes tends to be linear — the more you have, the better the result.
For the network however, the relationship is both linear and binary. Scale it up and down by all means, but also act in the knowledge that it can either be present, or not. In this way, it is more akin to power than other kinds of resource. (Cue philosophical diversion… I feel a picture coming on… I need a whiteboard… and, back in the room!)
It is also fantastically complex. Deep down, data networks do their very best to exploit the laws of physics: if you see the difference between a theoretical ‘digital’ broadband signal wave and what actually gets sent down the fifty-year-old piece of copper between your house and the box down the road, you start to appreciate the challenge. “How far can we push this thing” is a familiar concept for low-level network engineers.
At higher levels, protocols for data transfers, for streaming, for signalling and one-off events, vary according to both the type of information being transferred and the equipment being used. There’s some fascinating work taking place around ultra-low bandwidth networks, which can support minimal needs (“your cow has a temperature”) with equally low demands on resources.
In other words, networking constitutes a series of trade-offs, trying to get the maximum out whilst minimising risk of failure. We like to think of data networks as single streams of light, but in reality they are like million-lane highway interchanges, each of which has to manage tractors alongside juggernauts, motorbikes with skateboards.
Upon which sandy basis, we build our castle-in-the-cloud existences. As with the webinar yesterday, it was interesting to note the final advice: that any internet-centric approach needs to consider connectivity above anything else. Basically, if you can’t connect, you’re stuffed: this should be so obvious that it doesn’t need to be said, but in reality, precisely the opposite is true. We take our networks for granted, and this needs to be called out.
Smart Shift: We are all makers now
Speaking of resources we could not do without, where would we be without the lowly screw? In this chapter we think about manufacturing and the rise of Alibaba, the Internet of Things and 3D printing.
Thanks for reading! Jon
06-21 – Bulletin 21 June 2019. The dark side of progress and the case for industry-scale ethics
Bulletin 21 June 2019. The dark side of progress and the case for industry-scale ethics
Beyond the echo chamber
We are in the fourth industrial revolution, so we are told. Disrupt or be disrupted. Jobs will be lost. No they won’t, but they will change. Kids are smarter than their parents. Science is always right. Resistance is futile.
Some of this may be true — I don’t think anybody meant to unleash the transistor onto the world as a bad thing, any more than I believe that the designers of Facebook algorithms or Twitter streams planned to unlock such bias, or such anger. Nope, no siree, I don’t think any such group thought about such stuff at all. Which is the point.
I have a theory about this, by the way, one which I hope is objective… though objectivity seems in very short supply in the current narrative. We are all driven by agendas, complacencies, instincts and I spend a certain amount of time pondering what my own are, and how they will (or do already) manifest.
Back to the theory. The technological centre of gravity lay first with the governments and researchers, then with the corporations, then with the young and forward thinking… and then, i.e. now, with the generations and demographics that were never consulted about any of it.
And they are coming back in spades. There’s a local Facebook group I am party to, which (with no sense of irony) is called “Local Town for Local People.” It is renowned for people complaining about things, often the same things (potholes, bad driving, the lack of respect…); it also has some great threads about old photos.
In said group, voices that might be considered ‘progressive’ are in a minority. I don’t believe the thousands-strong group is that way because it overly respects a certain sub-group of the town; rather, it is representative of the actual conversations that, until recently, happened behind closed doors, in pubs and on street corners.
Welcome to the real world, progress. And welcome to the backlash against traditional, top-down behaviours from anyone in power: if everyone has a voice, then you’d better start listening to it. In the past, charismatic leaders could get lucky by aligning with the zeitgeist; today, anyone can do it if they are prepared to listen.
I’m not a super-great fan of jingoistic populism, but nor am I an advocate of complacency or assumptiveness. It probably is the case that the nature of democracy is changing, as opinions can be gauged (and people engaged) at a much finer level of granularity than before. And the debate, as nasty, bully-laced and mob-ruled as it can become, is taking place in public.
But still, the creators of technology continue as if all that is nothing to do with them, as if progress was unassailably a force for good, as if the next set of superpowers the industry was about to unleash were all about the positive consequences, and let’s not worry about the negatives, shall we?
I’m not sure I’m any better, for the record. I might bang on about governance and the like, as though I actually care, but do I really? Am I no better than someone in the alcohol industry or gambling industry saying, yes, we really think people need to be careful with this stuff, but you know, personal responsibility eh?
And this is one area where I need to recognise the inherent bias in the systems we create. In driving technology business through positive case studies, we create an echo chamber of our own, in which the positives fill out the narrative with only a little bit of space left for what might go wrong.
I’ve talked about governance by design before, but perhaps we need to go further than that, to a notion of industry-scale culpability. I’m not saying the tech industry is inherently bad; I am saying that this notion that it is inherently good is wearing thin.
I’m not sure what the answer is; but I do know that the notion of ethics is largely consigned to individual efforts, and not to mega-trends (the hammering that Facebook is currently getting is the exception, not the norm). As an industry we have great power in our hands and with great power comes great responsibility.
Smart Shift: Is that a drone in your pocket?
In this section I cover all things autonomous, from driverless trucks to drones. I’m looking forward to self-driving pizza: you heard it here first.
06-28 – Bulletin 28 June 2019. On the constancy of learning
Bulletin 28 June 2019. On the constancy of learning
Where did it all go wrong? A reader and old friend called me out last week: “You have officially become a grumpy old geezer,” he told me, and quite right too.
Aspiring analysts take note, you can get an awfully long way with cynicism, which is right and proper given how much of the industry seems to be driven by naive optimism and assumptive hype.
Nonetheless, it can become more than a little wearing to be cynical all the time, like some end-of-bar bore who has seen it all before. “It’s just management,” one might say, or “It’s never going to work.” Etcetera.
There’s a fine line between pointing out the weaknesses in a position, and just being a bore. And besides, sometimes things are not as simple as they appear: generally, there will be a reason why the same things seem to come round again.
Often it is because they aren’t the same things, not quite, so it becomes a case of working out what the differences are. Microsoft had the tablet computer form factor nailed long before Apple, for example… but it didn’t have Jony Ive (and a thousand others) to smooth the lines and bring in the requisite minimalism.
A second reason (which I have covered before) is that every generation needs to have its own epiphanies. And thirdly, sometimes, something happens that causes everything to have to be sorted out from scratch.
Example: management of software artefacts. I was at the DevOps Enterprise Summit this week, and was lucky to be party to a number of fascinating conversations… one of which revolved around this topic. I know, right?
In my first job, almost 32 years ago, I was a programmer, which meant writing lines of code. Rather than having everyone doing their own thing in a corner, all that code was managed using a tool called SCCS — source code control system, if I recall correctly.
Did I say it was 32 years ago yet? But it makes sense, right, to have a place to manage all that code-y stuff, otherwise the situation would be a bit chaotic, right? When I left my programming job, I became a software configuration management specialist, which was all about this stuff. Call me dull but I quite like it.
Quiet at the back. And fast forward to 2019, when a conversation with major financial institutions is about how important it is to manage source code… and how bad they are at it. They never meant to be, but the problem just, kind of, snuck up.
So, we have choices at that point. Call them all idiots, which is of no help whatsoever (and remove the log from thine own eye). Listen cynically to the epiphanies, in the most patronising way possible. Or indeed, think about the root causes of what made a bunch of smart people end up in a singularly un-smart situation… which might help do something about it.
I have been thinking about them, and my money is on… the Web. What started as just a bit of markup language (that’s the ML in HTML) has become the backbone of many software applications and services.
In turn, just as nobody bothered to manage web pages as though they were code, so we ended up with a massive set of programs, none of which anyone thought would need to be managed as software. And now, here we are, where that point is obvious but knowing it doesn’t solve the problem.
Sure, let’s be cynical, but (as with many things happening today) it was a lone, quiet voice that saw it coming in advance.
All the best, Jon
P.S. Apologies (even if none are needed) for the brevity of this edition but I am in the midst of moving house. Analogies abound, if only I had time to write them all down.
July 2019
07-06 – Bulletin 6 July 2019. On why technology is not a journey, and lasagne
Bulletin 6 July 2019. On why technology is not a journey, and lasagne
This gets a bit meta
There’s a game I play pretty much every time I see a technology-related announcement. Case in point, last week I saw something from a vendor which pretty much said, “Now with extra security!” (I am racking my brains who it came from, but no matter, it was reasonably standard). To the game: turn the sentence around and see what thought processes that triggers — in this case, “You mean, before, it had less security? What the heck?”
To be fair, it’s not just me. Many years ago I was particularly taken by a cartoon involving the lasagne-eating Garfield, watching the ads on television. “New and improved! New and improved!” the ads blared out. “To think that, up till now, it was all old and inferior…” the cat opined.
Three dimensions spring to mind. The first is that, yes, it’s just advertising: each release has to be presented as better than before, otherwise people won’t buy it. The second, that the tech industry is changing and progressing, making improvements inevitable and welcome. And the third is that, yes, the previous generation of the product or service really was a bit rubbish.
To shift it up (or down) a level, the whole thing reflects the weight we put on the current narrative. “It’ll be nice when it’s finished,” I sometimes say, without actually joking: behind the humorous facade is a genuine belief that whatever technology has planned for us, this is not ‘it’. We are not there by a long chalk, yet, and strangely, we try to act as though we are.
Nor do I particularly believe that we are on a journey of some form, though it clearly does feel that way. A journey implies some kind of general direction and route; whereas the progression of technology is more like participating in the multi-dimensional bastard offspring of Cards Against Humanity and the Mousetrap Game. For sure, we are moving, but with no clear grasp of the rules, or what is around the next corner.
In large part, we accept the change that is happening to us; or we ignore it; or we wring our hands about it: the one option not available to us is to stop using it, for fear of being considered a luddite or for the simple reason that we quite like it, really. And thus we rely on more superficial narratives that carry us forward, that keep the money flowing, that help others get their heads around it all.
Don’t get me wrong, I’m feeling neither negative nor cynical. I am, however, feeling that we are acting as the passengers of complexity theory, focusing on the knowns as the unknowns are too much of a challenge. Some pretty deep questions pervade, not only in terms of ethics or governance (which I bang on about often enough) but also, for example, on the nature of augmentation versus automation, the impact of insight on responsibility, the ability to game our natures, the alignment of interfaces and personality.
I could drill into each of these (indeed, ahem, watch this space) but more important is the fact that they should be part of the more general narrative, but they aren’t, not really.
<* I recognise several weaknesses in this argument, in that I am of course looking for some kind of philosophical perspective on technological progress. First, that of course people are discussing these things: I’m just not party to those conversations. And second, what we might call the framing effect: another issue I’ve seen frequently, where someone stumbles upon a way of looking at things and then, post-epiphany, can’t understand why the rest of the world just won’t do the same BECAUSE IT’S REALLY IMPORTANT. But anyway…*>
Simply put, we could be modelling the kind of outcomes we are trying to achieve from the use of tech. I’m reminded of the Security By Design lobby, which says grosso modo that we should be thinking about security needs at the very outset of creating something new. More broadly, I’m wondering whether, with a few flipcharts and post-its, we can get to a kind of “this-is-a-set-of-principles-we-can-all-adhere-to-by-design” model which goes beyond notions of security, trust, governance yadda yadda and gets closer to an alignment with who we are.
I don’t think the answers would be obvious or immutable, but at least we could have something to build towards, rather than the current acceptance that technology is something to be taken as delivered, whether we like it or not. Often (case in point: social media) the answer will be both, but then at least we can make more informed choices.
07-12 – Bulletin 12 July 2019. On framing, specialisation and provenance as code
Bulletin 12 July 2019. On framing, specialisation and provenance as code
Crunching in the shoes of others
I was starting to think I was becoming a bit of a one-trick pony with this bulletin. You know, a bit naive, over-optimistic, potentially ranty and vague (or in the words of the immortal Neil Ward-Dutton, not very crunchy). All of these things may be the case but I recorded a podcast this week that brought it all into focus.
In case I haven’t mentioned it, I’m the lucky host of a podcast for GigaOm called “Voices in DevOps”. Lucky not only because it is a topic, or field, close to my heart; but also because I learn so much from the guests, each of whom has the job of answering one simple question: what is going to make DevOps work in the enterprise?
Some guests take a view based on their own background: so, for example, I’ve discussed a product management mindset on a couple of occasions, and other times the conversation has turned to data and measurability, or collaborative best practice. Meanwhile I’ve been able to test out my own theories, for example around the Guru’s dilemma.
On a couple of occasions, such as this week, the debate has turned to how provenance influences how the situation is framed: essentially the meta-level of the above paragraph. So, for example, if you have a room full of people with a development background, they may take the view that everything can be programmed, and therefore should be.
On the first point, mathematically, they are not wrong. One of the late, great Alan Turing’s insights was that any sufficiently powerful (Turing-complete) language can simulate any other: in other words, you can express anything you like as a program. The second point leads to the enticing “as-code” notion, for example expressing infrastructure, security or test scripts as code.
Trouble is, just because something can be done in a certain way, that doesn’t mean it should. My guest this week (David Torgerson, of Lucid Software) warned against the danger of employing programming generalists, rather than specialists in other areas — such as, let’s say, database engineers. Yes, and indeed, you can express a database structure as code.
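To make the “as-code” point concrete, here is a minimal sketch of a database structure expressed as code, using Python and the SQLAlchemy library; the tables and columns are invented for illustration, not taken from anyone’s real system.

```python
# A minimal sketch of "database structure as code" using SQLAlchemy's
# declarative mapping. The schema lives in version-controlled source,
# and the physical tables are generated from it.
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    orders = relationship("Order", back_populates="customer")

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"), nullable=False)
    total_pence = Column(Integer, nullable=False)
    customer = relationship("Customer", back_populates="orders")

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")  # throwaway database, just for the sketch
    Base.metadata.create_all(engine)              # structure generated from the code above
    print(list(Base.metadata.tables))             # ['customers', 'orders']
```

Whether a generalist programmer should be the one designing that schema is, of course, David’s point entirely.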
However, I was reminded of another conversation I had with Neil W-D, who has a knack of making things very crunchy very quickly. We were talking about how programs or process steps affect data and state, and vice versa, to the extent that the two intertwine. “What, you mean like, ‘is it a wave or a particle?’” he asked. Yes, that’s precisely what I meant, if only I could have been crunchy enough.
In my experience, programmers do not always make brilliant database engineers and vice versa: actually, in my own personal experience, I’m happy to write programs but I find myself wanting to leave the room quickly if ever I am faced with a spreadsheet… which, yes, makes being a research analyst a challenge sometimes. And don’t start me off about expenses.
But, to David’s point, we need people of both types, and a lot of other types to boot. Horses for courses, different problems require different brains, teams should be cross-functional not only because we need a variety of skills but because we need a variety of minds (which is also the root of my thinking about needing to pay more than lip service to ideas around diversity).
Without talking too much about the topic at hand, DevOps’ blessing and curse is that it has been led by developers — it is DEVops and not devOPS at its origin. Which leads to another strange notion, that operations can be automated out of existence, which is clearly an idea from somebody who has never worked in a data centre (cf yet another of these newsletters).
That’s all, really, apart from recognising our own preferred framing, and ensuring that we don’t see it as the only view. We can all benefit from walking in the shoes of others from time to time, or risk believing that they don’t need to exist.
Smart Shift: The truth is out there
In this section we go from loyalty points to virtual cash, following the rise of Bitcoin and its supporting platform, Blockchain, and we look at its applicability to the music industry and other domains. “It’s not about the money,” says anyone who thinks it is about the money.
Kicking off with Superman and Kung Fu Panda (see what I did there), this section
07-22 – Bulletin 19 July 2019. AI vs Jobs, take two
Bulletin 19 July 2019. AI vs Jobs, take two
The pessimist’s guide to the future
I start this week with a couple of resolutions. Or rather, aspirations — they only become resolutions when I actually do them. So:
Resolution 1: to start collecting links about what I read, so I can actually connect to it and turn my idle opinions into commentary.
Resolution 2: turn my idle opinions into commentary.
Meanwhile, we are stuck with idle opinions, but with a bit of solidity behind them. This week, I noticed an article about the dangers of being optimistic about AI vs jobs — the optimist’s idea is that automation eats away the dull stuff first, leaving the more interesting stuff which has to be a good thing, right?
I’ve written from an optimist’s perspective myself, but I have taken a different line. Many jobs only exist because of our desire to feel useful, to get value from one another and to cope with complexity. On this latter note, to wit:
I was chatting with Adrian (from whom I sublet my office) this morning, about the amount of waste in corporations — or to put it another way, the pointlessness of meetings. People spend (perhaps) a third of their time sitting round tables or on conference calls, much of which gets in the way of getting things done.
So, yes, complexity. We spend time trying to understand things, trying to get other people on board, to agree a shared approach — all of which gets in the way of actually doing something. I read once that the most successful people are those who make decisions, even if the decisions are wrong, as indecision is a bigger killer of progress.
In other words, we already waste a great deal of time doing jobs, and somehow we get away with that. I have a funny feeling that even if we automated a raft of things we currently do, we’d just carry on working (potentially more) inefficiently to fill the time we had saved.
Actually, I’m not sure that’s particularly optimistic at all. Anyway, tune in next week for an altogether more considered and link-strewn bulletin.
Thanks for reading, Jon
07-28 – Bulletin 22 July 2019. On air gaps, cloud hopper hackers and breaking Google
Bulletin 22 July 2019. On air gaps, cloud hopper hackers and breaking Google
All Plaxo’s fault
Anyone remember “air gaps”? The notion was, and remains, simple — if you don’t want a computer or its data to be hacked, don’t connect it to a network; to transfer data, stick it on removable media (for example, an external hard drive) and export it from one computer, then import it to the air-gapped other.
Why wouldn’t you, as a way of protecting your more sensitive stuff? A thousand reasons, not least that the time taken to export/import is too great. Or is it? I have to say, I am wondering, given news such as the Reuters investigation into Chinese ‘cloud hopper’ hackers, who first compromised some of the bigger service providers (looking at you, IBM and HPE) and then went after their clients.
The hidden message at the heart of the article is, organisations feel they have no choice but to connect all of their systems to the global network we call the Internet. “Teams of hackers… penetrated HPE’s cloud computing service and used it as a launchpad to attack customers, plundering reams of corporate and government secrets for years,” say the authors.
But what if it didn’t have to be so? Are we really in a situation where every single thing we digitise has to be considered accessible to, well, everybody? I’m not so sure of the answer: of course if we make it inaccessible, we can’t get to it remotely either; of course data integration offers many advantages; of course we should be able to put security mechanisms in place that actually work.
At the same time, however, I wonder if we have managed to convince ourselves that everything needs to be connected, by default and without exception, and we should deal with the consequences even if we are no good at doing so (or indeed, we don’t get round to it); or indeed, expect third parties to do a better job than us, even if they show they cannot.
I picked up one article last week which suggests that they know they cannot, either: that Google pays large sums of money to people who can find security holes in their code. The term used is “reward” though I can’t help thinking that we should be saying “pay-off” — it’s almost a reverse-bribe, made official and announced in advance.
I spend an inordinate amount of time writing about how people should get better at cybersecurity, and rightly so because they, that is we, should. But while we are not so good, perhaps we should not be so blasé about what data we allow to swish around.
It’s almost as though we have given up. I was suddenly reminded of Plaxo, that address-book-sharing app that pre-dated Facebook (founded by Sean Parker, who was ousted from the board and who then convinced Mark Zuckerberg not to let himself get into the same position — we all know where that has ended up). Connect everything, and the world will be a better place! was the theory, anyway.
Which brings me to other reasons for air gaps. Doing anything on a computer these days sometimes feels akin to trying to get a day’s work done in a fairground. I remember, back in the day, when I would get something done, then actively connect to upload it, catching up on any downloads at the same time. It felt quite peaceful, and no doubt still would.
I’m not saying we should switch everything off, or indeed, disconnect like heroes in a dystopian novel — just that it would be nice to have a choice. Uncoincidentally, I’m talking to a few companies right now that offer various forms of network segmentation, so you can create private connections across cloud-based microservices.
Perhaps this could be extended to work between yourself and your stuff: just putting it out there.
Thanks for reading, Jon
P.S. Thank you to Ian for the ‘cloud hopper’ story.
Breaking Google, from Davey Winder
https://www.ft.com/content/3ba7fc12-8205-11e9-a7f0-77d3101896ec
Internet of things sparks race to replace the battery, by Jessica Twentyman
Inside the West’s failed fight against China’s ‘Cloud Hopper’ hackers (sent over by Ian)
August 2019
08-13 – Bulletin 9 August 2019. On microsegmentation and guardrails, and requirements as code
Bulletin 9 August 2019. On microsegmentation and guardrails, and requirements as code
We are all surgeons now. Sorry, I mean programmers
Hello, and welcome to this week’s bulletin.
I’ve been spending quite a lot of time talking to people about microsegmentation recently. I didn’t mean to; however, it would appear that purveyors of microsegmentation products are like buses: nothing for ages, then three come along all at once.
What is microsegmentation, I hear you ask. For a start, it’s a word that means a great deal to those who know what it means, and just about nothing to anyone who doesn’t… whenever a new term comes along, it never ceases to amaze me how readily it gets thrown around.
“Trans-notionalism is all the rage,” we say, within minutes of having heard the term and, potentially, whether or not we actually know what it means ourselves. I’m not sure if this is driven by disdain, fear or ignorance, but it certainly isn’t a desire to help others get on board.
Anyway, microsegmentation. A word which could mean many things, depending on context. If I say ‘networking’ it starts to shimmer in front of the eyes — oh, right, yes, network segmentation at a micro-level? That makes sense.
Which is kind of right, but then again, not really. Microsegmentation is actually about network security: you can define specific routes around a network, which (I guess) you call segments. You define them, generally, at the application level, which means you don’t have to reconfigure the network each time.
So you could, say, declare that services A, B and C could talk to each other. The notion then implies that nothing else can see (via encryption) what happens between the three. Quite handy if (say) you wanted to gather information from a network of weather sensors, without anything else seeing.
So, “secure network segmentation, defined at the application level” might be a good description. Clearly this is too long, as we have microsegmentation instead. I’m not sure of why the “micro” is there, other than to make it sound a bit more snazzy.
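To make that a little more concrete, here’s a toy sketch in Python of the policy idea at the heart of it: services may only talk if a segment explicitly allows them to. The service names are invented, and real products add encryption, identity and actual network enforcement on top.

```python
# Toy model of application-level segmentation: deny by default, allow only
# within a declared segment. This shows the policy logic only; real
# microsegmentation products enforce it in the network and encrypt the traffic.
ALLOWED_SEGMENTS = {
    "weather-telemetry": {"sensor-gateway", "ingest-service", "forecast-db"},
}

def may_talk(service_a: str, service_b: str) -> bool:
    """True only if both services belong to a common segment."""
    return any(service_a in members and service_b in members
               for members in ALLOWED_SEGMENTS.values())

print(may_talk("sensor-gateway", "forecast-db"))  # True: same segment
print(may_talk("sensor-gateway", "billing-app"))  # False: denied by default
```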
It’s a good idea. Which begs the question, as so often in tech, why didn’t it happen before? The answer is, it kind of did, but it kind of wasn’t exactly right, for several reasons. First, as mentioned, network segments were traditionally configured at the network level. By network engineers.
Which kicks off a whole raft of issues, not least that it means you need to talk to network engineers. Don’t get me wrong, network engineers are great people, some of my best friends are network engineers. But having to communicate with someone else is always going to slow things down.
And what if you want to change your mind? We’re in an age where everything is supposed to happen fast, and iteratively, while making up new things every day. Ideally, you can do things faster if you give more of it to the people building applications (the developers), and let them make the decisions.
It’s a good theory, but it rests on two shaky assumptions: one, that the developers will know what to do, and two, that all infrastructure today is magically simple, requiring little intervention. I’ve talked about the latter in a previous bulletin (TL;DR “It’s very complicated and can go wrong”).
As for the former, the microsegmentation approach allows for network engineers, and indeed security experts, to set what is possible, before giving away the keys to the kingdom: they can define usage policies, or in today’s terminology, “guardrails” to ensure developers don’t break anything.
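Extending the toy sketch above, a guardrail is just a policy the security or network team writes down, against which a developer’s proposed segment is checked before it is allowed to exist. Everything here is invented for illustration.

```python
# Guardrails, sketched: security defines what is permissible, developers
# propose segments, and anything outside policy is rejected before it goes live.
GUARDRAILS = {
    "forbidden_services": {"payments-core"},  # never reachable from app-defined segments
    "max_segment_size": 5,                    # keep the blast radius small
}

def validate_segment(name: str, members: set) -> list:
    """Return a list of guardrail violations for a proposed segment (empty means OK)."""
    problems = []
    if members & GUARDRAILS["forbidden_services"]:
        problems.append(f"{name}: includes a forbidden service")
    if len(members) > GUARDRAILS["max_segment_size"]:
        problems.append(f"{name}: too many members")
    return problems

print(validate_segment("weather-telemetry", {"sensor-gateway", "ingest-service"}))  # []
print(validate_segment("risky", {"sensor-gateway", "payments-core"}))               # one violation
```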
Taking a step back, we can perhaps see why microsegmentation is emerging as a seemingly new thing, given how it could, notionally, have existed decades ago. First, it is as much about the eroding boundaries between technologists as anything. As with Network Function Virtualisation (NFV, also discussed before), our digital mechanisms are increasingly software-based.
This means they can be dealt with by developers, though it is not necessarily the case that they should be. Just because everything can be programmed, that doesn’t mean that the group referred to as ‘programmers’ are the best placed to direct everything. We wouldn’t imagine this to be true for healthcare decisions; nor is it the case for network configuration.
We may be arriving at a juncture where most things can be software driven, but are also reaching a point where software becomes the basis of communication, and not the consequence. If this sounds like I am going off on one, I am probably not explaining myself properly so let me try harder.
We need all the groups of people we have: security and infrastructure expertise, business and financial acumen, science and artistic endeavour. Software will infuse all such things; indeed, it already does, but in a disjointed way. But what if I, as (say) a healthcare person, could set out what I wanted and express it in terms that could then be programmed?
What this leads to is a common way of expressing needs. The whole field of requirements management has emerged over decades due to the complexity of translating needs into code, and we are a long way off simplifying that. A start, however, is to recognise our applications as transient, as a way of doing something right now, rather than as an end in themselves.
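As a hint of what expressing a need in programmable terms might look like, here is a hedged sketch: a requirement from our hypothetical healthcare person written as an executable check (pytest style). The function and field names are placeholders, not anyone’s real system.

```python
# A requirement expressed as code (a sketch, not a methodology): the stakeholder's
# need is written as a checkable statement, and the application is only "done"
# while the statement holds. schedule_appointment is a hypothetical placeholder.
import datetime as dt

def schedule_appointment(patient_id: str, when: dt.datetime) -> dict:
    # Placeholder implementation so the requirement below can actually run.
    return {"patient": patient_id, "when": when, "reminder_sent": True}

def test_patients_get_a_reminder_for_every_booked_appointment():
    """Requirement: booking an appointment always triggers a reminder to the patient."""
    appointment = schedule_appointment("patient-42", dt.datetime(2019, 8, 20, 9, 30))
    assert appointment["reminder_sent"] is True
```

The application that satisfies the check is transient; the expressed need is the thing worth keeping.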
I will leave this here. If you have read this far, I would welcome your feedback. One thing is for sure: we are going to need a whole bunch more words.
08-23 – Bulletin 23 August. Automating operations and what's in a name?
Bulletin 23 August. Automating operations and what’s in a name?
Left brains and right brains
At a recent developer event, I happened to participate in a discussion about the role of what we call “operations” that is, managing and running IT systems, networks, storage and all that. The view, universally it appeared, was that operational IT was on the brink of being automated away, therefore rendering such roles redundant. Discussion turned to the fact that other roles would be available, so nobody needed to worry about jobs.
Which was nice, but irrelevant. Because IT infrastructure is not going anywhere. It only occurred to my flummoxed self some way through the debate, that the quite senior, enterprise-based people involved were largely on the development side. I have been characterising these as the right-brain creatives, largely directed by innovation, aspiration and other uplifting motives.
Meanwhile, on the operational side are the left-brain types, who need to work with reality and whose fault it will be if things start to fail. And fail they do, for reasons ranging from power shortages to software bugs, and everything in between (it is entirely relevant that the most celebrated early ‘bug’ was a real insect, a moth found trapped in a relay of an early computer).
Wait, I hear the visionaries say. It’s all about orchestration now. Work things out up front, write it in configuration files, throw it at the pristine racks of servers and it will just, you know, happen. That’s all very well, and I’ve been to the co-located data centre facilities with minimal staff where, indeed, it does all seem to be auto-magical.
Ramifications of this approach are that the configuration work still has to happen, even if specified in YAML (stands for YAML Ain’t Markup Language. Don’t ask). Such work can be given to Site Reliability Engineers, a new super-race of individuals that are absolutely bloody brilliant at defining infrastructure so it will just work. I’m pretty sure that’s the spec.
Or it can be allocated to developers, who, as we all know, love (and I mean LOVE) to spend their time doing things that aren’t development. “If only I could spend more time defining my target infrastructure,” said no developer I have ever spoken to. Okay, sorry, I’m being glib. The point is, however, that the job has to happen, even if the interface moves.
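For what it’s worth, the “write it in configuration files and it will just happen” model boils down to declaring a desired state and letting something reconcile reality towards it. A toy sketch, with invented names (real orchestrators are rather bigger, and somebody still has to write the declaration):

```python
# Toy reconcile loop: the desired state would normally live in YAML; here it is
# a dict. The orchestrator's job is to converge the actual state towards it.
desired = {"web": 3, "worker": 2}   # service name -> replica count we want
actual = {"web": 1}                 # what is currently running

def reconcile(desired: dict, actual: dict) -> dict:
    """Work out what actions are needed to move the actual state towards the desired state."""
    actions = {}
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have != want:
            actions[service] = f"scale {have} -> {want}"
    return actions

print(reconcile(desired, actual))  # {'web': 'scale 1 -> 3', 'worker': 'scale 0 -> 2'}
```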
There’s more. In terms of day-to-day, keep-the-lights-on operations, much of the effort goes into dealing with the consequences of poor or incompatible decisions. These can be, variously, badly architected solutions; applications being used for things they were never intended for; prototypes becoming live products because of shortening timescales; and so on.
Many such challenges are as likely to happen in software, as in hardware. Ops people can suck their teeth for a reason: it’s because they will remember last time a certain thing was tried, and how badly it went for everyone involved. If you speak to someone shaking their head and looking negative, it isn’t because they were born that way, but because they have learned to be so.
We do have some potential hope coming from the latest, greatest trends in tech: I’m speaking about containers, microservices and all that (for the uninitiated, this means defining applications as a set of highly portable modules which can be run anywhere). With such models comes standardisation of everything ‘below’, which leads to less incompatibility, etc etc.
However these ideas have a way to go. The Kubernetes container orchestration service may become the de facto standard, for example; but it doesn’t yet have everything it needs to support storage, networking or security, nor is there a generally agreed approach to building a Kubernetes application. We may be standardising one thing, but the rest is still very much to be dealt with.
And then, perhaps all we will have done is shifted the problem. With Kubernetes you can build fantastically powerful, yet complicated applications, bits of which could be running anywhere. And so, guess what, people will do just that, even when it is completely the wrong thing to do. And they will do it badly.
And when they do, somebody will need to be there to pick up the pieces, to work out where things stopped working, to isolate the problem and to feed back information that can prevent it from happening again. Who knows what they will be called, these clever people: something like “operations”, perhaps. No doubt in five years’ time, rooms of experts will tell us that such roles are on the brink of being automated away.
And round we shall go again.
September 2019
09-15 – Bulletin 13 September 2019. On the depth of learning and embracing frequent failure
Bulletin 13 September 2019. On the depth of learning and embracing frequent failure
New wine in snake skins
One of my favourite books was, and remains, The Voyage of the Dawn Treader by C. S. Lewis (yes, I cried when Reepicheep went over the sea). And one of my favourite passages is when Eustace is turned into a dragon. From a note (I think) he learned that, to become human again, he needed to shed a dragon skin (like a snake skin) and bathe in a certain pool.
So, he tried. He shed a skin, but it wasn’t enough. So he shed another, then another, bathing each time, but each time he emerged a dragon. Somewhat unexpectedly, Aslan the lion happened upon him: it’s not working, said Eustace. That’s because you are doing it wrong, said Aslan, who took out a huge claw and cut through Eustace’s many skins like an onion. That time, he emerged from the pool a boy again.
A couple of times in my career, I have been quite convinced I know it all… working back from the punchline, only to discover, quite categorically and without mercy, that I seriously do not. The first came just after I had been over-promoted to the point of deep stress, when working as an IT manager for a subsidiary of Alcatel.
Alongside the coping strategies and very real learning I was picking up on the job (I have, essentially, dined out on that experience ever since), I came to the conclusion that I had this management thing nailed: I doubted there was anything else to learn about keeping saucers on sticks, running meetings, facilitating, time management or anything else administrative.
I then joined Admiral Management Services, a company whose ethos gave short shrift to any such idea of grandeur. Yeah, whatever, was the attitude: take some minutes and be a good boy, would you? Learning the hard way (failing fast and frequently), I unpicked everything I thought I knew and re-knitted it into some semblance of genuine best practice. Which I have also dined out on ever since.
The next big moment of big-headedness came a couple of years into my analyst career, when (at the height of the dot-com boom) I thought it a really good moment to set up on my own. The lows of the dot-bomb followed almost immediately, mirroring both my feelings of utter incompetence and my bank balance. So many lessons learned, not least, cooking on a shoestring.
I didn’t mean to say all that: I was only going to talk about my memories of being in (what felt like) financial difficulty: any money we had was always in the wrong place, cheques bounced and bills went unpaid, with banks gleefully adding their fees to any debts incurred. I wasn’t going to say that either: I was only going to make the point that getting back on track didn’t just take extra effort: it took extra effort beyond what I thought extra effort meant at the time.
Like Eustace, sometimes the problem goes far deeper than we have the ability to understand. Not least in questions of learning, particularly when we come to it from a position of knowledge. Surely, people understand what is being discussed, we say, or can understand it as long as we explain it correctly. Even if they are a bit behind. And so, in areas of so-called ‘new thinking’, we can have whole conversations without realising that what we are saying is of very little relevance.
I’m coming to think this is the case for my own current area of specialism, DevOps. Even as I discuss how to make it work better (a.k.a. ‘to scale’), I have my good friend (and practitioner) Andy’s words ringing in my head: that nobody is really doing it, they just say that they are. Perhaps to get the analysts off their backs. It’s not just DevOps: the reason we can keep saying the same things about best practice, webinar after webinar, year after year, is that people still don’t get it.
To wit. I’m not sure what the answer is but based on my own experience, perhaps part of it is to recognise the fact that we are a lot less mature than we would like, as organisations and as people. And we need to deal with this as Eustace did, not superficially but digging really deep, getting right down to the base, beneath layers and layers of traditional practice.
Not to do so will cause us to reinvent whatever we are talking about, for a number of reasons. First to avoid boredom — you can only hear people bang on about the same thing so many times, before cognitive filters push it into the background. Second because it loses its effectiveness as the world moves on. And third, because lovely marketing and PR people want something new to talk about, in the name of differentiation or thought leadership.
These days I have come to realise that ‘knowing it all’ is a false summit, a thinking person’s Tower of Babel. Rather than embracing my own inadequacy and giving up, I’ve found a joy and freshness in learning: in essence, I’ve come to terms with my own, valid feelings of imposter syndrome. Perhaps we could all do with recognising that we don’t all get it, neither at a superficial nor deep level.
There is no shame in this, as such an admission is the first step in actually working out what the heck is being discussed. Like Eustace, we can all benefit from digging particularly deep in terms of what we don’t know, understanding the problem we are trying to solve before applying the latest iteration of solutions.
Thanks for reading, Jon
P.S. I said these bulletins would be more factual from now. This one isn’t but I’m on holiday, so sue me.
2022
Posts from 2022.
September 2022
09-22 – Bulletin 25 October 2019. On building a firmer foundation, and wibbling
Bulletin 25 October 2019. On building a firmer foundation, and wibbling
My kingdom for a platform…
Humans are magpies: we love shiny things. Trouble is, one of the curses of the technological revolution is that we have become inordinately good at creating them, in both hardware and software, and we can’t help but be distracted by them.
Consider for example, publishing a book. For hundreds of years, this was the domain of the few: authors such as Dickens relied on magazine publishing first and foremost, whereas now, “why don’t you self-publish” opens the door to a complex and overlapping variety of platforms. In audio and video, the story is the same, as it is in cloud computing: AWS has built a business by offering every possible option.
The result is a feeling of bamboozlement for the few, even as the many succeed. It also distracts from the point: I don’t think technology addiction is as big an issue (though it is big) as the time people waste, in their billions, lured into low-level procrastination via technological tools.
We want to produce, but in doing so we find ourselves consuming, forgetting what we came here for. At home, we call a certain moment in the supermarket “going into a wibble” when, overwhelmed by options, we end up standing, slack-mouthed, in the middle of the aisle. Completing the shopping becomes the challenge, and meanwhile ‘they’ have us exactly where they want us, helpless in their grasp.
I’m coming to believe that recognising this phenomenon, and other behavioural displays, is essential to plotting a route through this. Distraction, deflection and false debate have overtaken our politics only because we are not yet sufficiently hardened to them (in a nutshell, we are all being played). We’ll work it out, and when we do, these will no longer be such powerful weapons to be used against us.
Gosh, that went a bit dark. Meanwhile, I finally managed to scrabble enough… oh wait, I just need to take a phone call from someone who is offering free alarms to people living in my area… that actually did just happen, now where was I? Ah, yes, I finally managed to scrabble enough wherewithal to get the final chapters of Smart Shift online. In a nutshell, technology needs to be handled with care.
The landing page is here, and you can read my retrospective thoughts on it here. Bottom line: don’t let anything get in the way of what you want to do.
All the best, Jon
09-22 – I'm worried I don't have imposter syndrome...
I’m worried I don’t have imposter syndrome…
What a difference a year makes. This time last year, I remember it well, the feeling of being unable to sustain the weekly newsletter I had been writing. Work had taken an unexpected turn, in that one client had a stream of activity that was growing like topsy. It was all hands to the pumps as we looked to get on top of it all — somewhere along the way, I was given the rather snazzy title of VP of Research.
Nothing has changed. Throughout lockdown, things have stayed as busy as ever. We were very lucky in one way: when the company was relaunched a few years ago, it did so without expensive offices in downtown San Francisco, preferring to focus on delivery. The business is multi-location, multi-timezone and works on the basis of trust, that people aren’t off playing golf when they should be working. Turns out, looking at recent circumstances, that people in general are pretty trustworthy, if you give them half the chance. Every barrel has bad apples, etc, but the point stands.
So, yes, super-busy but nonetheless I’ve had time to reflect. Not least on my area of focus, which is how software is developed and deployed into operational use. There’s a massive irony here. My first job was as a programmer; I later ran tools and infrastructure for development teams; I went on to advise some pretty big organisations on how to develop software, and how to manage data centres, servers, storage, networking, security and all that. I’ve written books about it, for heaven’s sake — so how come, when I came to focus on this area again, I felt so out of my depth?
I’ve been tussling with this question for the last couple of years. Part of the answer is down to the fact that languages (including programming languages), mechanisms and tools change, as do infrastructure targets. Declarative and event-driven approaches are generally beyond me: you can put me in the category of someone who can write Pascal in any language. People change, and so do ways of thinking, of interacting. Which is why I felt even more out of place than usual, when I went to a GitHub Universe conference (they should have a special corner for old farts, with, I don’t know, ink pens to play with or something).
However, I’ve seen patterns emerge, patterns which have both eased my fears, and given me the opportunity to give something back. Back in the Nineties I was working at the forefront of what we might call ‘the agile boom’, a time in which older, ponderous approaches to software production, with two-year lead times and no guarantees of success, were being reconsidered in the light of the internet boom. The idea was, and remains, simple: take too long to deliver something, and the world will have moved on.
I’ve watched as methodologies have evolved, as programmers have been positioned as the kingmakers, as yet more books have been written, each claiming to have discovered some secret of unprecedented success, or a way to avoid inevitable failure. Look at Netflix and Amazon! Look at Blockbuster and Kodak! No doubt, these success and failure stories are real, but need to be tempered with the fact that four fifths of technology startups fail, whatever clever techniques they’re following. And meanwhile, many of the biggest, oldest businesses in the world lumber on.
Aided by online information sharing, the world has become lifestyle-oriented, with new and improved always preferable to whatever was happening before — and the tech industry has not escaped the grasp of faddishness. No wonder I was feeling out of place, as I probably wasn’t the target market. Meanwhile, beneath the surface lies a simple truth, that such short-termism means missing out on fundamental elements such as planning, setting strategy and so on. And the spin-off result of doing lots of things very fast, is to generate a lot of complexity which then needs to be managed.
Ah-ha, I thought to myself. It’s no surprise to me that we’re seeing what we might call ‘a wave of governance’ start to envelop the world of software development, as short-term perspectives are reconsidered in favour of getting things right first time. There are buzz-phrases for this, of course, such as ‘shift-left’, which is about thinking about quality and security earlier in the process. Looking for governance-related topics to write about, I actively chose to align with the world of Value Stream Management (VSM), which essentially brings business process optimisation to software delivery. I paraphrase but that’s close enough.
Do I think that software should be delivered more slowly, or favour a return to old-fashioned methodologies? Absolutely not. But it is right and proper that current approaches should mature to take longer-term goals into account. I know that even our most darling of cloud-native mega-businesses are now struggling with the complexity of what they have created — good for them for ignoring it while they established their brand, but you can only put good old-fashioned configuration management off for so long.
So, yes, I’ve moved from a state of imposter syndrome to one in which I feel I can contribute some of the older lessons. And the good news is, it’s not just me that can do so. Many older, larger companies I have spoken to feel similarly out of their depth, like they can’t compete with newer organisations. For sure, they can’t suddenly become carefree startups; however they can recognise that many enterprise-y practices (such as aforementioned strategy and planning) are actually a good thing, which can be woven into new ways of delivering software. Put simply, as newer approaches evolve and mature, they become more appropriate for enterprise use.
I’ll leave that thought there, other than to say: don’t feel you have to be down with the cool kids, they haven’t got everything right and still have plenty to learn. That’s as true from a corporate, as an individual perspective.
Someone, Somewhere — The Whole Of The World
In other news, I’ve written a song. It was quite cathartic to write and develop - you can hear it here. I’ve entered it in a local music competition; if you like it and want to vote, it’s song number 34.
All the best, and perhaps see you next year if not before. Thanks for reading,
Jon
November 2022
11-11 – First, apologies…
First, apologies. Not for this being the first bulletin for two years (I will come to that), but for the people who responded to the last one in September 2020 and never got a reply. Your emails have been languishing in my inbox ever since, ever-hopeful I might get my act together. Well, now, I have; or at least, I have once more. We shall see if this becomes a regular thing.
I am, actually, quite hopeful. This isn’t going to be a “everything’s different now, I’m sorted” kind of squirmy rationale, but rather, an observation on the state of online publishing. In my own case, I had multiple self-hosted and aaS Wordpress sites; I was testing out Medium; I had this very newsletter on Mailchimp. And meanwhile, LinkedIn was yoo-hooing for my time, as were several Facebook pages and online groups. All that notwithstanding that I write for a living, as a hobby, and as a raison d’être.
Note, this wasn’t people asking me to write stuff, I wouldn’t be so “don’t you know who I am” about that. Rather, the business model of all these places is that, to get eyeballs, they need user-generated content. Nothing new there, I know; but I would point the kind reader to the very real consequence of content-oriented paralysis, probably not all that different to the yoghurt-related stumblings we all experience in supermarkets.
All of this ably documented by psychologist Barry Schwartz in his book “The Paradox of Choice - Why More Is Less”. I haven’t read it, truth be told, as there are simply too many other books on similar subjects, and I wouldn’t know where to start. And equally, it’s one of those books that is so neatly summarised by the title that it doesn’t need to be read. Sorry Barry, and sorry too to the author of the 80/20 rule (though Mr Pareto is probably due a few royalties there).
In a Medium post on September 20, pretty much two years to the day that I wrote the last newsletter, I wrote, “Too many places to write, too many subsequent feeds, integrations and synchronisations to manage. In result, nothing gets done.” I realised that I needed to stop caring about where things were to be published (and such things as cadence, building audience etc), and focus primarily on the act of writing.
I’d love to say this came to me as an epiphany, as I sat overlooking a sunrise on a distant beach. The truth is somewhat more mundane: I was actually facing several database limit warnings from my hosting provider, as my main Wordpress site had been inundated with spam and I hadn’t been clearing it out. For reasons unknown, any subsequent efforts to do so were unsuccessful - so I decided to export, delete, and re-import the lot. Little did I know that the various plug-ins I had installed over the years would conflict with each other: the import failed, and I was forced to think again.
Cue allegorical thought: whilst I hadn’t been the smartest in not testing the import before deletion, for those who don’t know, this is a common cause of technological change in organisations large and small. We’d all love to think that IT strategy is set from the top for well-debated reasons, or led by smart engineers looking towards continuous improvement. Sure, these things happen, but equally often, IT teams are left scrambling to solve a problem caused by an inadvertent change, with consequences linked to unexpected dependencies. Truth, and as my old colleague and friend Tony might say, “It’s not the backups that matter, it’s the restores.”
So, I was stuck with a very large XML file, unreadable by Wordpress, and containing the past twenty years of my writings. Hmm. Fortunately the past couple of decades, fuelled by open source and more general online behaviours, have resulted in all kinds of scripting languages and libraries, available to even those with now-limited hands-on experience. In not many lines of PHP, I was able to extract the meaty parts of the XML file into separate RTF documents, which I then imported to my preferred writing tool, Scrivener.
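For the curious, the original was in PHP; here is a rough Python sketch of the same idea, pulling the title and body of each post out of a WordPress export (WXR) file and writing each to its own plain-text document. The file names are illustrative, and a real export needs more care over drafts, attachments and encoding.

```python
# Rough sketch: split a WordPress export file into one document per post.
# The author's original did this in PHP and produced RTF; this simplified
# version writes plain text, which is enough to show the shape of the job.
import xml.etree.ElementTree as ET
from pathlib import Path

CONTENT_TAG = "{http://purl.org/rss/1.0/modules/content/}encoded"  # WXR post body tag

def extract_posts(export_file: str, out_dir: str) -> int:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    count = 0
    for item in ET.parse(export_file).getroot().iter("item"):
        title = (item.findtext("title") or "untitled").strip()
        body = item.findtext(CONTENT_TAG) or ""
        safe_name = "".join(c if c.isalnum() else "_" for c in title)[:60]
        (out / f"{safe_name}.txt").write_text(f"{title}\n\n{body}", encoding="utf-8")
        count += 1
    return count

# extract_posts("wordpress-export.xml", "posts")  # hypothetical file names
```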
Interestingly, they amounted to 480,000 words, which isn’t a bad corpus of ramblings. Equally, and when faced with having to post each one back online individually, I realised many of them didn’t merit publication - it’s not as if everyone wants to read about my re-installation of some early version of Linux, is it? And equally equally, I realised I finally needed to get on top of what I published where, so I could start writing again, rather than spending my time “maintaining content”.
So, gone was the self-hosted Wordpress (I created a new site and almost immediately got spammed - bye!). Gone also was the public Wordpress - I reused my script to pull multiple other articles into Scrivener. Gone was Substack, which I’d recently created with a misplaced notion that a new online destination might be just what I needed to get things going again. But in their place were Medium, which seems to be a reasonable place to host general content, LinkedIn for sharing more tech-related materials, and then a small, neat personal web site as a shop window, or mausoleum depending on which way you look at it.
And that’s that. As a result I’ve been able to re-upload about 100K’s worth of old stuff (of 480K: how’s that, Pareto?), sharing as I go on various socials, and now - phew! I can get on with writing. What about the internal choice, I hear you say? I was going to add something about the existential nature of what to write about, but even as I write this, I realise that is less important. By unblocking the pipe and letting the writing flow, that’s a large part of the problem solved. I’m not even going to worry about whether anyone wants to read it, not for now.
There - some allegorical thoughts, but most of all a release to get on with the job. I would add “extra-curricular” at the end of this missive, but it hasn’t really been very technical, has it? I will mention that, as part of this release of orthographical tension, I’ve taken the opportunity to start revising and uploading Smart Shift, my history of tech and its consequences, so watch that space; along with the decision to get other part-written and complete stories online. Hooks to all of these at my web site if you are interested.
There we have it, and thanks for reading,
Jon
December 2022
12-04 – Bulletin 2 December 2022. The System Is the System, whether we like it or not
Bulletin 2 December 2022. The System Is the System, whether we like it or not
Sir Galahad: Camelot!
Sir Lancelot: Camelot!
Patsy: It’s only a model.
Working in technology has an addling effect on the brain. Despite decades of trying to ignore the Kool-Aid, its residue has still established a beachhead in my mind, has created just a little furring in the pipelines of my thinking. Even as I look at the world, see its continued stumble forward into what I hope is a more enabled, empowered future, I want to believe the visions of a better place I receive on an almost daily basis. Everything will be better, cheaper, greener, I’m told – and while I know it isn’t as simple as that, I want to believe the superfice, to take the bold statements at face value. I really do.
The problem isn’t that it is all made up; more, the challenge is how to tease what can be achieved, from what is being presented. Don’t get me wrong – I know we need a vision. As humans, we’re only likely to respond if we are excited enough to buy into what’s being presented.
We also love a charismatic speaker, who can make everything sound possible. I know I do – I still remember buying three bottles of some cleaning product from a man outside a hypermarket in Paris, just because he was so good at telling me how good the product was. The punchline isn’t even whether it worked, when I got it home; it’s that I never found out, because in all the excitement, I’d temporarily forgotten that I rarely clean my car. The three bottles sat on a shelf in the garage for at least a decade before they were disposed of. No, I’m not proud of this.
In business, it’s a challenge, isn’t it? Every organisation needs a vision, needs leadership, needs all the oomph it takes to make its employees want to get up in the morning. And similarly, the tech industry thrives on cranial energy, generating its own cryptocurrency by way of synaptic power. Without excitement, there would be no industry – and therefore no innovation, no breakthroughs, no digital transformation (there’s the K-A kicking in). I’ve written before about annealing, in which metal amalgams are heated so that they flow together, before cooling to form a hardened alloy – this is simulated annealing of the senses, just without the cooling.
The challenge is when it – the vision – doesn’t actually map onto an achievable reality. I remember one person I used to work for, who did a pretty good job of painting a picture of what the future was going to look like. “Trouble is, he’s started believing his own bullshit,” said a colleague. True or not, that’s the danger – if our own thinking (singular or corporate marketing) jumps some figurative shark, then we arrive in a nether-nether wasteland of unachievable goals.
Hyperbolic statements about the future are one thing, but similarly imaginary realities about the present are quite something else. We can all be skewed by our own perspective and experience (to the extent that we can believe everything is fine just because we are). Denial is a dangerous thing, and denial of reality is doubly so.
Some of this boils down to straightforward systems thinking. I’m not greatly experienced in this area so bear with me, but one thing I have learned is that it is a question of scope. If you look at a manufacturing plant, for example, it takes in trailer loads of stuff at one end, and similar trailers line up at the other to take away its mass-produced outputs. You may have a great idea that it could be doing something completely different, but it won’t, any more than a car could double as a parachute.
And so it is in business, which takes in whatever it likes – widgets or liquids, ideas and concepts – and outputs something people will want to pay for. Organisations today are tussling with how to deliver on the promise of the technological age: they’ve been told that digital transformation implies some wholesale change in business models, a starting from scratch, a reinvention. I’m completely up for it, sounds great – but for two things. The first is that every single new organisation I have seen, as it scales, creates similar structures and has similar cultural/political elements as every other, past or present.
This is debatable. We have flatter structures, matrices, self-managed clusters and so on. But they all still have meetings, silos, career ladders, inter-departmental politics and the rest. No magical structure exists which somehow misses out our human traits, habits and foibles.
The second issue, a corollary of the first, is that making any change will inevitably hit a wall of potentially insurmountable resistance. You can do what some are currently trying, which is to remove half the staff and keep your fingers crossed that you still have a business at the end of it. Or you can work through the well-founded principles of change management and make the adjustments that are actually feasible, and palatable for your organisation.
The digital transformation mantra may well have been “transform or die,” citing Kodak, Blockbuster and the like; but there’s even greater potential for “transform and die” – it’s funny how the tech industry chooses to ignore the demise of its own business empires (Yahoo! and Myspace, anyone?) even as it keeps telling other industries how they need to get with the program.
What’s the answer? I hate to get dull about this, but it reminds me of my days as a business analyst, where we followed a pretty boring, three-step methodology (even the word makes me want to yawn) when it came to business change. First, understand the as-is situation; then, define a to-be situation; then put together a plan for getting from one to the other. Insert “low-hanging fruit” and “stakeholder buy-in” and you’re pretty much done.
So, yes, to as-is and to-be. The point here is that, unless you’re going to take a leap into the unknown, you need to know what you have, before you decide how to change it. The art of the possible has to accept reality as it stands, working with the existing system rather than creating a new one (nota bene, the latter option exists: it’s called setting up a company).
To finish on a piece of good news, even such an approach requires vision, leadership and indeed, charisma to deliver. There’s plenty of mileage to be gained from adopting technology in new and exciting ways, without feeling obliged to replace the farm with a casino in the sky. In fact, in the longer term, perhaps more so.
Apropos of vision vs reality, here’s an article I wrote this week about the “Journey to the Cloud” mantra costing organisations money, as it skews their technology strategies and causes them to spend more than they should, whilst achieving less benefit than they hope. I proffer “multi-platform architecture” as one way of thinking different. Here’s my post on LinkedIn about it, I’d welcome any thoughts you have (there or here!)
Thanks for reading,
Jon
2025
Posts from 2025.
May 2025
05-20 – On principles, in principle
On principles, in principle
Well, hello again.
I do like principles. Over the years I have spent my time looking for patterns in structures, in processes, in behaviour. To be fair, I can’t help myself — as a kid I used to do it with wallpaper, or the light shapes cars made as they passed (didn’t we all, before TV begat the wealth of multimedia channels we have today). From time to time, I’ve written them down — back in the late Nineties, for example, I documented how all processes could be defined as lines, V’s and loops.
Anyway I’ve collected a few of them, some adopted or borrowed, some epiphanies, some emerging from solving a tricky problem. I reckon I have another ten years in this game, one way or another, so now’s a good time to start distilling them down, making sense of them — if others can learn from my a-ha moments (and mistakes) then, great.
Here’s my current top seven from a tech perspective, and more broadly:
Orientation before action — know where you are, before deciding what to do. A very personal one, this.
Align with complexity — we’re not going to fix (or even understand) it all, and we can’t pretend it’s not there.
Plan forwards, map backwards — a corollary, as well as reflecting the first habit: start with the end in mind.
Architect not too large or too small — Goldilocks got it right, particularly when it comes to structure, data and process.
Deliver often and early — getting stuff done is about divide and conquer, and then getting something out there.
Manage everything as a service — so powerful, this, as it enables traceability. As true for restaurants as applications.
Recognise value above all — ultimately, success comes when benefits outweigh costs, the fundamental equation.
I don’t know whether these will distil down to three, or whether I will end up with hundreds, time will tell (speaking of time, that has to be a principle in itself: I am increasingly convinced it is the most valuable commodity we possess). But it’s a good set for now. I fed them into Larry the LLM and asked what they shared: intention, it said, which was a pretty good answer. Not mine, I was thinking balance, but “intentional equilibrium” has a ring to it.
I don’t know where this is going, perhaps this will just be the latest sporadic bulletin from the steppes of rural England. Still, it’s now coming from a new, self-hosted web site, where I have collated the (literally) thousands of posts I’ve written over the years, as well as books, poetry and everything else. Thank you for sticking with it, and we’ll see what the next few years hold.
Cheers, Jon
Cheshire Cat
Posts published in Cheshire Cat.
1999
Posts from 1999.
April 1999
04-01 – Government launches IT White Paper
Government launches IT White Paper
Perhaps the most important elements we can glean from the government’s white paper “Modernising Government” are the milestones it sets for the availability of its electronic services. The first is that 25% of government services should be available online by 2002. The second sets a target of 2008 for all government services to be accessible electronically. Pragmatic application of the Pareto principle – the 80:20 rule – is required to ensure that the most useful services (in terms of both utilisation and benefit) are included in the 25%. In this way the needs of most individuals and businesses should be met, and the costs of delivering government services should be dramatically reduced within three years. A lot, however, is dependent on facilities to access such services. Will the government run electronic and paper facilities in parallel? Will this be seen as the moment when the possibility of information haves and have-nots became reality, with connected citizens being ever better informed and better able to access services than “the great unwired”? Proactive steps will be necessary to avoid this.
The 2008 deadline might be “realistic,” as noted by Ian Taylor MP at the launch of IT-Director.com yesterday evening. However, when the time comes it will be very difficult to judge whether the government of the day has achieved the target. The eWorld is commonly known to be changing at the pace of dog years; it is also recognised as an enabler of vastly different ways of working and living. It is likely that the services provided by government will change significantly over the next decade, particularly as it becomes clear that the internet is a vehicle for even the most insignificant to have a voice. Hence, even if the government of the day meets its goals, it is likely that whatever is achieved by 2008 will bear little resemblance to what was initially planned.
(First published 1 April 1999)
04-01 – IT-Director.com launched by Bloor Research, Certus and Silicon News
IT-Director.com launched by Bloor Research, Certus and Silicon News
Yesterday evening, the realities of how business will operate in the 21st century were very much in evidence at the House of Commons launch of IT-Director.com. Speakers Ian Taylor MP, Paul Cooper of British Gypsum, David Taylor of Certus and Robin Bloor of Bloor Research, all described how co-operative approaches, such as those enabled by IT-Director.com, would become essential elements of business success.
Robin Bloor, CEO of Bloor Research, described the role of IT-Director.com as a provider of “instant analysis” to its subscribers. This was noted by Ian Taylor MP as a key facet of the online service, acknowledging he was delighted that it would be available to members of the house. David Taylor, President of Certus, explained how IT-Director.com and Certus intended to provide a triangle of communication between end-users, vendors and decision makers. “IT-Director.com will help us provide a quality IT service to positively support business goals,” he noted. Paul Cooper, IT Director at British Gypsum, stressed the role of the future IT Director as focussing on business requirements and issues, whilst promoting and facilitating the use of IT. Continuing the co-operative theme, Certus also announced the launch of its 4th R campaign, involving businesses working with education to enable the next generation of IT-literate employees.
(First published 1 April 1999)
04-01 – Open Goal for Microsoft
Open Goal for Microsoft
Why do Microsoft want to open their source? The answer is that they probably don’t. For one thing, it is unlikely that the Windows code is particularly pretty. Any software that has gone through the same number of reincarnations as Windows is likely to contain plenty of redundancies, inconsistencies and downright errors. There are tens of thousands of seasoned developers who will be going over the stuff with a fine-tooth comb the moment it is released, looking for problems. Undoubtedly they will find them, and gleefully report them to the baying (and burgeoning) anti-Microsoft camp, adding to the company’s already tarnished reputation for quality. Given the current climate, it may be that the software giant has no choice but to surrender a little of its projected image of perfection. A little lesson in humility is not such a bad thing, once in a while.
(First published 1 April 1999)
04-09 – ISAs and Legacy Systems
ISAs and Legacy Systems
ISAs are quietly posing a significant problem to the IT departments of financial services companies. There are several reasons for this. First, ISAs are essentially a management vehicle with which a customer can manage a variety of investment types. The implication for IT is that legacy systems which could previously be run standalone now need to interoperate. Similarly, business processes, for example for savings and for investment customers, are having to be merged. Second, the complex rules governing ISAs were only available in draft form until very recently, hence organisations have had very little time to implement ISA facilities in their applications. Together these factors are causing a great deal of grief, not to mention cost. It could be argued that the ISA situation will be beneficial, as it encourages companies to integrate their legacy applications and, in the process, provide a better overall service. However, what with the Year 2000 and the Euro, it is probably an additional headache they would have been glad to do without.
(First published 9 April 1999)
04-09 – The Art Of War
The Art Of War
The Web is changing the world, in peace and, so it would seem, in war. All sides of the conflict, plus external agencies, are using the internet more than ever to aid them in their tasks. If the rumours are true, for example, systematic hacking of both military and civilian sites is becoming expected as a weapon of war. At the same time, as in Rwanda, relief agencies are reported to be using bulletin boards to help reunite refugee families. Donations to such agencies are now accepted online. News agencies are broadcasting up to the minute information, in a range of formats and languages, to a global audience. As with all tools, the internet cuts both ways.
(First published 9 April 1999)
04-12 – Global Village Voice
Global Village Voice
BT is launching a voice-over-IP service. Read this statement, now read it again. British Telecom, the organisation providing telephone and dial-up services to the vast majority of the British public, is launching a service which enables its users to communicate via the Internet at the price of a local call, wherever in the UK they may be situated. Admittedly, the service is only open to customers of BT’s own ISP. Also, it is known that Internet telephony still suffers from quality issues reminiscent of patching a call to a less-advanced foreign country ten years ago. But still, this move by BT is a milestone in the inexorable rise of the Internet. What we are seeing is the convergence of technologies and this has several implications. Given BT’s acceptance (and endorsement) of voice-over-IP, the question raised is that of relative cost. Subscription to BT’s ClickFree service is now free, and two BT ClickFree subscribers can call each other at local rates as long as each has a PC. In other words, users of telephone handsets are going to be discriminated against – these unfortunates will still have to pay the price of a long-distance call. This overhead will become difficult to justify as quality improves and hence more and more people start using the Internet for voice calls.
BT, like other telcos, has already responded to concerns on call pricing, with the price gap between international calls and local calls decreasing. This is fair enough, as most of the costs of a call are derived from enabling a connection to the local exchange and subsequent billing of the call. BT will be keen to keep its revenue streams open for as long as possible. Ultimately, though, the costs of both national and international calls will have to bow to the pressure of the Internet.
(First published 12 April 1999)
04-12 – Sailing the Shifting Sands of Silicon
Sailing the Shifting Sands of Silicon
One of the issues faced by hardware manufacturers is knowing which technologies and markets are a stable enough foundation on which to base a business strategy. Recent reports are an indicator of this, from IBM’s report of a $1 billion loss for its PC division, to Compaq’s shock earnings announcement today. Stable foundations are elusive, but companies have no choice but to continue their efforts to identify, encourage or even create points of sufficient solidity on which to develop and market their products. Even if some organisations, such as Dell and Gateway, have business models which are demonstrably successful for a while, it is clear that no one company has the monopoly on predicting the future.
One company which may be getting it right is Hewlett Packard. HP has a hard-earned reputation as a provider of quality, technologically sound systems and equipment. Unfortunately, it has also gained a reputation for being a little dull and it has been left behind in the systems race. There are signs that all this may be changing. Last month, HP announced it was spinning off its testing and measurement division. The company is now announcing a range of computers to be targeted at the e-commerce market. The new N-Class servers will run Unix as well as NT, proving the company’s strategic commitment to this operating system. Secondly, and perhaps more importantly, the servers are designed to be upgradable to Intel’s 64-bit Merced processor, due out next year. Hewlett Packard is not just aligning itself with Intel’s strategy: it is instrumental in the development of McKinley, Merced’s successor, and is developing a number of key components of the chipset required by McKinley.
This approach is low-risk, considering that other parts of the industry (including Microsoft) are endorsing Intel’s 64-bit architecture. At the same time, HP appears to be injecting more pizazz into its product marketing. In doing these things, it is positioning itself as a major player in the systems battles of the future. Who dares might just win this time around.
(First published 12 April 1999)
04-14 – Applications - Rent or Buy?
Applications - Rent or Buy?
The chances are you have not heard of Corio, however you will have heard of British Telecom. These companies are linked by a common objective - to provide application services to customers over the Internet. Corio, a Californian firm, is linking with Peoplesoft; BT has struck a deal with SAP. There will be many providers of such services, making enterprise scale applications such as these available to customers that could not otherwise afford them. Smaller scale applications can also be rented out over the Web – for example Yahoo already provides email and scheduling and is reported to be considering providing word processing facilities.
The advantages of application outsourcing are numerous – for example, capital costs and administration overheads are enormously reduced and customers can take advantage of the newest releases of applications. Downsides are yet to be fully appreciated as this market is still in its inception, but are likely to include problems of interoperability between local and remote applications, issues with service level maintenance and difficulties in changing suppliers. Just as with outsourcing of IS departments a few years ago, perhaps the biggest problems will stem from the legal wrangles which will inevitably arise as customers find the service is not all it was hoped for. Application outsourcing, whilst appearing extremely attractive, should be treated with the utmost caution until its benefits and costs have been fully appraised.
(First published 14 April 1999)
04-14 – Internet Value – Business Sense
Internet Value – Business Sense
Nobody really knows what the Internet is worth, but whatever its value, companies which have successfully invested in the Web are reaping its rewards to the detriment of those left behind. At the Massachusetts Institute of Technology yesterday, Bill Gates underlined his predictions of ubiquitous Internet usage by saying that “Billions of dollars of market valuation are based on knowing how quickly this [Internet uptake] will happen.” Organisations unable to exploit these cash flows are proving the losers. For example, Merrill Lynch came in 12th in the US last year, down from 8th in 1997, relative to other securities companies setting up Internet IPOs. Merrill blamed the “less-than-constructive” advice of its (former) Internet analyst, but it must bear the cost.
Smaller companies, too, are suffering as they find that the Web does not wait around for the late arrivals. Bob Geldof’s flight reservations site deckchair.com, announced last September but not launched until today, will pay the price: the unique features it touted six months ago, such as flight availability, personalisation and online booking, have become standard on a wide variety of travel sites. The Internet is a club with little respect for its members: it remains to be seen whether deckchair.com is arriving too late to join.
(First published 14 April 1999)
04-14 – Linux - Immature but happening anyway
Linux - Immature but happening anyway
The open software movement has long been seen as a David, able to topple the Goliaths of greedy corporations and sluggish standards bodies. In reality, better parallels might be derived from the natural world, where myriads of chaotic organisms work together, over time reducing old orders to rubble and creating their own. It happened with TCP/IP, which was considered the de facto standard by the masses long before it was grudgingly adopted by the big people. It shall happen, indeed it is happening, with Linux.
Despite its ready adoption by the technical community, it has to be said that Linux is probably not ready for mass-market rollout. As a Unix derivative, Linux is too complex to be given to “just anybody.” Efforts are underway to change this – Caldera Systems, for example, are launching an easy-to-use Linux but even they admit it isn’t ready to be pitched against Windows. Linux also lacks the applications support of Windows, a fact acknowledged by Red Hat, another Linux distributor.
Yes, Linux is immature. However this does not seem to be preventing a desire for its adoption across the industry. In a recent poll of 3Com customers, 50 percent of respondents requested support for Linux, a fact which has caused 3Com to rewrite its Linux strategy. Other companies are experiencing the same upsurge and are responding accordingly. It is unlikely that the Linux bandwagon is a fad: rather, the global technical community is responding to the potential of a robust, flexible, free operating system. Experience shows that, once this level of consumer interest exists, it is unlikely to go away despite the best efforts of the giants to subvert or ignore it.
(First published 14 April 1999)
04-16 – For Banking, the Future is Online
For Banking, the Future is Online
A spokesman for NatWest said yesterday that Internet banking was a “luxury service” and that NatWest would not be closing any high-street branches. There are two ways in which these remarks can be interpreted. Firstly, that NatWest do not see the Internet as something which will significantly affect their business. This interpretation goes against the grain – the Internet is speeding up trends which are already well-established, such as the market’s acceptance of telephone banking. It is very likely that in the future, a significant proportion of banking transactions will be undertaken without a branch visit. Such a vision is difficult to realise at the moment, as the provision of facilities is dependent on customer adoption and customers are notoriously conservative. However, it is only a matter of time before the UK consumer decides that it is, indeed, more convenient to dial a number or type a Web address rather than try to find a parking place on a busy street in office hours. The second way to interpret the remarks is that NatWest are discussing the short to medium term. We must hope that this is so: when the use of online and telephone service reaches critical mass, no business will be able to avoid its effects.
(First published 16 April 1999)
04-16 – The Sun is Shining Again
The Sun is Shining Again
SUN Microsystems, it would appear, can breathe again. SUN have never been unsuccessful, not in monetary terms; however in technological terms the last few years have been a little tense, to say the least.
Not so very long ago SUN was the provider of choice for Unix workstations and servers. The mainframe was dead, so it was assumed, and the lower end of the market consisted largely of PCs running MS-DOS. The workstation market was virtually sewn up, with users finding the term SUN as synonymous with workstation as Hoover was with vacuum cleaner. Need to develop software? Use SUN. Need to run a CAD application? Use SUN. They had it made.
Things started to turn sour for SUN when Windows 3.1 was launched. Within months, the costs of workstation hardware and software seemed all too expensive relative to the apparent cheapness of a PC running Windows applications. IT managers, used to running PCs, saw cost savings and very quickly the de facto desktop became the PC. SUN were on the run.
Smaller companies might have buckled under the joint pressures of Intel and Microsoft, but somehow SUN have refused to let go. Recognising that the desktop battle was lost, Scott McNealy turned his full attention to the still lucrative server market. His timing was both fortuitous and impeccable: within months the World Wide Web had left the launch pad and a whole new, rapidly growing server market had been created.
SUN have not had it easy over the last couple of years but with equal measures of good luck and judgement they have transferred their brand. SUN is now the Internet server of choice: its alliance with AOL is ensuring a continuation to both its revenue stream and its reputation.
(First published 16 April 1999)
04-22 – Quantity is not the only answer.
Quantity is not the only answer.
IDC are predicting a shortfall of 600,000 networking professionals across Western Europe by 2002. This figure is based on an expected requirement for 1.6 million network staff, given current trends. Phew. This is scary stuff. It gets worse: if the situation is not rectified, say IDC, we could be in for a European slump as businesses fail to leverage their growing success and existing assets due to inadequate infrastructure. Dark days are ahead, it would appear. Or are they?
There exist a couple of factors which, together, might augur a less bleak future. The first lies in the past: it is something we have always known. Motherhood, if I may. The second is a direct result of providing global connectivity, in which network professionals have already played a significant role.
So to the first point. IDC rightly say that training and development needs are becoming acute. However it is well known in networking circles, as in others, that real experience has far greater value than the classroom learning by rote that is becoming commonplace for vendor-sponsored certification programmes. The ability to answer a multiple choice questionnaire, a networking engineer does not make. Street knowledge and management experience tells us that some individuals, qualified or otherwise, are worth their weight in gold-plated terminators. We know who they are – they come in, they diagnose, they tweak and go away again, leaving us to wonder why so many features of networking equipment remain undocumented. Identify these people within your organisation. Treat them well, pay them well and keep their skills current. Involve them in infrastructure design decisions and strategic rollout programmes. Or leave them in the workshop, but don’t be surprised when your network management policy is jettisoned because people are too busy fighting fires.
The second point is newer and requires a little more thought. There has been, indeed there still is, an explosion in connectivity caused by the convergence and standardisation of communications technologies - we call this internetworking. The internet is bringing with it a raft of new options and opportunities for businesses. For example, applications outsourcing enables smaller businesses to take advantage of enterprise applications with minimal administration overhead. Secondly, telecommuting is happening more slowly than predicted, but it is globalising: it is now possible to outsource services (from managed call centres to point secretarial work) to companies and individuals literally on the other side of the globe. Such outsourcing of both systems and services is implicitly cutting the requirement for local infrastructure, hence the need for engineering support can be reduced, leaving businesses to concentrate on their core offering.
Smart businesses will be worrying about the inevitable future skills shortage in network staff. Training and development programmes are essential, but so is looking after incumbent staff. The smartest businesses of all will be looking to profit from all the opportunities presented by the internet revolution. Good luck to them.
(First published 22 April 1999)
04-22 – The hidden cost of Internet Real Estate
The hidden cost of Internet Real Estate
Wallstreet.com was sold for 1.03 million dollars. What, the company? No, just the name, to a Venezuelan casino. The purchasers don’t think they are crazy, so good luck to them. Across the Atlantic, the premium on a good “.com” name may be seen as worthwhile, but may have some impacts back on the old continent.
Despite the existence of national addresses such as .co.uk and .fr, the .com domain is accepted as the domain of the global enterprise. 90% of words in the English dictionary are now registered as .com domains; the prediction is that “if you want soap, go to soap.com. If you want flights, go to flights.com.” This may or may not be true but it certainly makes sense and seems to be borne out by patterns of Internet usage, hence the high premium on the better terms. The downside is that the majority of .com addresses are registered to US companies. So, if you want flight tickets, you may end up paying an American company for the privilege. European businesses may lose out to US companies purely because of Web serendipity. This may not be a problem for most European companies – the French will, for example, go to “savon.com” rather than “soap.com”. But watch this space – American entrepreneurs will not take long to wake up to the commercial opportunities they are at the moment too busy to cover.
(First published 22 April 1999)
04-22 – Turbulent times for IT stocks
Turbulent times for IT stocks
If we needed any more proof that we are in the midst of a huge wave of technology change, we have just had it. IBM have made a stonking profit, shattering analysts’ predictions. But there’s more – Lucent, another old dinosaur, has done it as well. Compaq and HP are down but SUN is up, Informix is up, Unisys is up. What’s going on? Simple. Companies which are successfully leveraging the Internet, either by providing the technologies to support it or by using it as a business tool, are winning. Companies which fail to tell a coherent eCommerce story, or which fail to exploit the capabilities the Web provides, are losing out. Sure, the world is more complex than this simplistic view might suggest, but even with other factors taken into account, companies big and small are finding it very difficult to buck this trend.
(First published 22 April 1999)
04-26 – Coming soon to your screens ….
Coming soon to your screens ….
Surely the most exciting developments on the Internet at the moment are those relating to multimedia. Well, let’s face it, these developments would be exciting if we could take advantage of them. The possibility of a full surround-sound audiovisual experience, broadcast directly from the Web, is, frankly, non-existent at the moment. Even with a leased line, the closest we can get is a low-quality, stop-start audio stream. Having said that, we all know that the bandwidth will increase and the technology will improve… but even then the chances are that what we want (streamed, high-quality, full-screen video) will remain a distant mirage for some time yet. We do, at least, know that it will come. It is a good job that the technology is not yet ready, because if it were possible right now, it is probable that most of us wouldn’t be quick enough to do anything astounding with it. We have 3-5 years of slack before the immersive multimedia wave hits – will we be ready?
(First published 26 April 1999)
04-26 – Spam, spam, spam, spam, spam
Spam, spam, spam, spam, spam
Spam is back in the news again. Not the special American processed meat, as I’m sure you know, but unsolicited email. How this won the name “spam” is lost in the mists of dog time, but it shares some characteristics with its namesake – instantly recognisable, unavoidable and often without taste. So – whilst Software Warehouse is saying that it probably can’t take Insight to court for spamming the former’s customers (with a dubiously obtained database), the European Parliament has voted against an anti-spamming amendment to the eCommerce bill.
In fact, in a funny way, it is Software Warehouse (or their ISP, Planet Online) that should be in the dock as well as Insight. According to UK Data Protection law, the keeper of personal data is held responsible for its upkeep and security so, in a case like this, they should carry the can. There have been several cases in the past, however, of information being obtained from ISPs, which are a common target for hackers. It is probably unfair to expect any ISP to somehow be able to secure this information when there is a global network of internet junkies conspiring together to look for security holes. The odds are undoubtedly stacked against ISPs.
It is probably also true to recognise that spam is a global problem. The Internet is too complex to be shored up against the distribution of unsolicited information from any far-flung corner of the world-wide web.
Given the current inevitability of spam, however, we should not sit idly by - legislation is still necessary. National and international laws currently lag far behind the technology curve, but efforts should continue to ensure that this gap is closed. Long arm laws, which enable prosecutions across international boundaries, are being implemented in the US and this concept should be extended to our own shores to counter the web’s global nature. Information misuse laws need to be modified and extended to take into account the new possibilities afforded by technology. The biggest problem here is one of timeliness - law-making cycles should permit the development of a flexible, responsive legal framework which keeps its currency against a fast-moving technological background. Otherwise, unscrupulous users of the Internet will be permitted to keep one step ahead of the law, conducting injustices without ever committing a crime.
(First published 26 April 1999)
04-27 – IT Security doesn’t have to be Rocket Science
IT Security doesn’t have to be Rocket Science
It is worth reading Silicon.com’s headline that the UK government is exposed to hackers. Firstly, it says that security firm NTA Monitor found that 31% of .gov.uk email servers were running flawed software. That’s the scare-mongering bit. It goes on to say how NTA Monitor discovered this – by sending emails to the servers to identify which software the servers were running.
It is commonly known in IT security circles that the best way to identify security holes is to use the same techniques as the hackers. Indeed, suites of tools are available which act as “auto-hackers”, running ensembles of simulated hacks to find weaknesses. Some of these tools are available for free, from organisations such as CERT. The obvious question is: can’t the hackers get hold of the tools as well? Of course they can. The principle is that hackers know this stuff anyway, so IT managers should also be informed. This is why IT managers are duty bound to use such facilities and close up any holes they may find; otherwise the hackers will find them first. Illegal access becomes no more difficult than running a few scripts and reading a log file.
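To make the point concrete, here is a minimal sketch, in Python, of the sort of reconnaissance described above; it is emphatically not NTA Monitor’s actual tool. Most mail servers announce the software and version they are running in their greeting banner, so identifying a vulnerable server can be as simple as connecting and reading a single line. The hostname in the usage comment is hypothetical.

# A minimal sketch of mail-server "banner grabbing": the greeting line an
# SMTP server sends on connection usually names the software it is running.
# Not a real auditing tool; the example hostname below is hypothetical.
import socket

def grab_smtp_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Connect to a mail server and return its greeting banner."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(512).decode(errors="replace").strip()

# Usage (hypothetical host):
# print(grab_smtp_banner("mail.example.gov.uk"))

The same one-connection-and-look principle underlies most of the automated scanning suites mentioned above, which simply repeat it across many ports and protocols.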
(First published 27 April 1999)
04-27 – Network Computing has its place
Network Computing has its place
The Network Computer, launched with much hype and aplomb by Larry Ellison four years ago, is widely believed to be dead following only minimal success. What killed it? Although the Windows camp was largely responsible, through products such as Citrix WinFrame and Windows Terminal Server, the advent of web browser-based applications should also take a share of the blame. Or should that be credit? As SUN’s I-Planet announcement demonstrates, the Network Computer or NC is, and always was, a myth.
I-Planet is a software package which runs on a server to enable applications and databases to be accessed through a Web browser. Browsers are becoming established as the standard device interface to the internet or intranet, hence the I-Planet concept is sound. SUN is not the only company to see that the browser-enabled, thin client approach is worth developing: most vendors, including SAP and Peoplesoft, are scurrying to web-enable their applications.
Is this facility any different to the NC architecture? No, of course it is not, in principle, but the NC concept is far too flexible to be limited to a single device type or hardware platform. All browser-enabled devices, from PCs to PDAs, mobile phones to set-top boxes, should be seen as Network Computers. Other architectures have their place, for example thick clients for graphics-intensive tasks, but the thin, browser-based client will be the best approach for a large number of applications. The NC might have died, but its spirit lives on and on.
(First published 27 April 1999)
04-27 – Signs of Internet Immaturity
Signs of Internet Immaturity
Three young, exuberant individuals in the US are landing up in court following a spoof press release concerning the availability of high-speed communications bandwidth. This unfortunate April Fool incident, in which the joke seems to have turned on the jokers, is just one example (there were unfounded stock rumours being posted on message boards last week) of immaturity being seen on the internet. This is not really about the childishness of the people posting the information. We all need a sense of humour. No, this concerns the current state of the internet and the resulting gullibility of its users. Hey, it’s not their fault. The electronic superhighway needs us to be reasonably open to just about anything happening – there are so many advances in technology and capability that we seem required to suspend our disbelief for a good few years yet if we are to benefit from its potential.
Unfortunately, this leaves us open to confidence tricks, both humorous and malicious. So what should we do? We should move forward with open minds, learning as we go to take advantage of this new medium. Should tricksters try to hold our naivety to ransom, they should pay for their actions. But we should allow a little flexibility to deal with the occasions when undergraduate tomfoolery catches us out.
(First published 27 April 1999)
May 1999
05-04 – ECommerce Crisis
ECommerce Crisis
A crisis, it is said, is a problem with no time left to solve it. It may be triggered by external factors, such as losing a job or winning on the horses (and it’s a harder pill to swallow when the cause of our pain is that our wildest hopes have been met), but its causes lie most often in the past and creep up unawares. When dealing with a crisis, it is first necessary to admit fallibility: only then is it possible to start coping with the hand that has been dealt. We are all individual, sayeth Brian, but we are not immortal and we are all subject to the same patterns of behaviour as our peers. Even the stages of coping are predictable, to an extent – consider those identified by Elisabeth Kübler-Ross, for people coming to terms with terminal illness – denial, anger, bargaining, depression and acceptance. These stages have been applied, in one form or another, to everything from dealing with herpes to coping with mediocre golf. It may be that the stages are not exact, but then this is not an exact science. The stages are a gross simplification of real life, but they are still recognisable to the majority of those who have lived through “difficult times”.
Why am I saying this? Well, let’s look for some parallels. Despite the powerful BPR rhetoric of the past ten years, do we look about ourselves and find a brave new world of mature, evolving, learning enterprises? Nope. It is fair to say that, although the headcounts are reduced and we are more reliant on bought-in services than before, not much has changed. To be fair, huge efforts are often made, for example reorganising corporate structures, merging with the competition and investing in call centres. But generally, these investments occur because the decision makers’ hands are forced by external factors such as increased competition and Y2K. It is no secret that the biggest blocker to business change is “resistance to change,” and the forming, storming, norming and performing mantra of organisational change bears a remarkable resemblance to Elisabeth Kübler-Ross’s five stages.
Whatever we want to call it, the Internet, the Web, eCommerce is just one more external cause which is going to trigger business crisis. Fundamental change we will see - it is already happening in the States and will come over here as inevitably as disco dancing and the skateboard. This is not just another stateside fashion, however – this is unavoidable change on a global scale. There is a fundamental difference between this and previous industry-shaking events. The first off the blocks, if correctly prepared, is appearing to win the race. Time is becoming crucial: companies which procrastinate are losing market share. This well-documented trend is an omen for us all. What is also interesting is the other side of the coin, where companies have been victims of their own online success – for example the computer systems of eTrade, the online brokerage, suffering a number of outages due to the volume of people accessing the site. eTrade has survived (with a mere 25 thousand complaints) but other such companies have not been so lucky.
Admittedly we are not in crisis yet, not quite. The percentage of UK business conducted online is at 0.5%, so there is no need for unmitigated panic. The trends and supporting factors suggest that in ten years’ time a large proportion of business will be conducted online. We can wait until external factors force our hands, or we can start a process now, remembering that the stages we will go through are largely predetermined. Sure, we need our denial phase. It is a time during which we assimilate the facts, assess the consequences and come to terms with the fact that nothing will ever be the same. Better that we start the process now and still leave time to reorient our businesses and profit, yes benefit, from the eCommerce revolution. We cannot make the revolution happen, and neither can we predict or plan for every eventuality. However, by forcing our own hands and coping with the consequences, we can at least orient ourselves so we are facing in the right direction when the time comes, for example by building business relationships with possible partners, or implementing the open, secure, scalable architectures which will be necessary for online access to our systems. Surely this is preferable to doing nothing in the blind hope that nothing will come of it, or worse, seeing the policy of wait and see become wait and watch others succeed.
(First published 4 May 1999)
05-10 – Agnostic to what’s in the box
Agnostic to what’s in the box
An agnostic is a person who does not know if a god or gods exist. Hence, as Microsoft continues its march into the hinterlands of consumer devices, it has a battle on its hands. Microsoft’s recent alliance with AT&T signals its “bullishness” (a word which the company is very attached to) about its entry into the internet set top box market. As discussed in Friday’s analysis, its rationale is to put Windows CE onto 5 million of these devices. The fray is already joined by Sun with Java and Symbian with EPOC – AT&T’s position is now unclear. What a battle there will be. But it will pass largely unnoticed by the mass market.
Microsoft’s Windows 95 announcement was a marketing coup which has never been bettered, winning the hearts and minds of millions of PC users. Can you imagine being able to announce a new mobile phone or laser printer in such a way? No. Device technologies don’t lend themselves to such stunts, as their users are largely oblivious to what goes on inside them. Internet set top boxes will enable access to the web without needing consumers to back one technology or another. In any case, it is likely that the choice of set top boxes will largely be decided by the service providers - the ISPs, media companies and cable companies. This is a fact already being exploited by Microsoft, for example through their Telewest alliance announced today. Indeed, as discussed today in “IT infrastructure for free”, the chances are the boxes will be given away or incorporated in other devices such as the TV set. IT vendors may have god-like status in technological circles, but in the greater, unknowing world they are less likely to be recognised as such. Microsoft’s battle is not so much against the other giants in the industry; it is a fight for brand recognition in a mass market of uninterested consumers.
(First published 10 May 1999)
05-10 – IT Infrastructure for free
IT Infrastructure for free
So, BSkyB are to give away satellite receivers for a song (or a forty pound connection charge). This announcement may have come as a bit of a shock to those poor punters who have just shelled out hundreds of pounds. Fortunately for them, the broadcaster is to freeze their subscriptions to make up for it. In any case, in this converging market, the news should not come as a surprise.
Let’s think about this. Again and again, we have examples of technology and services being given away for free. Look at Microsoft’s Internet Explorer and the now-ubiquitous mobile phone (which has been consistently offered at several hundred pounds lower than its cost of manufacture). Reasons may vary – in all cases the suppliers are looking to win market share which they can exploit later or leverage for other products, in some cases costs are recuperated, at least partially, through subsequent subscription costs. A company in the US is even giving away PCs for free, but considering that these can now be built for the cost of a mobile phone, this approach should not be unexpected.
Given that this trend is set to continue, we are likely to see ever more imaginative approaches to making money out of the internet, both to cover costs and to make a tidy profit. Revenue will come from advertising and customer profiling (click trends), sure. Money is also to be made by hosting the channels through which “real” goods and services can be purchased, such as travel bookings and car servicing. Tomorrow’s entrepreneurs will have to think of ever more esoteric ways of extending and exploiting this medium.
In the meantime, both businesses and consumers are likely to profit from the situation. It might be wise for anyone about to make a major investment in IT to take stock of what is likely to be made available for free, such as email access or extranet services. At least it should enable us to harden our negotiating stance.
(First published 10 May 1999)
05-10 – The demise of the call centre
The demise of the call centre
Over the past ten years call centres have boomed, and for good reason. Cost savings have been made by centralising customer service operations, optimising business processes and locating staff in areas of cheaper infrastructure and employment costs. How sad, then, that the internet has come along and wrecked all that good work.
How can this be? Call centres have a variety of functions, but one of their chief objectives is to manage the interface between customers and computer systems. Want to book a flight? Call our travel centre and we will type your details into our system and book it for you. Want to check your balance? Give us a call. But wait. What if you could also access such functions directly, through a Web browser, without being held in a queue and without the need for someone to take the call? Interested? Then you will understand why the demise of the call centre has been predicted.
Call centres are not dead, or even dying. However the need for call centres to take care of mundane, routine enquiries is rapidly going away. As Tom Black of the Smith Group says, this trend will accelerate with the arrival of digital, interactive television and the set top box. Call centres of the future are likely to be smaller and more focused on sales activities and proactive customer support, integrated with online services. Call centre strategists should consider provision of customer services using all access devices, not just the telephone.
(First published 10 May 1999)
05-11 – A Safety Basis for Y2K
A Safety Basis for Y2K
So, the Channel Tunnel is going to shut on New Year’s Eve 1999. This has been decided following concerns about the interfaces between Eurotunnel systems and those of the National Grid and their equivalent in France.
Developers of Safety Critical Systems are a breed apart, and they have to be. These are the people that build the systems which run our trains and control nuclear power stations. The underlying rule seems to be “avoid grounds for litigation,” which is a rather crude way of summarising what is a complex systems development field. There is very little room for error – the blasé, “bugs are inevitable” attitude which appears to pervade the rest of the IT industry is replaced by huge attention to detail at every stage of the development process. During design, systems are considered not just in terms of when things go right, but also in terms of effects when things go wrong. An interesting fact, often overlooked, is that many systems have a safety critical element and would benefit from adopting some of these considerations.
If the Eurotunnel systems experts have decided that the tunnel should close, they will have done so following a detailed safety assessment of the risks and their knock-on effects. Most Y2K work is now complete or is being completed but much of this work is based on the risk to the systems involved. Managers of systems which are not considered “safety critical” would do well, even at this late stage, to consider the ramifications of their systems going wrong before they blithely let them run over into the new millennium.
(First published 11 May 1999)
05-11 – Arranged Marriages to rule eSpace
Arranged Marriages to rule eSpace
There has been a proliferation of recent announcements about company tie-ups, for example Microsoft and AT&T, IBM and C&W, and HP, SAP and Qwest. As can be seen, these alliances are cross-sector, normally involving a hardware company, a software company and a communications company. As the alliances have been formed to take advantage of the expected trends in business use of technology, they are a fair indication of the vendors’ opinions on what is likely to happen over the next couple of years.
There are several “givens” here. First, that a significant amount of business will be conducted using the internet as a medium. This will be both business-to-consumer, via a Web browser, and business-to-business, for example using email as a transport for inter-business information such as purchase requests and invoices. Second, that computer systems in different companies will communicate with each other directly, avoiding the need for human interaction and speeding up the supply chain.
The question is, what are we Europeans doing about it? Recent surveys, such as the poll by PhoCusWright of major European airlines, indicate that we are behind the curve. Now we have a choice here. Either the future of business is on the Web or it is not. We at Bloor believe it is, and there is substantial evidence that businesses which do not embrace new technologies will be at a disadvantage, in terms of both efficiency and cost. In the UK, consider LIFFE, which has only been able to turn around its fortunes by endorsing what it previously rejected. LIFFE’s near-demise was blamed on complacency; others will claim ignorance. In either case, though, for some businesses that do not take on board the internet (together with the new opportunities it offers), the result will be the same.
I know, I know. Here we are again, harping on about eCommerce. A question for you then – are we exaggerating the situation? Or are you well aware of the threats and opportunities posed by the Web? How are you planning to take advantage of these “new media”? We’d love to hear your feedback.
(First published 11 May 1999)
05-11 – Time is running out for Iridium
Time is running out for Iridium
A few weeks ago I wrote about the uncertain fortunes of Iridium, the satellite phone company. Trouble is, time is running out. The satellites only have a space life of five years, following which they must be replaced: Iridium’s business model is based on making enough money to justify the continuous relaunch of new orbiters to replace the old.
Iridium need to identify new markets for their services, as mentioned before. But if they take too much time doing this, they will generate insufficient funds to sustain their phenomenally expensive infrastructure. Leave it too long and the whole lot might come tumbling down, in more ways than one.
(First published 11 May 1999)
05-12 – Gigabit over copper? Never say never!
Gigabit over copper? Never say never!
Broadcom has just announced a new chip which, it is said, enables Ethernet data traffic on copper (and that’s the clever part) at 1,000 megabits, or one gigabit, per second. A number of people in the industry said it couldn’t be done, but it has been. It was only a matter of time, really – just as modem speeds have far exceeded the 300 bits per second “maximum” originally mooted, so the potential of copper wire has been pushed way beyond initial expectations. This is good news for all those organisations which have flood-wired their offices with Cat. 5 Unshielded Twisted Pair – their investment (and their topology) is protected.
The question is – do we still need fibre? Probably not to the desktop, even for high-bandwidth requirements such as streamed video, and maybe not in the machine room either. For example, Fibre Channel Arbitrated Loop boasts transmission speeds of around 800Mbits per second and is being pushed as the plumbing for the storage area network, a topology which separates storage devices onto their own token-based network. Tokens can slow things down, so a number of storage manufacturers are proposing switched FCAL, which enables better use of the bandwidth. But why bother with fibre at all, when the same job can be done over existing copper, only faster?
One thing is for sure. The IT manager’s job can be made easier if network architecture design is based around partitioning for performance, as opposed to partitioning by device type. If all devices share the same network protocol, for example using the Ethernet 10/100/1000 family, then this becomes a real possibility.
(First published 12 May 1999)
05-12 – Linux on trial
Linux on trial
What is the Linux phenomenon? Repeated announcements (including two on Silicon.com today) show more and more companies, vendors and end users are adopting this freeware package. What are the reasons? And isn’t this mass migration to a seemingly unsupported piece of software a little dangerous?
Different camps have different reasons for adopting Linux. For some, particularly IT-literate end-users, Linux is a free training bed – it runs Unix commands and comes with free compilers and a host of other software that make it an ideal entry point to IT. Vendor adoption is two-fold – some are “listening to their customers’ needs,” as they say, but others are porting to Linux merely as a marketing ploy – after all, porting from Unix to Linux is relatively straightforward. Businesses really are using Linux – a number have built Linux-based systems as mail servers or as a cheap entry to the Web.
This is where things are getting interesting. A vast number of people are taking a look at Linux for their non-mission-critical needs, and as they do so they are discovering that the rumours about Linux stability are true. Common sense is coming into play; for example, how much support do end-users really get for commercial operating systems? As with the X-Files, Linux users note, the support is out there. Techies from the Unix world (who, admittedly, are a little biased) note that a number of essential facilities on that platform, such as X Windows, were also freeware – that is, public domain software, which is entirely different to freeware. Honest. In other words, Linux is becoming acceptable. And as this happens people are starting to trust Linux for their non-trivial applications. Some of this trend is unavoidable, as Web interfaces and email servers move from being “windows to the world” to being eCommerce gateways. But IT managers are becoming more and more comfortable with the cheeky chappie operating system that was downloaded by the placement student.
There are still plenty of things missing from the Linux jigsaw – in particular, there is still only limited application support and multiprocessor capability still has some problems. These facilities will come, and in the meantime there are plenty of other opportunities for IT managers to see for themselves the benefits of running a secure, stable, free operating system.
(First published 12 May 1999)
05-13 – £50M for NHS IT – Plan wisely, spend wisely
£50M for NHS IT – Plan wisely, spend wisely
It is sometimes too easy to be critical about government spending announcements, so let’s not be for a moment. The money, if it is new, should be welcomed. Trouble is (and I’m still not being critical), IT projects tend to expand to fill the amount of money allocated to them, and hence should be prioritised at a higher level. Whilst it is difficult to see how the critics can say “it is not enough,” it is also difficult at this stage to know whether it is adequate.
Rather than considering spend, at this stage, the NHS should be defining IT strategy at the highest level. This will enable the appropriate spending decisions to be made based on real priorities. The strategic objectives should share the same values as the NHS, namely to provide an integrated, national, accessible infrastructure which makes the most of modern technologies such as the Internet. In fact, it would be quite stunning if such a strategy did not exist already.
(First published 13 May 1999)
05-13 – Cable, digital, set top box. Cable, digital, set top box
Cable, digital, set top box. Cable, digital, set top box
This is becoming a bit of a mantra. More digital TV alliances have been made over the last day, this time between Microsoft and C&W, and AOL with a whole bunch of hardware and digital TV providers. At the risk of stating the obvious, things are hotting up.
Real soon now, as the saying goes, digital TV is going to get popular and all those predictions analysts have been making for years will come true. At least, that’s the assumption being made by the giants and the little people amongst the vendors (and they’re putting billions of dollars on this horse). The Internet really, finally, will leave the domain of the propellerhead and enter the mass market. Your grandmother really will be sending email and checking bus times in the browser.
Fortunately, contrary to previous expectations, it seems (as reported on silicon.com) that Europe is catching up with the US in terms of internet access. At least, business executives over here are using the Web nearly as much as over there. European users result in European internet entrepreneurs. So maybe, just maybe, when the dam does break, we won’t find ourselves entirely dependent on stateside services when we should quite reasonably be depending on our own.
(First published 13 May 1999)
05-13 – See the future – read my palm
See the future – read my palm
The announcement by 3Com at Networld+Interop, concerning the use of PalmPilots with a wireless Internet connection to access network management information, just might be the start of something really big. That’s really big, and not just for network management staff.
Sure, such facilities (and 3Com do not have the monopoly on this) will be of great benefit to network operators, particularly those working on distributed networks across multiple buildings. Currently these poor individuals have to rely on a remote terminal or, worse, have to wait until they get back to the control room to update the records, only to find that a second fault has occurred back where they have just come from. Maybe I exaggerate – pagers are in common use these days, but a handheld wireless terminal would be a real boon.
3Com have expertise in both handheld devices and wireless networking, and are using both to enable a whole new set of applications, the potential of which is exciting to say the least. It is not hard to think of markets other than network management for a wireless, Internet-enabled PDA, from warehousing to shopping – markets which 3Com, with other vendors hot on their heels, will be keen to exploit.
(First published 13 May 1999)
05-14 – Adrenalin keeping the online hermits going
Adrenalin keeping the online hermits going
Apparently, the four individuals living “by the net alone” for four days are having the time of their lives. Some are being more successful than others at catering for their basic needs (one poor chap is still clothed in pyjamas and socks), but all are enjoying themselves.
What is the internet about? It is about getting access to vast quantities of information, a lot of it obsolete, irrelevant or wrong. It is about being bombarded by emails from friends, foes and people you have never heard of. It is about goods and services being provided online, with varying levels of service competence. Most of all, as all these examples attest, it is quite simply a new communications medium – and we all love to communicate.
(First published 14 May 1999)
05-14 – Microsoft Open Source is missing the point
Microsoft Open Source is missing the point
It is reported that Microsoft is still considering opening its source code, ostensibly as a response to the “threat” from Linux. However, there are some gaping differences between the two camps’ concepts of open source.
Steve Ballmer, Microsoft president, gave a very clear perspective on Microsoft’s position – that some parts of the Microsoft source code could be published or licensed to enable applications to be built more quickly. In other words, if you can see what the code is doing, you can understand it better and use it more wisely.
The approach to development of Linux could not be more different. Here, the Linux source is released in its entirety for development, so that anyone who wants to modify the code can do so. Changes and extensions to the code are fed back and accepted (or otherwise) in a subsequent release. In other words, there is an open software development process in which the ability to read the code is only one element.
The thought of allowing others to actually modify Microsoft code still seems a little alien to Mr Ballmer, who is promoting only a very restricted definition of the “open” concept. Whilst the Microsoft move might seem relevant to the strategists, it is unlikely to cut much ice with the developers on the ground.
(First published 14 May 1999)
05-14 – New laws for the online community
New laws for the online community
An interesting element of the Richard Tomlinson situation is the perception that the internet is somehow beyond the law. There are two factors to this. The first is that the internet, being global, spans international boundaries and hence leaves room for activities which are illegal in some countries to be conducted remotely from other countries. This is not dissimilar to the current situation in online gambling. The second factor is that the new facilities provided by the internet are causing new situations which, though immoral or antisocial in theory, do not have any corresponding legislation to make them illegal in practice.
The one thing we can say is that this situation will change. Cases have already been successfully brought under the terms of the “long arm” laws being implemented in some US states, particularly in relation to copyright infringement. The UK government, amongst others it is assumed, has noted the need for international legislation to restrict the misuse of the internet. Of course, the situation is still in a state of flux but things are firming up – it is now possible to see, at least, that global agreements are required to enable some national laws to be enacted across international boundaries.
Without taking a view on Richard Tomlinson’s position, it is possible to see that he has been exploiting weaknesses in international law exposed by the internet. His case is high profile – there are plenty of others. The sooner these weaknesses have been dealt with, taking into account the international conventions to protect the individual and with the necessary agreement of the international community as a whole, the better.
(First published 14 May 1999)
05-28 – Bizness as usual for Microsoft
Bizness as usual for Microsoft
Yesterday in Paris, Microsoft relaunched its eCommerce strategy under the banner “eCommerce for all”. Apart from presenting the strategy from a European perspective, nobody expected much difference from the first launch, two months ago in San Francisco. However, things move awfully fast in this business, and it was clear that the strategy was significantly firmer than when first announced.
The building blocks were the same – Microsoft Commerce Server, BizTalk server and MSN. Commerce Server is “websites with transaction management” – an essential element of setting up an eBusiness site. BizTalk server is the XML interpreter and integration layer, to enable business applications to communicate across the Web. MSN, back in its third (or is that fourth) incarnation, is now an eCommerce portal.
What has changed? For a start, the marketing may be the same but the technology appears a lot more advanced. It is possible to start believing some of the dates in the roadmap - currently product betas are expected in the summer. Real issues are being addressed, like how Exchange and BizTalk Server will work together, for example to deal with eCommerce transactions via email. There was also a small shift of emphasis – Microsoft will be “stewarding,” rather than controlling, the standards effort required to define the XML schemas for vertical markets such as Finance, Retail and Healthcare. This gentler approach is to be welcomed – whilst it is recognised that big companies such as Microsoft and SUN have a major role to play in setting standards, it is a relief to hear that they are prepared to give up those standards to more appropriate bodies, when the time is right. Let’s face it, Microsoft’s reputation in this arena is less than pristine.
Still disappointing were the rollout plans for MSN commerce in Europe. Expected early next year, this leaves us Europeans significantly behind if we want to take advantage of MSN’s services. Having said that, maybe it’s not such a bad thing – it might leave time for a non-US company to step into the breach.
(First published 28 May 1999)
05-28 – IBM goes in Deep
IBM goes in Deep
On the 24th May, IBM announced it was spending $29 million on an initiative it is calling the Deep Computing Institute. Deep computing is the flipside of pervasive computing – just as we expect that computer-ettes will be embedded in everything that can take a power cable (and probably a few things that can’t), so will there be the megacomputers which push the limits of technology to solve computational problems currently way beyond our reach.
As with most research initiatives in the past, there is unlikely to be one big benefit. Rather, there will be lots of smaller benefits – new technologies which can be built into existing models, new standards, approaches and architectures. Either way, IBM should be applauded for investing in programmes which will be of benefit to the whole IT community, not just IBM.
(First published 28 May 1999)
05-28 – TANSTAAFL, but we'll throw in the starter
TANSTAAFL, but we’ll throw in the starter
The article on Silicon.com about operating systems being given away for free has an element of truth. The fact is, all companies (IT and otherwise) dealing with technology are in an experimental phase. Given the fact that everything has a cost, the question to be answered is “what should be given for free to gain subscribers, and what should be paid for?” There will be different answers for different sectors. In retail, information and value-add services will probably be the major “free” differentiators. In the mobile world, provision of handsets has always been at an initial loss to the telco – this is a model being adopted by BSkyB and OnDigital.
In the software and services world, things will be different again. At the moment nobody is really sure what will be given for free and what will be paid for. Maybe operating systems will be given away but like everything else they will have to be paid for somehow, perhaps through paid subscriptions to the services they enable. It is likely that there will never be one answer to this. Rather, business models will be chosen on a tactical basis depending on what people are prepared to pay for and what the competition is doing.
(First published 28 May 1999)
June 1999
06-07 – Baan’s battles not over yet
Baan’s battles not over yet
Baan hasn’t made a profit for a while but battles on, determined to be one of next year’s success stories (it seems to have written off this year) with a new approach and new product lines. Baan is keen to shed the image of being an ERP company and is moving into e-commerce, focusing on the business to business sector. In our opinion this is a wise move - ERP as a market-leading technology is dead, becoming just one of many applications which need to be linked and co-ordinated via the web to optimise and automate business relationships and transactions. The only mistake seems to be that Baan still believe in lock-in. “The majority [of organisations] … preferred to buy all its products from a single vendor,” claimed Baan. This said, the “majority” have no choice but to run a variety of products from different vendors. It is likely that companies which promote integration and interoperability, such as Concur, will steal a march on single-solution vendors. Baan had better watch its flanks.
(First published 7 June 1999)
06-07 – Microsoft at the Perly Gates
Microsoft at the Perly Gates
So, Microsoft are to “extend Perl to take advantage of Windows capabilities”. Well, well. We saw it with Java, we saw it with HTML – in both cases, the company were quite rightly accused of taking an open standard and extending it to lock users into the Windows platform. It may be that their approach to Perl is different – goodness knows that Windows lacks a scripting language of any merit, so the logic of porting Perl to Windows is clear.
Can we give Microsoft the benefit of the doubt? Put it this way. If Microsoft are to be trusted with yet another standard, namely XML, then they need to have quit their old “standardisation” tricks. Perl can be the test case: we watch their every move with eager anticipation.
(First published 7 June 1999)
06-07 – The science of appliances
The science of appliances
You heard it here first - the up-and-coming buzzword is “appliance”. So what is it? A variety of sources seem to claim the term for their own – even in their relatively closed community, storage manufacturers differ in the use of the term, ranging from single-function filer to fully-fledged messaging server. Recent estimates suggest that sales of “information appliances” will outstrip PC sales by 2003. Pretty interesting, considering that a definition for the term is, to say the least, vague.
To better understand what is going on, it is worth looking from the two viewpoints that manufacturers seem to have adopted. First off are the systems manufacturers, who are used to seeing large metal cabinets which often contain whirring motors and fans. From this perspective, the appliance is seen to possess the characteristics of a washing machine – large, white, floor-mounted, single function, easy to program. The second viewpoint is that of the handheld device manufacturer. To these companies, appliances are very small, ergonomic and multifunctional, looking much like a mobile phone.
At the end of the day, it’s all semantics – one person’s appliance is another’s device. All so-called appliances of the future should possess certain characteristics to distinguish them from the systems of the past. They should be as low-maintenance as a fridge, as appropriately designed as a magimix, as easy to use as a phone and as easy to replace as a digital watch. We shouldn’t care what’s going on inside the box, what operating system is being run or what the processor bus speed is. Wishful thinking? Well, we’ll just have to see.
(First published 7 June 1999)
06-09 – Amazon’s laws of the Jungle
Amazon’s laws of the Jungle
Amazon is getting bullish. No, okay, it has always been bullish. Amazon is getting very big and bullish. The still-to-make-a-profit company has made a spate of investments recently, and not just in its traditional over-the-web, commodity markets. It has also commenced selling downloadable music and it has even set up an auction site. It is putting itself in direct competition with the big guns both online and offline, and from eBay to Wal-Mart the competition is rising to the threat. Not content with local battles, Amazon.co.uk has put itself head to head with W.H.Smith Online, with both now offering 50% off books from their own bestseller lists.
Amazon has grown fast and clearly it must act fast if it wants to stay ahead in the game. It is a big game, however, and there is a danger that Amazon spreads itself too thin or makes too many enemies. In the litigation-friendly US, we only have to look at what is currently happening to Microsoft to know how competition between retailers can be a battle in both the storefront and the courtroom. In the words of Sun Tzu, “do not fight a war if you have to fight on two or more fronts”. In Amazon’s case, the number of fronts is mounting and there are more than two dimensions to the fight.
(First published 9 June 1999)
06-09 – Information Overload is a Distraction
Information Overload is a Distraction
A recent survey by Pitney Bowes found that the majority of workers are interrupted by communications technology every ten minutes. For “communications technology” read voice mail, email, telephones and mobiles, pagers and (probably) the beep from the calendar manager. This survey confirms the results of other surveys, such as the Avery survey which found that information overload through use of these new media was preventing the job from getting done.
We are, sadly, still at the stage of being driven by these technologies as opposed to driving them. It has been generally accepted that communications technologies hold the key to efficient, knowledge sharing organisations but our use of these technologies still borders on the primitive, as like Pavlov’s dogs we respond to the sound of a bell by switching off from what we were doing. There will be a huge market in the future for intelligent information-sharing and communications facilities, and the sooner it comes, the better.
(First published 9 June 1999)
06-09 – Office 2000 is out – but is it in?
Office 2000 is out – but is it in?
Microsoft has launched Office 2000; 15 million copies have been sold already and it’s still only 1999. The product reviews are excellent, thus far – the individual applications that make up the O2K family are being judged, once again, as the best in their field. Microsoft appear to have it all sewn up. Good news for Microsoft? Well, that depends on your perspective.
First off – Office 2000 continues a line of partially integrated productivity applications based on Microsoft Office 4.2. When Office 4.2 came out in 1993, magazines ran comparisons with WordPerfect Office and Lotus Smartsuite, usually drawing the conclusion that any of the above would do the job, but Microsoft had the edge. Organisations bought office suites by the truckload, with the brand usually dependent on the incumbent word processor. New releases of suites from the different suppliers brought with them new features, enhancements and bug fixes, each time making the new versions a more attractive purchase than the old - when the suite wars reached a crescendo, organisations “rationalised” to Microsoft and it was all over bar the shouting. The point is this. In 1994 we had a very capable set of office applications which gave us the facilities to do 95% of the job (bar, admittedly, the functionality now available in Outlook). It is now five years later and, whatever the functionality now available, it is questionable whether we need it at all. A word processor is, was and always has been a word processor.
It is clear that Microsoft recognise that if they want to ship O2K, they need a new spin. As stated on Silicon.com, “the software is intended to promote a ‘web work style’ for ‘knowledge workers’ who need to share information.” Reading between the lines, and looking at Microsoft’s other announcements yesterday, the plan is to sell Office 2000 as an element of facilities enabling collaborative working, for example document sharing. Fair enough – aside from the (still perceived as niche) document management market, this is an area which remains largely untapped.
Microsoft are guaranteed a packet of O2K sales, at least in the short term. But in the longer term the jury is still out. As a purchaser, if you see Office 2000 as an application set, the question is, do you need to upgrade from your existing, functionally rich suite? And if the suite is to be used as part of a collaborative working environment, is your organisation ready to exploit the potential of your “knowledge workers”? Aye, there’s the rub. Microsoft are stepping into new territory with Office 2000, and it is anyone’s guess what the future holds for the suite.
(First published 9 June 1999)
06-12 – Commerce Electronique? Ja, bitte!
Commerce Electronique? Ja, bitte!
We anglophones might have a hard time with this, but it’s a fact. English is not the most spoken language on the planet. And, despite any impressions we may have to the contrary, it won’t be the most used language on the Internet either.
Let’s face it, we’ve seen this before. There was a day when we thought that all software in the world was written in English and, besides, anyone who was IT-literate was probably bright enough to speak two languages. When we went to export our products to the continent, to our chagrin we found that the potential user base would much prefer the software in the local language, and that competitor products already existed, written by our overseas friends for their own local market. Grudgingly, we had to admit that IT was not a UK/US phenomenon. And neither, shockingly enough, is the Internet.
By 2005, it is estimated, 57% of all Internet sites will be in a language other than English. Unlike currency, where convergence seems to be the trend, language use is diverging. This is very interesting. Yes, there is a global pool of customers out there, but they are expecting to be communicated with on their own terms (they are the customer, after all). It is likely that successful sites will be multilingual – indeed, if you’ve looked at sites in other countries, you may have noticed that a large number of sites exist in at least two languages. UK companies wishing to exploit this huge market of consumers, take note.
(First published 12 June 1999)
06-12 – Sony gets Music Shops Online
Sony gets Music Shops Online
In the prediction game, it is reassuring to see some projections come true. The effect of the Internet on retail, for example, has been a subject of much speculation: in lunchtime discussions, most will have suggestions as to what might work, but few if any have the monopoly on the future.
Sony’s announcement today is a case in point. In association with Red Dot, Sony are to distribute their back catalogue to high street retailers via a dedicated ATM network. Customers will be able to select an album and have it pressed while they wait. For customers this is excellent news, as retailers will be able to carry a much broader range of albums; for retailers and suppliers, the future is equally bright because neither will have to manufacture or carry unnecessary stock. For these reasons, it is exceedingly likely that other media firms, such as Polygram, will follow suit.
What will the music retailer become? Well, we are back to speculation again, but it is likely that the physical store-front will stick around for a few years yet. Music retailers will still be dependent on getting the punters through the door before they can make a sale, and in the absence of pure product the focus will likely move to added value services and branding. Megastores with coffee bars, for example, are already in a position to serve as trendy places where customers can gather to discuss and possibly buy what’s on offer. We expect to see more of this but what do you think? Let’s meet up at Virgin, do lunch and talk about it.
(First published 12 June 1999)
06-12 – The Internet is going, to Worms that is
The Internet is going, to Worms that is
We might still be waiting for Cyberspace, but as far as electronic phages are concerned, Science Fiction has well and truly become hard fact. William Gibson was credited for the former but the concept of viruses and worms first reached popular fiction ten years earlier, in the novel “The Shockwave Rider” by John Brunner. Quote: “It could take days to kill a worm like that, and sometimes weeks”. This situation has been all too apparent recently, starting with Melissa which has spearheaded a whole new wave of viruses such as the most recent (and the most damaging), Worm.ExploreZip.
Viruses have had several forms since their early days. Most commonly, they used to “infect” executables and would propagate themselves when the executables were run (as well as offloading their ‘payload’, with whatever consequences that might have had). Recent examples have come as Word or Excel macros, and now we have a new breed which exploits weaknesses in our email. There will be software patches and virus recognition updates, all well and good. Or is it?
The problem lies in prediction. Solutions to viruses come after the event, and in the case of Worm.ExploreZip, it is likely that several weeks will pass before it is eradicated. How does it work? It appears as an email attachment, and if it is run, it accesses your Exchange or Outlook address book to forward itself to anyone it can find. Was this problem predictable? No… well, possibly, someone just might have been able to work out a scenario such as this, preferably before the hackers did.
This isn’t a Microsoft-bash here. It is quite likely that similar problems exist in Lotus and Netscape products, to name but two. Users of non-Microsoft messaging, for once, can be thankful that they weren’t using the de facto products. However, lessons should be learned for future generations of all products which use the Internet as a communications medium. For example, as applications providers XML-enable their software, they should be asking the question “What’s the worst possible thing that could happen to someone using my software?” Risk scenarios should be developed, evaluated and countered. We can be sure that, if they’re not doing this, plenty of others will be. Nobody’s directly to blame - to quote John Brunner, “The medium is the mess-up.” All the same, vendors should be doing everything they can now, rather than leaving us overdependent on antivirus companies who have no choice but to provide solutions to problems only after they have occurred.
(First published 12 June 1999)
July 1999
07-01 – Linux fails at the OS Corral. So What?
Linux fails at the OS Corral. So What?
Recent benchmarks have shown Linux running less quickly than Windows NT. This may seem significant, but it is not. Linux is open source, developed lovingly by the doves of the IT world. It is a product, it might be argued, and it does pose significant competition to comparable products such as Windows NT. It does not, however, behave as a product. It does not stand up to cost-benefit analysis, because it does not have a financial cost. Different models must be applied.
Die-hards of the Linux community would say that Linux is successful because of the huge numbers of idealistic hackers who are prepared to make something beautiful. They’re wrong. Linux is moving mainstream because the vendors see it as an opportunity to make money. In any business, there are some things that are given away for free and others which are paid for. With the technological advances that are currently being made, new business models have been invented which take advantage of this. But it was ever thus. The huge advantage of Linux to vendors is that it is already free – vendors can give away something for nothing, at minimal cost to themselves.
Interestingly, it was Microsoft that demonstrated how well the “give-it-away” model could work, with its free distribution of Internet Explorer giving it the lion’s share of the browser market from a standing start. Explorer is still free, and will probably remain so – a small, calculated cost by the Microsoft camp. Did anyone run performance benchmarks comparing Microsoft and Netscape? Yes. Did anyone care about the results? No. Businesses were more interested in protecting their existing investment, getting something for nothing or trying to avoid lock-in.
The mistake we can make with open source is to think that one day it will have to be paid for. Some things are free because they oil the wheels. Service in a retailer, local newspapers, meals on aeroplanes are free at source, with their real costs being factored into other goods or services. Open source is an enabler, as it encourages the broad acceptance of technology, with all the money-generating spin-off opportunities that it may cause. And this is why open source will remain.
(First published 1 July 1999)
07-01 – More e-Baa than e-Business
More e-Baa than e-Business
A group of farmers in Wales are setting up Direct Welsh Lamb, a web site which sells lamb direct to the consumer. Whatever your philosophy on food, you will never see a clearer example of how eCommerce stands to benefit both producer and consumer.
Direct Welsh Lamb was set up as a direct result of the farmers being ever more squeezed by the corporate world of supermarkets. Rather than go out of business, the farmers looked for alternative ways of selling and hit on the Internet. There are several factors which may prevent the success of this venture, not least that the company itself may not be able to cope with the demand that it creates, but where it goes many other businesses are following. Interestingly, research and experience is showing that the “personal customer experience” is far greater than that of traditional shopping. In this case, for example, consumers are in direct, daily (if required) contact with the producers of the goods. There are no middlemen and so it is difficult to sidestep blame; also, the competition is only a click away – responsibility heightens and hence so does quality of service.
Supermarkets will continue to have a role, but this role is set to change dramatically over the next ten years. Once the channel of communication has been re-opened between producer and consumer, it will prove very difficult to close.
(First published 1 July 1999)
07-01 – The sad truth about ADSL
The sad truth about ADSL
If the rumours are true, BT is on the point of rolling out ADSL services in the UK. ADSL presents both a problem and an opportunity to BT, the company which pretty much “owns” the link between the local exchange and the household. BT has come under fire in the past for not going the (literal) extra mile for its consumer customers: suitable pricing models for home use of ISDN, for example, were a case of too little, too late. ADSL provides a new opportunity for BT to demonstrate that it can provide cost-effective, high-bandwidth services to SoHo and consumer alike.
Issues remain, of course. Firstly, BT are unlikely to be in a position to manage a national migration to ADSL in a short timeframe. For a start, there is an infrastructure rollout cost for BT, to enable its local exchanges. Also, and maybe most importantly, there is a benefit tradeoff. ADSL is only appropriate for customers requiring high-bandwidth digital reception – this translates to internet users prepared to pay the cost of the access device and the subscription. Although we all think fast access to the internet would be a good thing, how much are we really prepared to pay for access from the home? Until there is a critical mass of customers, BT will be loath to roll out the service in a particular area.
Secondly, there is the bottleneck issue. The internet acts a lot like a hard disk and controller – it has seek time, access speed and throughput as its major characteristics. Surfers can employ ADSL to increase local throughput but can do little about access speed, for example to a poorly connected site. Seek time is still largely down to knowing what we are looking for or how to find it. When designing a computer system, a lot of effort is made to “balance the bottleneck” – that is, to ensure all components work together so that no one component is slowing down all the others or is underutilised. The internet still has a long way to go before all of its bottlenecks are balanced.
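To put a rough, purely hypothetical number on the point: the rate a surfer actually experiences is bounded by the slowest link in the chain, so a faster local loop only helps so far. A minimal sketch in Python, with invented figures:

def end_to_end_rate(link_rates_kbps):
    # The effective rate of a chain of links is that of its slowest link.
    return min(link_rates_kbps)

# A 512 kbit/s ADSL local loop feeding a well-provisioned backbone but a
# poorly connected remote site still delivers only the remote site's rate.
links = {"ADSL local loop": 512, "backbone": 2048, "remote site": 64}
print(end_to_end_rate(links.values()))  # -> 64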
BT will inevitably roll out ADSL, but probably more slowly than the commentators would like, even though the full benefits of fast internet access will not be realised for a good while. What does this mean for the consumer? ADSL should be considered in the round, as one of a number of protocols for data transmission on telephone wire. It will not be the only mechanism, just as the telephone line will not be the only medium – to get the fullest picture we must think of wire, cable, fibre, satellite and wireless and the variety of protocols they support. The biggest issue will ultimately be cost and there is currently such a diverse set of pricing models that it is difficult to gauge the relative benefits of each. This will change, but in the meantime ADSL is more likely to begin with a whimper than a bang.
(First published 1 July 1999)
07-04 – Local loop open for competition
Local loop open for competition
OFTel, the UK Telecommunications watchdog, has announced that BT’s monopoly on the local loop has come to an end. It is still unclear what this will mean in terms of implementation. It seems that two options exist: either BT can sublet its existing services to outside companies (in much the same way as electricity can be bought from a number of companies, despite the provider remaining fixed); alternatively, other organisations can provide alternative backbones to which the local loop equipment can connect. As with ADSL, one likely outcome is that any rollout of new services will be slow, at least in the short term. The longer term is more positive, particularly for non-metropolitan areas which are more likely to benefit from new services than they would have been if everything were left to BT.
Lest we forget, though, there is a pretender in the wings. Wireless communications do not currently support the necessary bandwidth for data communications. With a new standard ratified by the ITU earlier this year, however, it is only a matter of time before the necessary equipment and infrastructure are put in place. Add ingredients such as Bluetooth, a local wireless standard which will transform home networking, to the pot and things become very interesting indeed. If the fixed line operators do not get their act together over the next couple of years, others will be happy to step in and take the business.
(First published 4 July 1999)
07-04 – XML – the common language is half the battle
XML – the common language is half the battle
Two recent news items point to both the crisis and the opportunity offered by XML, the eXtensible Markup Language which looks set to be the standard language for business to business communications. First, IBM and Rational have announced that they are hooking up their software development environments via an XML bridge. Second, Oracle has announced an XML interface to its 8i database management system. Both show the importance of metalanguage standards, or the lack of them.
The first article is an example of what is possible if such standards are in place. The Object Management Group has ratified both the interchange mechanism and the metalanguage definition, in the form of XMI and UML respectively. The net result is that two applications know how to communicate with each other and they also understand what is being communicated.
How different things look in other sectors, as indicated by Oracle’s modus operandi. XML contains tags to represent specific types of data. Oracle have taken a low-level approach, providing an engine which can be configured to recognise the tags in XML content, enabling it to remain standards-independent. This differs from Microsoft’s approach, which has involved setting up a standards body to define the tags. Complicated? Yes it is. The net result of the absence of standards has been a free-for-all, with different vendors trying to impose their own standards by strength or stealth (and Oracle and Microsoft are just two of many players in this game).
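To make the problem concrete, here is a small, hypothetical illustration in Python (the tag names are invented, not taken from any vendor’s actual schema): two well-formed XML documents can describe the same purchase order yet remain mutually unintelligible without an agreed set of tags, or a configurable mapping engine of the sort Oracle is providing.

import xml.etree.ElementTree as ET

vendor_a = "<Order><CustomerRef>1234</CustomerRef><Total currency='GBP'>99.50</Total></Order>"
vendor_b = "<purchase-order><buyer id='1234'/><amount ccy='GBP'>99.50</amount></purchase-order>"

for doc in (vendor_a, vendor_b):
    root = ET.fromstring(doc)
    # Both parse cleanly; the hard part is knowing that <CustomerRef> and
    # <buyer id=...> mean the same thing, which is what a shared schema
    # (or a configurable mapping layer) is for.
    print(root.tag, [child.tag for child in root])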
Eventually, one side will win and the other will lose. What a shame. As the software development community has already demonstrated, the benefits to all players of having standards in place are great. The stakes are high, admittedly, but it is likely that the costs of the battle will far outweigh the benefits, to all but a few.
(First published 4 July 1999)
07-04 – Yahoo bows to people pressure
Yahoo bows to people pressure
After a not-so-long-running dispute, Yahoo has finally agreed to the demands of its Geocities users. Following Yahoo’s acquisition of Geocities, the organisation attempted to enforce its own terms of service on the many thousands of Geocities homestead owners. It may have expected a backlash, but it certainly didn’t anticipate the scale.
Considering Yahoo’s Web heritage, this lack of foresight on Yahoo’s part is remarkable. First off, Geocities existed (and, indeed, continues to do so) on the back of its user community. To fail to consult the community which is the very reason for its existence might be considered folly. To attempt to take away from this community what could easily be considered an online right starts to defy belief. Secondly, let’s think about what Yahoo were attempting – to gain copyright over the individual Geocities Websites. It is difficult to imagine a scenario where the mass of site owners would agree to this.
It should be noted that the principles behind the two organisations – Geocities and Yahoo – are very different. Yahoo is a portal organisation, letting its users access services at the cost of enduring banner ads and spam. Geocities is a hosting organisation, providing free facilities to its users with the cost (in advertising) borne by individuals browsing the sites. Clearly, one does not easily map the Geocities model to that of Yahoo. A fact that Yahoo is learning to its cost.
Yahoo is still not giving in, not fully – it has left the door open for more battles in the future. Ultimately, this is a lesson in net power – one which all organisations intending to exploit the Internet would do well to take note of.
(First published 4 July 1999)
07-14 – CA SANITI illustrates the problem with management frameworks
CA SANITI illustrates the problem with management frameworks
Enterprise management frameworks, since their inception in the early nineties, have been seen as the “ultimate answer” to the questions posed by managing the applications, hardware and network devices that make up our corporate infrastructures. This would be true if the world stood still, but unfortunately it does not.
CA announced yesterday an initiative for the management of Storage Area Networks, known as SANITI or SAN Integrated Technology Initiative. Essentially this equates to enabling the management of SAN hardware by CA’s Unicenter management framework.
Once again, it would appear that devices are rolling onto the market while framework vendors are struggling to keep up. No date has been given for the release of SANITI, though hardware vendors such as Compaq and Hewlett Packard are already shipping their SAN devices. Network managers must also plan time to reconfigure their management architectures to take account of the new devices.
Unfortunately, it will ever be thus. Increasing use of the Internet, wireless devices and the arrival of appliances on the market at an ever increasing rate are combining to transform our infrastructure topologies and make obsolete our modes of operation. For example, uptime is only one criterion to be judged for an E-Commerce site – throughput, response time and transaction management are also issues to be addressed against a backdrop of new types of application such as application servers.
Investors in enterprise management software should recognise that such frameworks can only solve part of the problem, and that deployment is not a one-shot operation. Best of breed approaches can help, but go against the “one size fits all” policy which goes towards justifying the inflated price of frameworks. Frameworks can help, particularly to manage the integrity of the core infrastructure. However they cannot be expected to keep up with all developments in the domains which they are supposed to be managing.
(First published 14 July 1999)
07-14 – IPV6 picks up momentum at last
IPV6 picks up momentum at last
Five years ago, the doomsayers were predicting that before long, the number of available IP addresses would reduce to a trickle. Back then the solution was said to lie in IP version 6, which replaces the 32-bit addressing scheme with a 128-bit scheme.
The prediction of the demise of IP version 4 – the current standard for internetworking – never quite went away. The forecasted end-date is now 2003, and the solution, as ever, is still IP version 6. V6 also comes with increased capability for reliable routing and performance, and is touted by some as the enabler for the convergence of broadband and packet switched networks. Now, with the launch of the IPV6 forum, the standard for the next generation of IP networking seems ready to lumber into reality.
The future is not necessarily going to be reached without some pain, as hardware devices currently running IP V4 will have to be changed over to the new protocol. While commentators predicting a worse catastrophe than Y2K might be overstating things (we have not, after all, seen the full might of the Y2K monster as yet) it is clear that a lot of work will need to be done. One reason why the IPV6 changeover should cause less pain than Y2K is that it is clear what needs to be done. All devices currently running an IPV4 protocol stack will need to run an IP V6 stack. If they cannot, then they will need to be replaced, otherwise they will not work. This is a simpler equation than attempting to identify the compliance of code to avoid data corruption: either devices will work, or they will not. Less clear are the approaches to be adopted by device manufacturers for the V6 changeover. While most punters can see the benefits of the new addressing scheme (despite the incomprehensible addresses relative to the simplicity of the current standard), they are the ones having to bear most of the pain for what is, after all, an initiative from the device manufacturers.
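For a sense of the scale of the change, a minimal sketch using modern Python’s standard ipaddress module (the two example addresses are reserved documentation addresses, used here purely for illustration):

import ipaddress

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
print(2 ** 32)     # 4294967296
print(2 ** 128)    # roughly 3.4e38

# The notation changes too: dotted decimal gives way to hexadecimal groups.
print(ipaddress.ip_address("192.0.2.1"))      # an IPv4 documentation address
print(ipaddress.ip_address("2001:db8::1"))    # an IPv6 documentation address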
We will expect to see significant support from vendors to enable their customers to make the jump to “the new internet”. In particular, devices from now non-existent companies (merged away by the larger players) should be supported to the same extent as newer devices. Some pain (and associated cost) is inevitable, but like Y2K, so is the change. And to think we were all going to take a breather after the millennium celebrations were over.
The IPV6 forum can be found at www.IPV6forum.com
(First published 14 July 1999)
07-14 – Jiro clarifies SUN’s open intentions
Jiro clarifies SUN’s open intentions
Yesterday, SUN announced the launch of its Jiro initiative, which aims to provide a cross-platform standard for the management of storage devices. So what’s it all about?
Essentially, Jiro refers to the provision of a set of Java APIs for storage management such that applications can access storage resources directly, regardless of what operating system is being run. Jiro does not provide storage management functionality (such as that from Veritas), nor does it affect the underlying storage protocols or architectures. Rather, it acts as a middle layer, enabling applications to work with resources without having to reinvent code.
Jiro runs on the Java Virtual Machine, and this is how it retains its machine independence. Jiro also depends on Jini, the SUN directory initiative, for a number of services including discovery of storage resources.
It would probably be a little harsh to accuse SUN of sinister motives, but J-based software standards would seem to be moving into domains far removed from the original ambitions of Java. The JVM is essentially a stripped down, free operating system: through JVM extensions such as Jini and Jiro, plus application support through the Enterprise Java Bean specification, Java is starting to encroach on areas which are worth a lot of money to certain players. Such facilities, it could be argued, reduce the need for an operating system to a device-specific microkernel – maybe they don’t yet, but there is every reason to believe they will in the future.
In fact, the net result of Java-based “open standards” is to encourage commoditisation of operating system software and hence to reduce its relevance. This, clearly, increases focus on both the developed applications and the underlying hardware. SUN’s market is clearly the latter; it is fascinating to see how many applications vendors are also buying into this vision. Consider this: Microsoft were sued for modifying Java to encourage use of their own operating system. But can Microsoft sue a standards body (which is the ultimate home of the Java-based family) for promoting “standards” which directly encroach on its own territory? Of course it can’t. While we welcome the arrival of cross-platform resource management from the application development standpoint, we would do well to remember that there is more than one way to rule the IT world, and SUN’s aspirations are no different to anyone else’s.
(First published 14 July 1999)
07-16 – 8i reasons to go Oracle
8i reasons to go Oracle
When is a database not a database? When it is also an application server, a development environment, a search tool and a Java Virtual Machine. It would appear that Oracle customers find the bolt-on features a side issue to the database’s core competence – that of scalable, performant information management. It is difficult to gauge whether the movers and shakers within the company are too worried – indeed, as Oracle’s market share continues to rise, the success of the new features (or otherwise) would seem to be of only minor importance. For now.
Oracle’s “additional features” are far more than that. A three-layer architecture comprises the user interface, the application layer and the database, and Oracle 8i has something to offer in each layer. Considering Oracle’s moves to strip down the operating system (with Raw Iron, now known as its Information Appliance), it is possible to imagine an IT development and operational system consisting of no more than hardware and Oracle software. 8i is not a database with add-ons, it is an application environment with an information store.
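For readers who prefer their architecture in code rather than diagrams, a deliberately trivial Python sketch of the three layers follows; it illustrates the pattern only, not Oracle’s actual product interfaces, and all the names are invented.

# Data layer: the information store (a dictionary standing in for the database).
orders = {1: {"item": "widget", "qty": 3}}

# Application layer: business logic that mediates all access to the data.
def get_order_summary(order_id):
    order = orders[order_id]
    return f"{order['qty']} x {order['item']}"

# Presentation layer: the thin client merely renders what the server returns.
def render(order_id):
    print("Order:", get_order_summary(order_id))

render(1)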
Hey, they’re all doing it. Just two days ago we showed how Sun’s J-based strategy was putting the squeeze on both the OS and the application architecture. SUN haven’t produced JQL yet, but the bets are on that they are thinking about it.
So what are we seeing here? The Internet has resulted in new concepts, paradigms and other architectural clichés upon which applications may be built. Things are stabilising fast – the thin-client, three layer architecture, centralised on the server, is becoming the de facto approach. Vendors from all disciplines are keen to move into this newly formed space and make it their own. The standardised, stable world of applications servers is no bad thing for systems implementors but will inevitably spell a horrible death for many vendors who are currently converging into the same space. As with the hardware market, the increasingly stable architectural boundaries become the battle lines and it is a battle from which there will emerge few winners.
(First published 16 July 1999)
07-16 – Apache grows up – and up
Apache grows up – and up
It is not only Microsoft’s admission that it is running Apache on some of its MSN Web servers that indicates the growing maturity of the server software. The loosely knit federation of Apache developers have formed the Apache Software Foundation, a not-for-profit corporation which aims to continue Apache’s development in its open source vein.
Apache’s maturity is bolstered by its global takeup. According to a recent survey, Apache is now the software in use on over half of the world’s Web servers. This trend is set to continue, as the formation of the ASF will be seen as good news for the corporate world. It can be a difficult decision to take as an IS manager, to use the industry standard (and free) Apache, or to follow a traditional model of buying licensed software which comes with a nominal offer of support, services and upgrades. The reality is that the Apache community is as good at supporting its software as any corporation, but making the leap to public domain software is seen, in some circles, as a bit like voting for the Green party.
Where will it end? There is always the chance that, once Apache has captured the majority of the Web Server market, it starts to charge for services. Considering the formation of the organisation (and even if it keeps its promise to avoid profit) it seems difficult to believe that the ASF will be able to avoid making money in the future. Any revenues are unlikely to come through direct sales of the web server software itself, more through leveraging the Apache brand. For example, the organisation could sell added-value services or, most likely, sell new software packages that build on the Apache brand. Indeed, if Apache were floated, it would be likely to make millions on the strength of its brand alone.
(First published 16 July 1999)
07-16 – PC-on-chip spells end for consumer hardware costs
PC-on-chip spells end for consumer hardware costs
It’s all coming together. A number of announcements, from NatSemi, Microworkz, Tiny, AOL, MSN and SEGA to name but a few, are indicators of convergence in (or on) the consumer market. Let’s look at them.
- National Semiconductor unveils the Geode, which combines PC functions onto a single chip. Proposed market: set top boxes, mobile phones and handheld devices.
- Microworkz plans to distribute its sub-£200 iToaster PC through Dixons.
- Tiny Computers in the UK start giving away PCs to subscribers to their newly-launched telecommunications service.
- AOL and MSN offer free PCs to users taking out a three year subscription to their online services.
- SEGA announce that their Dreamcast console will have Internet access.
What does all this mean? NatSemi’s announcement equates to reducing the cost of PC-type devices to well below the current level. Microworkz’ iToaster is a set-top-box lookalike PC which demonstrates the way the technology can be taken – interestingly, the iToaster runs Linux, mainly (it is understood) to avoid the increased cost that would be incurred if a commercial operating system was being used. Tiny are giving away a £300 PC, which is as good an indicator as any of the maximum amount of material which can be “given away”. AOL and MSN demonstrate the head-to-head battles that are going to be fought by the internet giants – indeed are already being fought, exchanging hardware for mindshare. SEGA’s announcement is indicative of the scale of the convergence, as the games console becomes as powerful as a PC (if not more) and sports similar functionality.
There are plenty more announcements from the past few months which could be quoted, all indicating service providers giving stuff away, hardware getting cheaper (and therefore easier to give) and different devices – consoles, receivers and computers – becoming one and the same. To extrapolate only slightly, we can see consumer hardware giveaways becoming the norm over the next 18 months. This appears good news for the materialistic amongst us, but only because the situation is being judged relative to the recent past where such hardware would cost well over a grand. In fact, consumers have short memories and will come to expect such deals.
What is likely to cause more of a stir are the shockwaves expected to reflect back onto the industry itself. Hardware manufacturers are already seeing their profits cut to the bone as they compete for volume sales. Convergence suggests that previously ring-fenced groups of manufacturers will find themselves pitted against each other. Their markets will, however, be less and less directed at the consumer; instead, they will be selling to the service providers. Here, also, we expect the current shakeup to continue. Resellers could be the worst hit, unless they reorient their channels – peripheral devices such as printers, imaging devices and storage have a longer life, but pure PC sales will be damaged significantly.
The bottom line is that the computer and service infrastructure will, in the future, have little more than gadget value to the consumer market. People will purchase what they consume and need – food, clothes, bricks and mortar, the latest film. IT will act as an enabler for these purchases and will take its cut from their suppliers, but the time of buying PCs for their own sake is virtually over.
(First published 16 July 1999)
07-31 – Hotmail hacked, but who's to blame?
Hotmail hacked, but who’s to blame?
Yes, it is highly embarrassing to Microsoft that their Hotmail service, currently with over 40 million subscribers, was broken into by a bunch of hackers. The fall-out from the incident is, as usual, indicative of Microsoft’s reputation, both concerning the security of the Microsoft software that Hotmail runs and the inability of MSN to inform its customers that anything had happened.
The wider issues of this incident are just as telling. First, given the fact that the computer industry has been in existence for at least thirty years, the question must be asked why so-called global-class systems are still open to attack. This boils down, unfortunately, to the global IT community’s acceptance of mediocrity: the theory and practice of computer security is well understood, but its implementation is often seen as non-core functionality. Products have come to market too soon, have been rolled out without sufficient attention to security issues and have been left to evolve into the complex morass we now know as “infrastructure”. Even today, new versions of operating systems software are released without being properly tested. We all know this but we foolishly accept it as the norm.
In many organisations, clearly including Microsoft, security has become a firefighting exercise: it would appear that concerted attacks are likely to succeed. The Internet has not caused the situation but has exacerbated it, by giving outsiders access to unsuitably configured corporate systems and by providing novices with access to a wealth of up-to-date information about security weaknesses and how to exploit them. All this serves only to undermine Internet confidence - with reason, users are unwilling to risk even their personal credit card details by exposing them to the Web. Despite this, however, companies continue to over-expose themselves by ripping a hole through to the Internet from their private networks, using inadequate security software or poorly configured firewalls as weak protection.
Let’s face it, we knew it was a risk to trust Microsoft with our email, just as it would be a risk to trust any organisation. Maybe one day we will be able to sue for breach of trust - this might be locking the door after the horse has bolted, but could force corporations large and small to treat security with the importance it clearly merits.
(First published 31 July 1999)
07-31 – Sun's rising Star to catalyse device revolution
Sun’s rising Star to catalyse device revolution
Things have moved on since the first reports of SUN Microsystems’ acquisition of Star Division. It surely is too much to ask that the giant took our advice, but it would appear that our suggestion, that this was far more than a battle for office automation market share, was uncannily accurate.
In the article where we first commented on the acquisition, we put it in the context of Microsoft’s decision to move its policies from a PC-centric position to “great software.. on any device”. Fair enough - the advent of better bandwidth, both inside and outside the corporation, coupled with the continued drive to push down TCO (promoting re-centralisation onto enterprise servers), inevitably points to a thin client future. Microsoft, as a software company, wants to reposition itself to take advantage of this new landscape, by selling operating systems and office software which support the thin client paradigm. Recent announcements concerning Office 2000, coupled with the existence of cut-down versions of the applications for handheld devices, reflect this.
And then, here came SUN Microsystems. SUN is a hardware vendor that has consistently used third party software to promote the sales of its platforms (remember the encyclopedia-like Catalyst catalogue?). Then, as now, SUN’s main interest is in selling hardware: whilst conceding that it has lost the battle for the desktop (a battle which, as indicated by Microsoft’s changing mission, is going away), SUN is concentrating its efforts on server hardware. As a calculated move to increase server sales, SUN are to give the StarOffice software away - a move which (considering its high quality) may well make significant in-roads into Microsoft’s market share, just as Linux and Apache have done before it.
Clearly, whilst Scott McNealy will undoubtedly derive pleasure from cocking a snook at Microsoft’s share of the desktop software market, this is not their main target. The future backdrop of IT, it is well recognised, is the Internet, which will be the communications medium for delivering all kinds of services to consumer and business alike. We would expect companies such as Yahoo and AOL to capitalise quickly on the free availability of the server-based version of the Star software, known as StarPortal, which will enable them to roll out word processing, spreadsheet and other facilities in addition to their existing email, scheduling and information services. This opens the door to use of set top boxes, handheld devices and even mobile phones for more than just web browsing: SUN stands to make money from the increases in centralised processing power that this new model suggests.
There are still some shortfalls in this approach. The issue of consumer bandwidth is still not resolved: even given StarOffice’s modular approach, there will be an inevitable download overhead each time a new or updated module is required. Security is of equal concern: recent events concerning Microsoft’s Hotmail service (see companion article) are unlikely to encourage individuals to give service providers access to their documents and spreadsheets. Even given these short term weaknesses, SUN’s announcement is clearly a major step on the way to mass market adoption of a device-accessible, internet-centric IT world.
Oh and, by the way, I wrote this article using StarOffice, downloaded this morning as I was deciding what to write. One day all software will be this way.
(First published 31 July 1999)
07-31 – Thor poses Open Source threat to Windows 2000
Thor poses Open Source threat to Windows 2000
Timpanogas Research Group (TRG) announced yesterday its plans for an open source, NDS-compliant directory for Windows NT, Windows 2000 and Linux. This is significant, and not only because of the direct impacts it could have on the directory-based world. TRG have a reputation for building bridges between NetWare, Windows and Linux; despite a “professional” relationship with Novell, their products have often been used to migrate away from Novell. It would seem that, this time, the company is turning its sights towards Microsoft.
Clearly the move to implement NDS-compliant, open source directory software is a blow to Microsoft. The first versions of Windows 2000 will not contain a fully functional version of the much-touted (and awaited) Active Directory software which, despite being announced over two years ago, will still miss the initial Windows 2000 release deadline. Administrators wanting to make full use of the directory functionality will have to wait for the Janus upgrade to the operating system, unless they are prepared to consider the increasingly attractive option of adopting (or continuing with) an NDS-based solution.
Companies such as TRG are not the only ones to benefit from Microsoft’s absence. Novell itself is focusing its efforts on its directory products (as reported on IT-Analysis.com earlier in the month). In partnership with Compaq, Novell have also released a “directory appliance”: it is products such as these which will make moving to an NDS-based solution increasingly attractive.
So what of the indirect impacts? Announcements such as these are indicators of where the open source community is moving, away from its traditional home ground in the academic community. TRG are a commercial organisation whose motives are driven by profit, loss and a desire to get at the big guys. In giving away software, TRG are increasing the chances of its adoption but also are launching an attack on the profit margins of Novell and Microsoft. This trend is unsustainable but may be seen as a David-against-Goliath tactic in the short term: as the giants battle it out, they would do well to watch what the little people are doing.
(First published 31 July 1999)
September 1999
09-01 – Cisco keeps its focus with IBM deal
Cisco keeps its focus with IBM deal
Cisco’s drive towards world networking dominance took a step forward yesterday when it purchased the best parts of IBM’s routing and switching technologies and patents, for an estimated $2 billion. Cisco has rarely been out of the news over recent weeks due to its procurements of technology companies – this latest move should be seen within the context of its overall strategy.
The networking market, it is generally agreed, is in a state of flux. The keyword is convergence – the advent of technologies such as Voice over IP means that IT systems, networking equipment suppliers and telecommunications companies are constantly treading on each other’s toes while they define their position in the market. Companies such as Lucent, Alcatel and Nortel Networks see the key as amalgamating networking and telecomms. Following some hefty buys, they are now remodelling their internal organisations, not without some difficulty as the two markets have traditionally followed very different business models. Cisco has, for the moment, adopted a different tack, namely to concentrate on acquiring networking companies and technologies which help them to grow into the telecomms space. This more organic approach is likely to cause less internal grief but means that they might lag behind when trying to meet the needs of “pure” telecomms.
All of the companies recognise the value of services. This is a trend which can be seen across the technology industry, with vendors from application development tool suppliers to networking equipment manufacturers keen to capitalise on what is clearly still a growing market. IBM’s own model is clearly a good one to follow: IBM have grown their services arm within the last three years into a multi-billion dollar business. Again, Cisco has chosen not to go with the networking flock: while Nortel Networks and Lucent have their own services arms, Cisco are partnering with companies such as IBM and KPMG.
Cisco’s policy is a sensible one, namely “do what we do, and do it well” whilst growing the company into new technology areas and using partners for non-core business. It remains to be seen whether this strategy will be world-beating, but the alternative can result in a company growing too fast without being able to assimilate its constituent parts (as demonstrated by Alcatel a couple of years back). Interestingly, it is a policy which is also paying off for IBM: the deal with Cisco enables it to focus on its core businesses, namely as a computer manufacturer and a services provider.
(First published 1 September 1999)
09-01 – Mac is back with new Pentium-killers
Mac is back with new Pentium-killers
What an incredible turn-around. Over the past year Apple’s share price has more than doubled, quarterly earnings figures continue to beat analysts’ expectations and, with yesterday’s announcements of new machines, the seismic turnaround in the company’s fortunes looks set to continue. Why?
New machines were unveiled yesterday with claimed performance three times faster than the fastest available Pentium III. The availability of such powerful, well-designed machines will help convince the marketplace of the viability of the products, both as a platform for now and an investment for the future. This is, however, only part of the reason for Apple’s return from the ashes.
If the PC industry was about technology alone, we would all have Macs. They were the first to the market with an intuitive interface, affordable networking and WYSIWYG (remember that?) output. Unfortunately, big Mac screwed up big time: it was slow to recognise the arrival of the PC (big and clunky though they were at the time) and it was slower to react when PC clone and component manufacturers pushed the prices down. PCs became affordable to the mass market, and Macs did not. Wintel won and Jobs lost. End of story.
But not quite. There was a time, not so long ago, when it was assumed that we only needed one sort of computer. It was called a PC, it ran Microsoft software and, well, that was that. Microsoft on Intel was considered the safe bet. This time is still here, but it is nearing an end. Everyone recognises it: look at Microsoft’s new corporate tag line “great software… for any device” – no mention of PCs. The device world is nearly upon us, with the one-size-fits-all PC being just one of many device types, including thin clients, mobile phones, PDAs, toasters, fridges… and Apple computers. Yes, that’s it. It is becoming OK not to have a PC. This is the real reason for Apple’s turnaround: the software is available, the performance is there and the company will be around in a year or so, so why not get an Apple computer?
Of course, Steve Jobs is to be credited for not letting the company lie on its back with its legs in the air. Bill Gates injected new life into Apple by committing to make Microsoft applications available for MacOS. There are lots of reasons why Mac is back, but the most fundamental reason of all remains: because it can be.
(First published 1 September 1999)
09-01 – OS vendors on the Merced starting grid
OS vendors on the Merced starting grid
We have said repeatedly in the past that Merced is likely a test bed technology, enabling the potential of the Intel 64-bit architecture to be established and paving the way for McKinley. However this is unlikely to prevent the inevitable operating system wars.
Merced silicon had barely left the fabrication plants before the rumour-mill started up. Currently the lines are being drawn between a 64-bit version of Linux and a Windows 2000 beta for IA-64. HP-UX and Monterey, both Unix flavours, are next on the list.
What can we expect to see? The most likely event is that Intel’s own release plan will determine the release plans for the operating systems. Microsoft has already announced, for example, that its 64-bit operating system will be available at the same time as Merced, “before the end of 2000.” There will likely be similar announcements for HP-UX, Monterey and even Linux, which will need a marketing push to position itself effectively against the Microsoft and Monterey spin machines.
Then, there will be the inevitable betas and delays. It is Microsoft who are the most disadvantaged here, as Monterey, HP-UX and Linux development for IA-64 is already underway but Microsoft will not be able to fully commit resources until after the release of Windows 2000. Sure, developers will be working on 64-bit prototypes, but Gates and Ballmer are unlikely to risk W2K delays by reallocating developers elsewhere. Finally, we will see the inevitable benchmarks and bakeoffs, with each vendor, consortium or community demonstrating beyond reasonable doubt that their OS is the best.
There are several factors which will decide the success of the operating systems in the IA-64 space. The first is time to market: any delays will severely jeopardise the market’s interest. The second is application availability, which is less and less of an issue even for Linux. The third is down to marketing and acceptability. Here Microsoft is very strong but Linux is carving its own credibility. HP-UX is likely to have support, not least from the Hewlett Packard user community, but it is unlikely to be “the one”. The riskiest player has to be Monterey, which despite strong backing from the vendor community, may well end up as the Unix also-ran.
Overall, then, interesting times ahead. Obviously for us commentators, who capitalise on the gossip that this situation will generate. Also, though, for the vendor community, who even now may be staking the future of the company on which of the operating systems is likely to dominate. Finally, the user organisations have a clear decision to make, but will probably prefer to wait until the dust settles before linking their IT strategies to any specific operating system.
(First published 1 September 1999)
09-02 – Intel and IBM bring processing to the network
Intel and IBM bring processing to the network
Capitalising on the flurry of interest generated by their recent demonstrations of Linux and Microsoft operating systems running on real 64-bit silicon, Intel have announced a new foray into the networking chip market. And snapping at their heels are IBM, fresh from selling off a bunch of network technologies to Cisco. It makes sense.
Intel’s “niche market” is the Internet, where it aims to address the spiralling need for new services and network functionality. Existing products are very much about the lower layers of the stack, performing network-level functions such as routing, compression and bandwidth management. Higher level functionality such as management, voice/video processing, multimedia and graphics handling is usually addressed by operating system or application software. This separation is for two reasons: the first is that “this is how it has always been done” and the second (which derives from the first) is that network processors are currently limited in what they can do, being mainly ASICs in which the functionality is hard coded on the chip.
What Intel have proposed is a hybrid device which is optimised to work with networking hardware. The device is known as the Internet Exchange Processor or IXP. At its core is a StrongARM processor which is surrounded by a number of RISC-based microcontrollers. This means that the devices can be programmed to support both lower level and higher level services: the obvious advantage is that the programs can be more easily upgraded to support new technologies and service requirements. Also, a significant load is taken off the core processing capability of the machine. The IXP is designed to be linked with other IXPs, promising terabit throughput.
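As an aside, a purely illustrative sketch may help to show what “programmable” buys you here. The following Python fragment is a toy model of a packet pipeline whose handlers can be swapped at run time – an assumption-laden illustration of the concept, not Intel’s IXP microcode or API.

# Toy model of a programmable packet pipeline (illustrative only, not the IXP API).
# The point: handlers can be added or replaced in software, unlike an ASIC
# whose functionality is fixed at fabrication time.
from typing import Callable, Dict

Packet = dict  # hypothetical packet representation, e.g. {"proto": "ip", "payload": b"..."}

class ProgrammablePipeline:
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[Packet], Packet]] = {}

    def register(self, proto: str, handler: Callable[[Packet], Packet]) -> None:
        # A new service (say, a compression scheme) becomes a software upgrade,
        # not a new chip.
        self.handlers[proto] = handler

    def process(self, packet: Packet) -> Packet:
        handler = self.handlers.get(packet["proto"], lambda p: p)  # default: pass through
        return handler(packet)

pipeline = ProgrammablePipeline()
pipeline.register("ip", lambda p: {**p, "routed": True})
print(pipeline.process({"proto": "ip", "payload": b"hello"}))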
IBM have rushed out a similar announcement, clearly caught on the hop by Intel. How wrong we were yesterday, to say IBM were defocusing from network devices! Instead, it seems, the sale of technology to Cisco allows them to concentrate on network processors. This move is interesting: having lost the CPU battle to Intel, IBM see a fresh battlefield for the two companies, on which the winner is by no means decided.
The move by Intel and IBM makes a lot of sense. Technologically, the addition of serious processor capability to networking devices is long overdue. Both companies have the skills to integrate processor technologies with network technologies, plus both have the R&D budget to pay for their development. From a business perspective, Intel are not directly competing with the network equipment manufacturers, at least not yet, and as a result have forged alliances with a number of vendors. In this case, it is chip suppliers such as MMC who are the most likely to suffer as the Ciscos and Newbridges jump on what may well turn out to be another “Intel Inside” bandwagon. IBM are already building network devices such as routers and switches so the chances of partnership are limited. Winners and losers aside, the development is likely to be of benefit to end users, not least through the old stalwart of increased functionality and performance with reduced costs.
(First published 2 September 1999)
09-02 – Solaris tightens Merced screws on Microsoft
Solaris tightens Merced screws on Microsoft
In our recent story about the Intel 64-bit architecture, we neglected to mention one operating system vendor which has, until recently, politely avoided showing its true colours. Highly unusual behaviour for SUN, which has now announced a Win-beating timetable for its flagship operating system, Solaris.
SUN’s plans are for a release of Solaris for the IA-64 simulator “some time this autumn” – this is based on the assumption that most development will be carried out on the simulator for the immediate future. As we discussed, it is highly unlikely that Microsoft will have much to show before the middle of next year.
Microsoft have had a lot of wind in their sales [sic] in the past, which has caused end users to wait for Microsoft releases rather than use available products from other vendors. This was true of applications software, when users had already invested in Microsoft operating systems. It seems most unlikely that the same will hold true for operating system software. The other differentiator was price. Inevitably, OS pricing will be driven down, both by the commodity nature of the IA-64 OS market and by the fact that at least one, namely Linux, will be available for free. Application availability was never really an issue on the server.
It appears that the biggest software company in the world may, already, have missed the boat.
(First published 2 September 1999)
09-02 – System I/O fabric to transform computing architecture model
System I/O fabric to transform computing architecture model
Peace has been declared, it was announced yesterday, between the vendors involved in defining a replacement for the PCI bus. In fact, so much attention was paid to the avoidance of battle that the potential of the proposed technology was missed by most.
The keyword is “fabric”. System I/O is a fabric-based architecture which enables any device to communicate with any other device. Hardware switching is used to enable far higher throughput speeds than previously available, from 2.5Gb to 6Gb per second – this compares with a measly 132MB per second from PCI. This is all well and good, suggesting that PC devices such as processors, network cards and memory will no longer be limited by bus bandwidth (particularly as the 6Gb top end will probably be extended in the future). Two points have largely been missed, however.
The first is that the system vendors are not the only organisations experimenting with switched fabrics. One notable group is the storage community, for whom the switched fibre architecture is key to the SAN strategies of most companies (including StorageTek, Compaq and HP). What interests the storage vendors is the ability to link switches (by fibre) over long distances: currently, the maximum distances between switches are touted as between 10km and 50km. How might the System I/O fabric benefit from long-distance interconnect? For example, a single, conceptual device, running a single operating system, could in fact be a pair of mirrored devices, each with its own processor, disk and memory. The processor, disk, memory and graphics card become devices in their own right which, given a switched fabric with a fibre interconnect, could be physically positioned anywhere within a 50km radius yet configured dynamically to make best use of the resources available at a given time.
The impacts go both ways, so…what impact will the System I/O fabric have on the SAN community? Do we need both types of fabric? How will they interact?
A second point is the inclusion of IP version 6 in the System I/O specification. This effectively removes the need to consider system components as part of the same machine, virtual or otherwise. The Internet is currently IP V4, but IP V6 will be included in most network-ready devices in the future. The potential is clear – that the system bus replacement, System I/O, becomes an integral part of the Internet infrastructure. How this will happen is still a matter for speculation, but the potential this has, of moving us into a device-based world in which bandwidth is a forgotten issue, is clear. Given that System I/O is likely to be one of the most important enabling technologies we have seen, it is only to be hoped that the new consortium can come up with a snappier name.
(First published 2 September 1999)
09-03 – From Kidneys to Iridium shares – what will eBay auction next?
From Kidneys to Iridium shares – what will eBay auction next?
eBay kidneys are back in the news again. A similar tale, of an auction being pulled as it broke the rules, concerned Iridium. Such stories, hoaxed or unlawful, serve to illustrate the huge potential of auctions to sell, well, anything.
You will probably have seen the newest eBay kidney story by now (there was another attempt to sell a kidney in May, which was also pulled). By the time the “product” was removed from the site, bidding had reached $5.75M. Pretty good for a body part, but probably worth it to someone. As for the Iridium tale, apparently Iridium had already filed for Chapter 11 bankruptcy prior to an opportunist’s attempt to sell their shareholding on the auction site. The Securities and Exchange Commission, not amused, requested that the offer be withdrawn: this was agreed, despite the “shares” no longer falling within the SEC’s jurisdiction. Of course, auctioning shares may leave you flummoxed – what is the stock market itself, if not an auction house? This serves to illustrate that people are prepared to auction, well, absolutely anything at all. And this is the crucial point.
As discussed in eRoad, a Vision Series report from Bloor Research, the auction is one of two mechanisms that are growing to eventually replace existing markets. The other is the pure market, which exists to sell products with a clearly defined value. Where the value of a product cannot be predefined, it must be turned over to the market where its value will be defined at the point of sale, that is, the auction. These two markets are not mutually exclusive: sites such as Last Minute (www.lastminute.com) are profiting from the fact that the prices of travel products fluctuate wildly as they approach their sell-by date. The success of eBay, and the rushed creation of me-too sites across the globe, are testament to the viability of auctions.
As the two stories above illustrate, however, the legal frameworks which govern auctions are not yet fully geared up. This is a global issue – for example, there is understood to be a thriving black market trade in third world body parts. The Internet may be the great liberator but it also opens the door to misuse on an unprecedented scale. Off-world banking is already being mooted: coordinated action is required before auctions are put out of the reach of even international law.
(First published 3 September 1999)
09-03 – Microsoft faces Hobson’s choice for W2K
Microsoft faces Hobson’s choice for W2K
Microsoft’s most recent announcements concerning the Windows 2000 release date are unsurprising. Coupled with the delays to the most recent developer release, the Seattle company are insisting that the product is on schedule. What is most telling is that nobody seems to mind.
As usual, the media campaign is a combination of promotion and expectation management. Microsoft are emphasising quality, and seem to have separated the concepts of “final code” from “released product”. But let’s add some common sense here. Rule One of software projects: you cannot compress development time. Time and again, less experienced managers try to turn eight months into four, only to find that, like a tensed spring, the required time jumps back to the initial estimate. It’s true, try it. The only thing that can reduce a development schedule is reduced functionality or, heaven forbid, reduced testing.
So, what are Microsoft’s options? The first is to remove some functionality from the product, but that would be a marketing disaster. It has been done before, of course (hence the telling reputation of Service Pack 1). The second is to risk releasing incompletely tested software onto the market. Of course, this is a strategy known to most vendors, not just Microsoft: release 1 of any product is, generally, avoided by all but the most desperate. The third is to claim a code freeze at the end of the year, then continue development until shortly before the release date. This, employing the term “code chill” rather than code freeze, is again common practice but risks reducing testing time (again) and removes any slack that may be left in the system.
Microsoft are faced with a stark set of choices. The baying wolves of the IT industry will jump on the slightest misfortune of the software giant, so it will be treading very carefully over the next few months. But what of the end user? The advice is clear, and would be the same for any new product of this scale. Take your time. Evaluate the product (and others) in a realistic environment, making sure that it works with existing and planned architectures and configurations of other software. Listen carefully to the experiences of early adopters, and learn from them. Fortunately, the signs are there that end users are adopting exactly this approach. They are “waiting and seeing,” or planning upgrades only when their existing configurations will no longer be adequate. Y2K is, funnily enough, a helpful distraction: no-one is likely to rush into anything in January 2000.
It is extremely likely that Windows 2000 will arrive late, incomplete and buggy (but don’t get me wrong – I’d be delighted for this to be proved untrue). Whilst Microsoft will struggle, yet again, with the blows to its reputation that this will cause, the vast majority of potential users will be unaffected.
(First published 3 September 1999)
09-03 – Open Sky’s wireless PDA future leaves Microsoft out in the cold
Open Sky’s wireless PDA future leaves Microsoft out in the cold
Competition in a market is a good way of encouraging innovation and keeping control of costs. But what do you do if you don’t have any competition? In the Wireless PDA space, 3Com’s answer would appear to be – buy some.
3Com have a share in Open Sky, a joint venture with Aether Systems, Inc. Open Sky’s goal is to provide a US-wide wireless infrastructure to support handheld devices in general, not just the Palm VII. Its two-pronged approach involves, firstly, linking up with existing mobile carriers and, secondly, involving partners to develop modems and software to enable existing devices to talk to the ether. Whilst the initiative remains stateside only, this partnership approach is more likely to result in a European service than 3Com’s existing wireless service, the single-company Palm.net.
The move is interesting because, effectively, it pitches Open Sky against Palm.net. It is difficult to see how Palm.net will compete with Open Sky: the former requires a Palm VII, currently costing nearly three times the price of the cheapest Palm (the newly released Palm IIIe), and it is unlikely that the price of the modem will make up the difference. Palm.net service costs are notoriously high, and likely to be put to shame relative to Open Sky’s fixed rates. On the eve of its national rollout of Palm.net, 3Com have taken a calculated risk – the greater good is to establish a market sector for the wireless handheld device, and the Palm VII/Palm.net combination cannot do it by itself. For credibility, the market needs to extend to all flavours of handheld, Palm or otherwise. Enter Windows CE.
Here is the fundamental element of 3Com’s risk calculation. It must establish a market, a move which is unlikely to succeed if it is linked to any one flavour of PDA. However it wants to retain its early bird advantage over the Microsoft camp, which makes up the lion’s share of the competition. For this reason, Open Sky has announced that it will be supporting Windows CE by autumn of next year. 3Com are relying on the rapid acceptance and uptake of wireless handheld devices, and will do everything they can to encourage the end user to come on board. There’s the rub – the company will even pitch against itself to create and win the market before Microsoft get a look in.
This strategy is fraught with risks. Corporate America is likely to be smarting for a couple of months after Y2K, which could dampen the acceptance of wireless. The success of the Palm was driven by the end user and not the IT department, however – a fact that will have been in 3Com’s calculations. In addition, of course, mobile phone manufacturers are not standing still, and the fruits of the Symbian alliance are expected in the near future. Despite all this, 3Com probably feels it has no choice – it has to steal the initiative, because the risks of not doing so are even greater.
For the moment, from this side of the pond we can only watch. But we watch with interest.
(First published 3 September 1999)
09-08 – Is today the day the doomsayers have their picnic?
Is today the day the doomsayers have their picnic?
9-9-99. The six-character string used by mainframe programmers decades ago to signify “end of file” is sending shivers down the spines of the same people today. Oh yes, I wrote a few of those programs, they say, dispelling any thoughts that this might be an overhyped media myth. As with the big Y, nobody has been able to predict with certainty what the effects of the 9-9-99 problem will be. Today, the waiting is over.
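For those who never wrote such code, a minimal, hypothetical Python sketch (the DDMMYY layout and the all-nines sentinel are illustrative assumptions, not any particular system) shows how a genuine 9 September 1999 record could be mistaken for an end-of-data marker:

# Illustrative only: a legacy-style batch loop that treats an all-nines date
# field as "end of data", so a genuine 9 September 1999 record stops processing.
records = [
    {"date": "010998", "amount": 120.00},  # 1 Sep 1998 (DDMMYY)
    {"date": "090999", "amount": 75.50},   # 9 Sep 1999 - a real transaction
    {"date": "011299", "amount": 10.00},   # never reached
]

def total_until_sentinel(records):
    total = 0.0
    for rec in records:
        if rec["date"] == "090999":  # legacy convention: end-of-file marker
            break                    # the real 1999 record, and all after it, are dropped
        total += rec["amount"]
    return total

print(total_until_sentinel(records))  # 120.0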
Certainly the issue is being taken seriously in some quarters. According to Silicon.com (www.silicon.com), for example, the UK coastguard has chartered extra tugs to bail out ships whose ageing navigational systems are caught short.
As industry watchers, we shall be following today’s events with interest, not only because they will be of concern in themselves. The realities of the 9-9-99 problem are the best guide we have to predicting the events at the end of the year. We are sceptical that nothing will go wrong; indeed, we are amongst the more pessimistic observers. Today’s events will enable us to gauge what might go wrong, what the impacts might be and what needs to happen to resolve the effects. Like a fish-eye lens, Y2K will be a distorted amplification of today’s events, large or small. Let’s just hope they are small.
(First published 8 September 1999)
09-08 – ISPs get tools to help SMEs compete in online economy
ISPs get tools to help SMEs compete in online economy
According to Forrester Research, small businesses’ share of the Web, in terms of revenues, is expected to decline from 9 percent this year, to only 6 percent in 2003. Bleak words indeed.
These predictions are understandable considering that the larger firms are more likely to have the resources to get themselves onto the Web. Even then, the inevitable shakeup of retail organisations is likely to spell doom for many small businesses. Is this just Fear, Uncertainty and Doubt about the future? Unlikely, as this is happening already both because of and despite the Internet. At the same time, however, the Web presents an unprecedented opportunity for businesses large and small: it is the great leveller in which all are equal. This is true, at least, for those companies which are online.
Today’s Internet shopper is becoming more discerning. It is not enough for a site to present a long catalogue of products and a telephone number for orders (although this has been a successful model in the past). Customers want sites to be fast and fun, informative and interactive. Such added value is raising the hurdle to newcomers whose first attempts at establishing themselves online are often proving less than successful.
Initiatives are underway to help resolve this. One set of tools, from Bondware, is directed at ISPs. The tools provide the set of functions that the discerning surfer has come to expect, including chat rooms, newsfeeds, member pages, ad management and a host of other facilities. The most attractive thing about the package is the price: at $6000 for the basic licence (which allows for 25 commerce sites), the cost of entry is lowered considerably, enabling service providers to offer a low-cost service to their subscribers.
The advantage of this approach over the competition, for example, Microsoft’s Small Business Server, is that it is designed for ISPs who are the natural focus, in terms of providing both technology and experience, to help SMEs onto the Web. It also promotes continued development and competition, as ISPs can develop the range of services they provide, enabling companies to pick and choose from a number of ISPs. The other advantage is, of course, that it is available now.
(First published 8 September 1999)
09-08 – Killer App for voice recognition – don’t hold your breath
Killer App for voice recognition – don’t hold your breath
IBM’s newest version of ViaVoice signals a change of tactics for their voice recognition software. The product is to have improved recognition, better user-friendliness and additional features including Web surfing facilities. Will this release prompt the acceptance of VR? Unlikely, not yet.
There are three areas which we can see as benefiting from this technology. The first is dictation. Speaking, ultimately, is a different communication technique to writing. Some dictate letters, but most prefer to write them directly. Will this change? Unlikely, particularly in today’s already noisy offices. Imagine the hubbub of everyone talking inanely to their machines. As I write this, I contemplate using a microphone rather than a keyboard: even with 100% recognition, I cannot see the attraction. Maybe that’s just me. The second area is to provide an interface for users with physical disabilities. There is a clear benefit here, to certain users. Even so, neither of the two areas is sufficient to give killer app status to recognition software.
The third area is in commanding the computer. This is where the potential lies, but unfortunately not in its present state. The ability to launch an application, or even surf the Web for that matter, by voice commands is unlikely to be any more than a fad. Voice recognition will sit on the bench as it waits for the technology which will launch it into ubiquity: intelligent agents. Once computers are capable of following natural language commands, then the time will be right for their communication by voice. For computers, read the whole of the online infrastructure. For example, “Take me to Amazon. I want to buy the latest John Grisham novel,” is a sentence I could imagine speaking out loud, either to my computer or directly to my phone.
The issue is one of time. The recognition software quality is almost there but the ability of the IT infrastructure to make sense of our utterances is not, in terms of both indexing the information “out there” in a sensible fashion and providing the resources to exploit it. Neither is it a priority, so it is likely that voice recognition has a wait on its hands before its perfect partner comes along.
(First published 8 September 1999)
09-08 – Mainframes lead resurgence in software tools
Mainframes lead resurgence in software tools
It is old news that mainframe companies are experiencing an uplift in their fortunes. As the mainframe re-invents itself as the workhorse of the Web, the drive to competitive advantage is causing vendors to light the fires under the slumbering development tools market.
Let’s take an example. Amdahl’s hardware revenue figures for last year beat expectations by 202%. Quite a turn-around. Now the company says that interest is increasing in its ObjectStar development environment. ObjectStar is an integrated collection of tools to enable mainframe applications to be developed – tools include screen and GUI builders, a rule-based engine for business logic and wizard-like facilities to support the generation of common functionality. Whilst the product has a reasonable existing customer base, whose licensing revenues have supported the product’s continued viability, efforts to promote ObjectStar were largely killed off eighteen months ago when they failed to generate sufficient new customers. Things had remained flat until now; both existing customers and new prospects are now enquiring about the product.
Why should this be? The answer, says Chris James, Marketing Manager for Software and Services at Amdahl, is the drive towards competitive advantage on the Net. This makes sense: it is one thing to use an off the shelf package for internet site development, or to select a mainframe over a smaller server to ensure that the volume of transactions can be handled. However, it is the additional services which can set one Web site above the rest. These services need to be defined and implemented at a rate which the packaged tools could not possibly keep up with: in other words, they require bespoke development. At the same time, eCommerce is driving the requirement for an unprecedented level of integration with external information feeds and back end systems, new and legacy. To deal with this level of complexity, whilst developing added value functionality in an ever-decreasing timeframe, companies are turning to development tools with renewed vigour. Particular interest will inevitably be garnered by enterprise RAD tools such as ObjectStar.
A few words of reality. There does not exist, anywhere on the market today, a fully integrated development environment which takes into account the full spectrum of needs for Web Site development. Areas such as site editorial, management of external components, monitoring of statistics and configuration management are still the domain of a disparate set of tools which remain disconnected from the “core functionality” of application serving and transaction management. This situation will change: the renewed interest in development tools will, in the short term, lead to promotional campaigns for existing products. In the medium term it will also spawn advances in functionality as the suppliers attempt to leap-frog each other to corner the burgeoning market.
In a way, history is repeating itself. There is nothing happening here that did not happen during the CASE “revolution” of the late seventies and early eighties. We will see the market consolidating, silver bullets flying and the inevitable disappointments when they do not solve all ills. We will also see a new generation of software tools which, used correctly, can enable businesses to reap the rewards of the Web.
(First published 8 September 1999)
09-08 – Nor.Web left on the wayside of the superhighway
Nor.Web left on the wayside of the superhighway
A surprise announcement yesterday revealed Nortel Networks and United Utilities’ intentions to pull out of their Nor.Web joint venture. What does this mean? Well, nothing, probably.
The joint venture existed to develop and promote the Powerline technology, which was much-lauded when first announced a couple of years ago. Several deals were struck, for example with electricity companies in the UK, Germany and Sweden, and interest was gathering in the US. Everything was rosy, then, well, technology just passed Powerline by. ADSL was the straw that broke this camel’s back, but the writing was already on the wall when the ITU ratified the latest bunch of wireless communications standards several months ago. Bandwidth is on the increase by default, as technologies for mainstream bandwidth provision (for example through phone lines and across the ether) continue to develop. Such advances render the development of new technologies to provide additional bandwidth, exploiting niches such as powerlines, largely unnecessary.
Digital Powerline, in any case, was facing an uphill media battle. Fears surrounded the emissions created due to the very high voltages required to transmit the signal over the high tension wires. Although this was not the direct cause of the decision, which was made for cost reasons, a spokesman for United Utilities confirmed that this was a factor.
Nor.Web are currently in discussions with their existing customers as to the options for the future. It is possible that the technology will find a buyer, who may well be able to make money out of it. If this is not possible, the demise of Digital Powerline is unlikely to cause more than a ripple in the Internet pool.
(First published 8 September 1999)
09-08 – NT Pragmatists strip Microsoft naked
NT Pragmatists strip Microsoft naked
It’s official – the shiny outer surface of Microsoft’s marketing machine is no longer the basis on which organisations are formulating their IT strategies. Thank goodness for that.
According to a survey by the NT Forum, 40% of respondents are already beta testing Windows 2000, yet only 30% have any plans to upgrade to Windows 2000 in the next twelve months, compared with 80% last year. Do I need to repeat that, or did you catch the significance already? It is easy to speculate on what has caused this apparently startling turnaround. Is it Y2K looming higher than expected in IT managers’ eyes? Is it that the world is to adopt Linux wholesale, no longer needing the costly alternatives? Whilst the first question may be a factor, our sources tell us a far simpler story.
Let’s work back from the answer, which may be stated quite simply: there is no one answer. Despite Microsoft’s attempts to tout NT as the only operating system that we would ever need, the reality always was, and shall always remain, quite different. Back in 1997, Microsoft missed its chance to kill off Unix once and for all by demonstrating the scalability of NT. The audience were expectant: Wintel had already trounced the pretenders to the desktop throne and was widely expected to do the same to the server. Indeed, a significant number of IT strategies at the time were flying the Windows flag wherever they possibly could, and were waiting patiently for Microsoft to take over the rest of the world. However, in the end NT scalability failed to convince the sceptics, the media and even the end users. This was the beginning of the end for Microsoft’s buy once, use anywhere mantra.
More recently, the Internet and eCommerce have uplifted the fortunes of mainframe and high-end server manufacturers (see the related story on IT-Director.com). The mainframe market, which refused from the outset to just roll over and die, is being reborn. It is not currently a market which gives NT a look in. What is now clear is that we will need mainframes for the highest availability and performance, and we will need mid-range servers for more affordable back-end processing. The scale goes all the way down to embedded software devices such as mobile phones and PDAs. Each has its own specific needs and hence requires the most appropriate operating system support. The dream, held by IT managers and promoted by Microsoft, of a single OS running everywhere, has become sadly jaded. The nine versions touted for Windows 2000 (as reported here) are testament to this fact.
There is no one answer, and it is becoming increasingly difficult to throw away existing technologies and replace them with the newest, best thing. IT managers are becoming masters of the morass – pragmatists for whom the goal is to integrate the best of the old with the best of the new. Upgrading only when necessary, replacing when budgets allow, these folk have evolved to listen more to the needs of their business than the hype of the vendors. Microsoft are seen for what they are: a software company with some good products which can fit into the overall IT portfolio of an organisation. And that’s just the way it should be – the question now is how long it will be before the Seattle giant accepts its place.
(First published 8 September 1999)
09-09 – Apple at the gates of dawn
Apple at the gates of dawn
I like Apple. Always have, but then that’s because the technology has always been more important to me than the marketing. Apple’s demise hit hard, particularly when they had such good products. But now they are coming back.
It took Windows a good ten years to catch up with the Macintosh in terms of usability, but finally, with Windows 95, it did. (OK, there’s a debate we could have there, but you get the point.) Unfortunately, in the meantime some particularly foolish management decisions priced the unclonable Mac out of a market which was growing like billy-oh. The company has suffered for its foolishness, but it is fair to say that it can leave such errors firmly in the past. The company is re-emerging – not as a gorilla that vandalises everything in its path, nor as a niche supplier with little to offer, but as a player in the game, with sufficient following and reputation to give customers confidence. Its new products are sexy and powerful, the software is there, the innovation is there and, what is more, there is a new track record developing which augurs well for the future.
For these and other reasons, Apple has seen its share price increase by 40% over the past month alone. Advance orders of its new laptop, the iBook, are running at 140,000 already and there are plans to launch it in Japan within the next month or so. Can nothing go wrong for the reborn company?
Of course it can, and it would be folly to suggest otherwise. However, this market is as much about keeping one step ahead of the game as anything, and for the moment, Apple appear to be doing exactly that.
(First published 9 September 1999)
09-09 – Argos misses the decimal point
Argos misses the decimal point
It looks like Argos, the UK retail chain, may be sailing into murky electronic waters over the next few months. A few days ago, a mistyped catalogue entry on the Argos website led to hundreds of customers ordering Sony televisions at three pounds a pop. The company are now refusing to honour the orders, as (in their own words) this was an obvious error. The question is, are they liable?
This question would be difficult enough to answer even without the complication of the Internet, due to the complexities of commercial and contract law. The underlying model of the retail sale is, however, relatively simple and it is this which will be used as the basis for both sides of the case. According to Taylor Walton, a legal firm which specialises in eCommerce law, there are three elements to a retail sale. The first is the “invitation to treat,” which corresponds to an entry in a catalogue or a price on a shelf. The second is the acceptance of a payment. If a payment is accepted, this leads to the third element, namely the completed contract. In other words, once a payment has been accepted, for example through a credit card transaction, then the company which made the initial offer is liable to deliver the goods.
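By way of illustration only, that flow can be sketched as a tiny state machine in Python – a toy model of the argument as described above, not legal advice and not anything proposed by Taylor Walton:

# Toy model of the three-element retail flow described above:
# invitation to treat -> payment accepted -> completed contract.
from enum import Enum, auto

class SaleStage(Enum):
    INVITATION_TO_TREAT = auto()  # catalogue entry, shelf price, web page
    PAYMENT_ACCEPTED = auto()     # e.g. a credit card transaction clears
    CONTRACT_COMPLETE = auto()    # the retailer is now liable to deliver

def advance(stage: SaleStage, payment_cleared: bool) -> SaleStage:
    if stage is SaleStage.INVITATION_TO_TREAT and payment_cleared:
        return SaleStage.PAYMENT_ACCEPTED
    if stage is SaleStage.PAYMENT_ACCEPTED:
        # On this model, accepting payment completes the contract,
        # mispriced television or not.
        return SaleStage.CONTRACT_COMPLETE
    return stage

stage = advance(SaleStage.INVITATION_TO_TREAT, payment_cleared=True)
print(advance(stage, payment_cleared=True))  # SaleStage.CONTRACT_COMPLETE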
Reports in the press suggest that Argos is to fight any suits (and indeed, threats to sue have already been made), by saying that orders were made but would not be accepted. Sure enough, a quick browse of the Argos web site shows that customers are invited to make “orders”. However, once a customer has compiled an order, they are then requested to make a credit card payment and hit the “purchase” button. Hmm, sounds like a sale to me, but who am I to say? It is probably best to leave this to the legal experts, who stand to make a pretty penny out of this debate whatever the outcome.
Unfortunately for Argos, precedent would suggest that they stand to lose. Many in the UK will remember the Hoover competition which offered free holidays on the purchase of a washing machine. The company was forced to honour its pledges, almost bringing itself to its knees in the process. Supermarkets, too, have had their fair share of problems due to misadvertising prices. The Walkling case, for example, resulted in a family claiming (and receiving) £30,000 worth of goods. This was due to Sainsbury’s policy of letting customers keep the goods and get their money back if they found that an item was mis-priced. Another example is of an enterprising customer buying £5000 worth of beer at Tesco, then going directly to the customer service counter and asking for their money back – and getting it. The service representative is reported to have commented that the lucky customer “didn’t do as well as a chap in our Stratford store,” or words to that effect. It is understood that Sainsbury’s have now changed this policy, but Tesco, to date, have not.
Whatever the outcome, it is clear that the Argos case will be watched with interest from around the globe. One thing it illustrates is that on the Internet, nothing happens by halves – the hundreds of TVs were “sold” in a matter of hours, before Argos corrected the error. It could be said that all publicity is good publicity – even articles such as this will encourage prospective customers to visit the site. Indeed, I would wager that even now, bounty hunters are scouring the online catalogues for typos.
(First published 9 September 1999)
09-09 – Dell gets the Acquisition bug
Dell gets the Acquisition bug
Dell made its first acquisition yesterday, which clearly marks the beginning of something big. Dell’s purchase of ConvergeNet for $340M is the first step on a long road for Dell, which will see the company take on the storage market and, very likely, come away with a sizable chunk.
At the moment, Dell is saying that the move does not affect its existing partnerships. For example, the company is quoted as saying that there is “no impact in existing relationships” with IBM and Data General, who supply disk drives and storage systems respectively. Similarly, Network Appliance, whose storage appliances are rebadged by Dell and sold on, does not foresee any direct impact at this stage. As pointed out by a source from within the company, ConvergeNet are a SAN supplier (or would be, if they had released any products yet) whereas NetApp are a Network Attached Storage (NAS) manufacturer. The two models can live side by side in harmony, said the source. Fair point.
However it is unlikely that Dell will stop here. Dell already works with tens of partners, of whom ConvergeNet was one. Further acquisitions will no doubt be announced, each, knowing Dell, enabling the company to take new technologies and skills on board without causing a major impact to their operational capabilities. No doubt, they will have learned a number of lessons from watching Compaq’s purchase of Tandem and Digital – ironically, Compaq is probably Dell’s major target for competition in the storage space, just as with the PC market where Compaq will still be smarting from losing the top spot in US PC sales to its rival.
This organic approach to growth will be coupled with Dell’s world-beating sales engine, not to mention their brand. Dell will win market share in the same way that they did in the PC arena, by putting service at the top of the list and by creating a supply model which, with its rapid turnaround and low overheads, is the envy of the market. Sooner rather than later, Dell will find itself in direct competition with its suppliers and partners, with the result that some relationships will have to change. But this is unlikely to distract the new Goliath as it drives towards its goals.
(First published 9 September 1999)
09-09 – Microsoft’s Cool shades preach Java doom
Microsoft’s Cool shades preach Java doom
Unsure how to present this, I’ll start with the facts as we have them. On The Register, it was noted that Microsoft are to release a programming language called Cool, whose aim (it is reputed) is to kill Java. Questions abound – what does it entail, will it succeed, and ultimately, do we mind?
Cool is not a language, rather it is a “series of extensions to C++”. This is the nub of the matter, as Java is ostensibly a simplification of C++. As an ex-C++ programmer I can say that its curtailment is probably a safer bet than its extension. It was always astonishing to see how convoluted C code could become (I wonder if the convoluted code competition is still running?) That is, of course, until C++ came along. If C could be astonishing, C++ could be truly unfathomable. Java has been described as “C++ without the hard bits” – this is only part of the story, of course, but it paints the picture. Despite this, of course, there is plenty of extremely well written C++ around. Java has failed, so far, to push it off its perch, and it is likely (look at COBOL) that both languages will be here to stay.
So – what are Microsoft doing? Provision of new facilities to the C++ community can be seen as a good thing. The language, by itself, is nowhere near as well supported as Java – just look at the vast range of class libraries now available, for free, for the latter. There is room for some catch-up, and Microsoft are probably right in trying to fill the gap. But Java-killer? Unlikely. There is far too much momentum behind the language. Just look at IBM’s policy of including a JVM with every platform they ship, mainframe to mini. Also, Java set out to achieve a particular end – to meet the needs of the Web. Okay, this is paraphrasing, but Java’s design enables it to be run as applets or applications on devices from phones to digital TVs. Things have moved on, of course, such as the Enterprise Java Bean specification which is as much an IT architecture as a language construct. Users now see Java as strategic – just look at the results of our recent Java survey to see this.
The other question, of course, is to ask whether we mind. Microsoft wants to be rid of Java for lots of reasons, none of them technical. We could be affronted by this, or we could remember that SUN’s initial launch of Java was set to remove Windows from its pedestal. In other words, this could be seen as a cynical attempt by a slick marketing machine to replace the cynical attempt by another slick marketing machine to replace a cynical attempt… you get the picture. At the end of the day, though, marketing will not decide this. Java is not going to die, and that’s final.
(First published 9 September 1999)
09-09 – Sega’s Dreamcast – millstone or milestone?
Sega’s Dreamcast – millstone or milestone?
Despite Sega’s first-to-the-market position for its next generation games console, pipping both Sony and Nintendo to the post, the forecast is not so good for the games company. But, hang on – what’s this got to do with IT?
The answer is simple. These machines are an indicator of the shape of things to come for consumer use of technology. They have two key features: TV access, and Internet access. They also have a price tag which is much more within the bounds of the mass market.
There’s been a major, but little talked about, problem with the home PC. The question has been, where to put it? Chances are, it will be in the spare bedroom, dining room, cubby hole under the stairs or office. However, a large proportion of the predicted uses of PCs are social activities – home shopping, chat, direct video or audio, and, of course, games. The games console bridges several worlds. It is accepted by the young, it is allowed in the living room and it is understandable by all. No boot failures or install problems plague it, it is reliable, resilient and quiet. The games market may well choose to shut out Sega, however successful it is in the short term. However, it may also close the door on the PC mass market. PCs have a place – data storage, hobbies, word processing. But as for their mass market potential, the future of the PC is far from sure.
(First published 9 September 1999)
09-09 – Transatlantic porn test case gets result – sort of
Transatlantic porn test case gets result – sort of
Anti-porn crusaders will be breathing a sigh of relief today, following the landmark judgement in a British court concerning a UK resident running a number of US-based porn sites from his home in Surrey. The case was part victory, part disappointment, as the US party involved appears to have got off scot-free, and two of the sites are still up, running and based in Costa Rica.
This case serves to illustrate the state of the world legal system concerning Internet porn. In school terms, it’s a case of “not bad, could do better” – clearly huge strides have already been made but many loopholes still exist. For the moment, whilst the partial victory is better than nothing, it should only serve to increase the efforts required in all countries around the globe, to set global minimum standards and enable the interworking that is necessary to deal with the crimes. It is unlikely that the porn industry will ever be stamped out, but at the moment, the Internet offers an open invitation to anyone wishing to participate in this unfortunate trade.
(First published 9 September 1999)
09-10 – 9-9-99 – Reasons to be cheerful, or not
9-9-99 – Reasons to be cheerful, or not
Last Thursday came, with neither bang nor whimper. The press have jumped on the single reported event, namely a spreadsheet application being run in Tasmania. So – was that it?
I still remember the weeks leading up to the start of 1984. The sense of foreboding, that Big Brother really would set up shop and start rewriting history, was everywhere. Funnily enough, the feeling continued way into the years which followed, whilst at the same time there was a sense of loss, at disasters failing to happen. Last Thursday was like that – it came and it went, with very little to show, leaving the more paranoid amongst us wondering whether we were robbed.
Okay, nothing happened. We should still be vigilant, but then so should we always be – a crash is a crash, after all. The debate centred largely around whether there really was an issue – after all, argued some, September 9, 1999 can be written in such a variety of ways. Maybe they were right or maybe they got lucky, but who cares? No big bang, no domino effect, cause to celebrate or at the very least get a decent night’s sleep.
Now then, let’s get on with the real issue. 9-9-99 enabled plenty of companies and government agencies to test out their Y2K readiness plans (the results of which served to fill the press column inches reserved for any computer failures). Most plans succeeded, some failed (notably in the US Coast Guard – what is it about coast guards and dates?). The 9-9-99 problem may turn out to be fictional, but Y2K will not.
We know several things about Y2K for a fact: the first is that it is a real problem that some computer systems use two digits to store the date rather than four. We also know that substantial money has been spent trying to resolve the problems that it might cause. Most importantly, we know from our own sources that a number of so-called “compliant” systems, when retested, have reported an average of 30 non-compliance errors per 1,000,000 lines of code, most of which result in incorrect data being written to disk. This isn’t the result of an over-active imagination or a news shortage, it’s there in black and white.
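To be concrete about what a two-digit year costs you, here is a toy Python sketch (illustrative arithmetic only, not taken from any audited system):

# Illustrative only: date arithmetic on two-digit years breaks at the century boundary.
def years_elapsed_2digit(start_yy: int, end_yy: int) -> int:
    # legacy-style arithmetic on a two-digit year field
    return end_yy - start_yy

# An account opened in 1995 ("95"), checked in January 2000 ("00"):
print(years_elapsed_2digit(95, 0))  # -95, not 5: interest, ages and expiry dates all go wrong

def years_elapsed_4digit(start: int, end: int) -> int:
    # the (expensive) fix: widen the field, or apply windowing logic
    return end - start

print(years_elapsed_4digit(1995, 2000))  # 5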
In the words of Tom Hanks, “We have a problem.” The only thing we do not know about Y2K is how big the impact will be.
(First published 10 September 1999)
09-10 – AmEx builds Blue bridge over Web payment waters
AmEx builds Blue bridge over Web payment waters
AmEx are the latest in line to attempt to tackle a fundamental weakness in the world of eCommerce – that of making payments. Will they succeed?
Currently, Internet credit card transactions are seen by the consumer as being insecure. This perception may be changing (albeit slowly), and may be more than a little misplaced relative to, say, giving card details over the phone. However it is still one of the major reasons cited for slowing the uptake of eCommerce. The other, equally important reason is that credit card payments are expensive, effectively setting a minimum purchase value of about £5. A raft of initiatives has been attempted, such as microcharging through eCash. To date, however, none has won the hearts and minds of the Web population.
AmEx have partnered with a company called eCharge to launch a hybrid service known as Blue, providing both smartcard and credit card facilities and Internet transaction mechanisms. With regard to the latter, the service is said to be more secure than traditional credit card mechanisms; it is also notably cheaper to run, per transaction, than its credit card equivalent, with merchants actually given a discount for using it on the Web.
What may set this trial apart is that it is not a trial. This service is a launch, available to consumers across the US. It is far more likely to have mass market appeal as it is aimed at the mass market; also, it is instantly attractive to Web retailers through its reduced cost. Unfortunately, we could not find any sites actually using the new standard on the Internet. If you build it, they will come, maybe.
(First published 10 September 1999)
09-10 – HP bring DVD+RW a step closer
HP bring DVD+RW a step closer
DVD may be taking off slowly in Europe, but it is still taking off. The reason may well be this simple – what do we do with all that space?
Hewlett Packard has announced its DVD+RW drive, which will be available in the UK from the end of November. The price is not yet announced, but the SCSI device is expected to come on the market at about £500. According to HP, the device will be able to read existing DVD-ROM disks, but current DVD players will not be able to read disks written by the DVD+RW drive. This is not due to technology incompatibilities, rather it is a software issue. HP are currently in discussions with other DVD-ROM manufacturers to see how the situation can be resolved.
Despite the continued improvements in these technologies (the spindle speed is on the up, for example), DVD has so far failed to take the European market by storm. A recent survey from analyst firm Strategy Analytics showed the European take-up of DVD to be lagging behind the US, with 4% of European households expected to own a DVD player, compared to 11% in the US. Why is this?
DVD disks can store 3Gb of data on each side – that’s a total of 6Gb, giving each disk the capacity of 60 Zip disks. The potential for DVD is huge, both in the corporate markets and for consumers. All it lacks is the Killer App. CD-ROM is largely adequate for most uses, and while I would be delighted to replace all those MSDN disks with a single platter, it is not that much of a problem to me. Similarly, at home, most European computer and video users are still content with CD-ROM and VHS. It is only a matter of time, of course, before our information appetites overwhelm the capacity of the lowly CD, but in the meantime we are quite content to wait and let the prices drop a little.
There remains the ongoing debate between DVD-RAM and DVD+RW. These are competing, incompatible technologies, but there is an inevitable market for DVD+RW as its disks will be (software glitches aside) compatible with existing DVD-ROM drives.
So – no big bang for DVD in Europe, not just yet. Despite this, the future seems bright for DVD – the market is expected to grow exponentially over the next three years. HP is leading the pack with its products so, given its good reputation in the removable storage arena, the company seems well placed to deal with the rush when it comes.
(First published 10 September 1999)
09-10 – W2K gets first UK deployment
W2K gets first UK deployment
Microsoft announced at the end of last week that the University of Leicester had begun a deployment of Windows 2000, to replace its Novell Netware installation. Although clearly a showcase piece, it should still be watched with interest.
The network is to be rolled out this week to an initial 500 PCs. By Autumn next year, this number will be increased to 2,000 PCs and up to 7,000 remote sessions. Seven Compaq servers will be used: four main servers, an SMS server, an Exchange server and an SQL server. The version of Windows 2000 to be rolled out will be Release Candidate 1. Our advice to all comers has been to pilot the operating system before rolling out, and to watch what others are doing: the Leicester project, while still not enterprise-scale, will doubtless provide some useful insights (not least the merits of rolling out pre-release software). Of particular note is the new security functionality – we would expect the student population to test this to the full.
For the monthly update, see The University of Leicester Windows 2000 web site.
(First published 10 September 1999)
09-10 – Waitrose ISP – now we are truly free
Waitrose ISP – now we are truly free
It is unsurprising that another UK supermarket has joined the free ISP fray. It is interesting that Waitrose are to give phone revenues to charity. What is unique, however, is that Waitrose are not to charge for support.
The UK free ISP market is already burgeoning. Tesco, Virgin, W.H. Smith and a host of others (from newspapers to football clubs) quickly followed a trend started by FreeServe, the now-floated spin-off of electronics retailer Dixons. Whilst free ISPs have been slated for providing a lower quality of service than their paid-for equivalents, the truth is that ISP quality varies widely across the board, paid for or otherwise. It is probably true to say that, in these early days of the Web, people are more accepting of non-optimal service than they will be in ten years’ time. The fact remains that for a substantial proportion of the UK population, the quality of service that free ISPs can provide is good enough.
The major downside of all these services is that, to date, they have charged for support at a premium rate, for example 50 pence per minute from tesco.net. Now this is nothing personal, as I have worked on both sides of the helpdesk fence – the support role is thankless, problems are often difficult to solve and, to cap it all, customers can be downright rude. From a customer perspective these services can be inefficient and bureaucratic, which wouldn’t be such a problem if the clock wasn’t ticking at a penny a second. To take a recent example (which, honestly, I wouldn’t mention if this Waitrose article hadn’t come along), I called the support desk of a major supermarket free ISP to inform them that I was receiving the email of another subscriber. Following a good five minutes on the phone, I found the procedure for dealing with this situation. Being a community-minded eCitizen I wouldn’t have minded spending the time, but I was riled at being charged at least three quid to help them sort out somebody else’s problems. Get the picture? If support is free, fair enough, but if we are paying then we expect our absolute money’s worth. Don’t we?
Waitrose are offering an email bureau service, which provides for “unlimited email addresses, accessible anywhere in the world” according to the press release. This service, which is still being developed, will be unique in that it allows customers to access their email either from a POP-3 compatible client (such as Microsoft Outlook), or directly from the Internet à la Yahoo or Hotmail. This is a major differentiator which allows users to get the best of all worlds for Internet access. A major concern to JKD, the company responsible for developing the Web site, and ITG, the service provider, is the recent media storm about Microsoft’s Hotmail service: every effort is being made to ensure that such back doors are not left in Waitrose’s email facilities.
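For readers unfamiliar with the POP-3 option mentioned above, the following is a minimal sketch of what any POP-3 client does when it collects mail. The server name and credentials are placeholders, not details of the Waitrose service.

    import poplib

    # Minimal sketch of what a POP-3 mail client does when it collects mail.
    # The server name and credentials below are placeholders, not details of
    # any real service.
    def fetch_subject_lines(server, user, password):
        mailbox = poplib.POP3(server)          # connect on the standard POP-3 port
        try:
            mailbox.user(user)
            mailbox.pass_(password)
            count, _size = mailbox.stat()      # number of messages waiting
            for i in range(1, count + 1):
                # retr() returns (response, lines, octets); scan the headers
                _resp, lines, _octets = mailbox.retr(i)
                for line in lines:
                    if line.lower().startswith(b"subject:"):
                        print(line.decode("utf-8", errors="replace"))
                        break
        finally:
            mailbox.quit()

    # fetch_subject_lines("pop.example.net", "someone", "secret")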
The target audience for the service is the typical Waitrose/John Lewis customer, namely the broadsheet rather than the tabloid reader. Several reasons are touted for this – firstly, it is in keeping with the middle-to-upper class Waitrose branding. Secondly, the keenest users of the Web tend to be from “the thinking classes”. These reasons are only valid for the site content – it is very likely that the thought of free, unlimited support will have a much wider appeal.
The gesture to charity is an interesting one. It will be good for the image of the site, as well as giving customers the feeling that they are giving to good causes, without them having to lift a finger (this will probably appeal to their target audience). It is also good for the charities, of course! But where do Waitrose expect to make their money? This is where things get really interesting.
Waitrose is a supermarket launching onto the Web. It is setting up as an ISP in order to attract prospective customers to its site. Ultimately, though, its revenues will come from its capability to sell its own products via the Web. This, like Iceland’s announcement a few days ago (which included free delivery of goods bought through its site), demonstrates the sea change that is occurring in the supermarket-ISP arena in the UK. The battle is joined: over the next months, expect to see Waitrose, Tesco, Sainsbury, ASDA and Safeway leap-frogging each other with a host of new services, not to get users signed up to their ISP but to get real customers buying real products.
(First published 10 September 1999)
09-21 – IT Directors – hold your heads high
IT Directors – hold your heads high
According to a survey published in yesterday’s FT, IT Directors and CIOs have more reason today than ever to feel valued. The traditional stereotype is of a person thrust into the job of directing corporate IT, faced with huge pressure to deliver solutions whilst lacking sufficient knowledge of either the business or the technologies required to support it. This would be coupled with having only limited authority to achieve any objectives, let alone strategic ones. According to the survey, published jointly by the London Business School and Egon Zehnder International, the IT Director role has become “one of the most demanding in the corporation,” with at least half of the working week spent on non-IT activities such as defining business strategy. Hurrah.
We are delighted by this development and we will change our stereotypes accordingly. We do have one minor concern, though: the result is such a direct contradiction of past findings that it seems too good to be true. There’s one other tiny, tiny point – we still know many IT Directors that are harangued by the business, viewed as without teeth and, privately, rather worried that the pace of technology is far faster than they can keep up with. So – what is the full story? We await the full results of the survey, which will be published by the time you read this. In the meantime, we would be delighted for any feedback you might have, anonymous or otherwise, on the condition of the IT Director. Is it all so rosy? If you are an IT Director, or you work with or for one, do please let us know.
(First published 21 September 1999)
09-21 – Oracle, HP and Siebel – picking up the CRuMbs?
Oracle, HP and Siebel – picking up the CRuMbs?
CRM is, frankly, a triumph of marketing over substance. The danger lies ahead for the main players, who may have forgotten to worry about the potential damage they could do to their reputations, let alone those of their customers.
Why so? Let’s clear some cobwebs which may be obscuring the true CRM message. CRM is the blanket term applied to “front office software”, i.e. software used to develop and manage a company’s dealings with the people it needs to interact with. This definition is dubiously vague, a fact which has been exploited by vendors of everything from sales automation and help desk to marketing and call centre software packages. The war of words is ongoing, as the big players such as Oracle and Siebel slug it out to win market share. Bizarrely, it isn’t the products that are being pitched head to head, it is the marketing messages. As it happens, the word is (from the Yankee Group) that the punch-fest is cutting little ice with the end user, who is making decisions based on what the package offers. This makes sense – we are not talking about a product with reasonably standard interfaces and functionality, but an array of very different products which must be judged on their own merits.
On to the second point. The danger of selling on hype alone is that the substance will most likely disappoint. We would strongly advise anyone considering a CRM product to define the requirement very carefully before making any decision. This is obvious motherhood, but is worth re-emphasising as CRM product lines have come into existence rather rapidly following a bout of market making from analysts and vendors alike. ERP packages, on the contrary, had a more solid foundation constructed before the marketeers got hold of them (and even then, they were not painless to implement). A further worry concerns the quality of the products. In the rush to gain mind share before market share, it is possible that products will be cobbled together using half-suitable components and some window dressing on the user interface. Information has come our way to support this hypothesis: we can only advise decision-makers not to rush into any purchase before both functionality and product quality have been verified as adequate.
After all, we are not just talking about the reputations of the vendors. CRM automates the interface to the customer, so any failures will be starkly visible outside the user organisation. As we are advising against hype, neither do we want to over-hype the risks. All we would say is look before you leap: know what you want and ensure that you will get it, before you sign the CRM bottom line.
(First published 21 September 1999)
09-21 – SNIA Comes to Europe
SNIA Comes to Europe
The spirit of co-opetition was well and truly alive last week at the inaugural International Membership Development Conference of the Storage Networking Industry Association. With a title like that, is it any wonder we talk in acronyms? So what’s it all about?
The goal of the SNIA is “to develop specifications which … form the basis of standards to enable compatibility among storage networking products”. Essentially, this is about software standards, not hardware: for a start, the SNIA wants to remain device-agnostic, not wanting to tread on the toes of existing hardware standardisation efforts such as the Fibre Channel Alliance. Secondly, hardware technologies are moving very fast: the SNIA sees its role as defining a device-independent layer which will continue to be valid well into the future. In practice, this means that the SNIA is running a number of initiatives, including:
- working groups are responsible for defining specifications in areas such as device management and (storage network based) data management
- conferences and marketing activities are planned to enable the storage networking message to be broadcast to the wider community
- an interoperability lab is to be developed, which will enable manufacturers to test their equipment and their software against certification criteria (SNIA-marking, perhaps)
Overall, the aim is that devices from different manufacturers will not only be plug-compatible, but also they will be able to communicate with each other and be managed centrally. For example, a goal scenario would be that two storage devices from different vendors could be used to provide storage for a single application such as SAP.
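To make the idea of a device-independent management layer concrete, here is a minimal sketch of how devices from different manufacturers might present a common face to a single management tool. The interface and vendor names are invented purely for illustration and are not based on any actual SNIA specification.

    from abc import ABC, abstractmethod

    # Illustrative only: the interface and vendor names are invented and do not
    # reflect any actual SNIA specification.
    class ManagedStorageDevice(ABC):
        @abstractmethod
        def capacity_gb(self): ...

        @abstractmethod
        def free_gb(self): ...

        @abstractmethod
        def health(self): ...

    class VendorABox(ManagedStorageDevice):
        def capacity_gb(self): return 108.0
        def free_gb(self): return 36.5
        def health(self): return "OK"

    class VendorBArray(ManagedStorageDevice):
        def capacity_gb(self): return 500.0
        def free_gb(self): return 120.0
        def health(self): return "DEGRADED"

    def central_report(devices):
        # A single console can treat every device identically, whoever made it,
        # because all of them implement the same interface.
        for d in devices:
            print(f"{type(d).__name__}: {d.free_gb():.1f} of {d.capacity_gb():.1f} GB free, {d.health()}")

    central_report([VendorABox(), VendorBArray()])

The point of the sketch is simply that the management tool never needs to know which vendor built the box; that is the kind of compatibility the specifications are intended to enable.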
What is interesting about the work of the SNIA is that it has been set up to solve specific issues that have arisen through the development of networked storage, both SAN and NAS included. Rather than being driven by the marketing departments or the standards bodies, both of whom have had a tendency to deal with possibility rather than reality, the SNIA has its work cut out to solve problems, for example of interoperability, which exist today for managers of heterogeneous storage networking devices. The downside is, of course, that the wider community will have to wait before any SNIA-marked products exist. The first results are starting to come out of the working groups – common interface modules and SNMP MIBs, for example – but it will be a good six months before we see anything turned into lab-tested products. There is, of course, a sense of urgency coming from the SNIA camp so we can hope for some concrete products to be ready for the post-Y2K thaw.
The SNIA already boasts a powerful membership. Board member companies include Veritas, EMC, StorageTek, Sun, HP, Quantum, Compaq, Dell, IBM and Microsoft. Also, the organisation, which has traditionally gained income from memberships, now has serious sponsorship money behind it and claims to have “the mindshare of the industry”, all of which stands it in good stead for the future. The SNIA appears to have its act together and its willingness to have an international (or global) presence is to be welcomed, as it has in the past been seen as a West Coast club. Prospective members should visit the all-new SNIA web site.
(First published 21 September 1999)
09-24 – Free domain names – a quick thrill
Free domain names – a quick thrill
Two significant changes are occurring in the world of domain names. The net result is that the cost of top level domain name registration will be next to nothing. Exciting news. Strange, though, that within a few years domain names will become a thing of the past.
What are these changes? The first is that a West Coast startup known as iDirections is to offer domain names for free. The basic cost for a registration company is currently down to $18: this cost, and the cost of iDirections’ own infrastructure, will be covered by advertising and promotions, presumably on the website of the domain name registrant. The second development is that Network Solutions, keeper of the keys to the central register of domain names, expect to have installed software which will enable direct access to its database: this could mean that the registration cost itself will drop to zero. One way or another, the chances are that domain name registration will be free by the end of the year.
This is very good news for all those that wanted a Web site with a distinctive domain name. It will not only benefit smaller businesses (for whom even $70, the going rate for a domain name for the punter, seems a bit steep), but also individuals who will be able to establish themselves with a clear Web presence. The net effect (no pun intended) will be that all the remaining nouns in the English dictionary, plus a significant number of non-English words, will be mopped up.
There are two reasons why this might matter less than it initially appears to. These are intelligent agents and directory facilities. First, let’s talk about intelligent agents. The fact is that dot com names have never been particularly user-friendly. It is a wonder to me why the IETF or the W3C did not bring out a more natural language version of Web site naming. For example, wouldn’t it be preferable to type “amazon books” in the browser, rather than “amazon.com”? You might not think so, but my grandmother would. Domain naming is a temporary aberration, which will go away as soon as there is something better. That “better thing” will probably be the use of intelligent agents.
Have you tried Google? I can recommend that you do. If you have forgotten a URL, then you can go to Google.com (excuse the anachronism), type the name of a company and hit “I’m feeling lucky”. The chances are that Google will take you to the right site. Now, let’s think about this. What, for example, is the domain name of Hewlett Packard? www.hp.com – easy one. What about CGU Insurance? That’s www.cgugroup.com. BBC? www.bbc.co.uk. Get the picture? Things aren’t always as obvious as they might seem. Google can help, but it is the “middle man” – wouldn’t it be preferable to avoid the extra step? That’s where the most basic of intelligent agents would come in, enhancing the browser by providing an implicit search facility. Propeller heads may prefer the dot com names, and (to be sure) a number of businesses are dot com named from the word go, but it is clear that the general population would prefer to stick with the names they know. More advanced intelligent agents are expected (once the XML revolution kicks in) to be able to search on products, amongst other things (for example, delivery costs), so just having the viagra.com domain will not be enough to guarantee business.
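By way of illustration, the most basic such agent could be sketched as below. The site mappings are examples only, and the fallback simply builds an ordinary search-engine query rather than guessing a .com name.

    from urllib.parse import quote_plus

    # Toy sketch of an "implicit search" agent: names the agent already knows
    # resolve directly, anything else falls back to an ordinary search query.
    # The mappings are illustrative examples, not a real directory service.
    KNOWN_SITES = {
        "hewlett packard": "http://www.hp.com",
        "cgu insurance": "http://www.cgugroup.com",
        "bbc": "http://www.bbc.co.uk",
    }

    def resolve(name):
        key = name.strip().lower()
        if key in KNOWN_SITES:
            return KNOWN_SITES[key]
        # Rather than guessing a .com name, hand the phrase to a search engine.
        return "http://www.google.com/search?q=" + quote_plus(name)

    for query in ("BBC", "amazon books"):
        print(query, "->", resolve(query))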
The second reason is that of directory facilities. Having “a good noun”.com as your name may well be a way to guarantee business, as (it is well documented) the average web user is just as likely to guess a URL as try to look for it. Hence travel.com, drugstore.com and music.com are all benefiting. In the longer term, though (and again, this will be a product of XML), punters are just as likely to say “give me all the travel sites that sell package holidays to the south of France”. The high premium on the domain name will quickly fade, hopefully not before the site owners have made a pretty penny.
Clearly, the gist of this argument is speculative, however sites like Google and AskJeeves are already demonstrating that natural language questioning is becoming an option and that the right URL doesn’t have to be the key to the kingdom. Companies profiting from domain name registration, or the domain names themselves, should make their hay while the sun is still shining.
(First published 24 September 1999)
09-24 – Software pricing to plummet – is Linus right?
Software pricing to plummet – is Linus right?
Linus Torvalds went on record at the end of last week saying that, within 3 years, software prices will have plummeted. So – is he right? The answer, it would seem, lies in the value which that software represents.
Value, so goes the adage, equates to benefits minus costs. Traditionally the main use of computing power has been seen as to reduce costs. More recently, the advent of the Web has enabled businesses to dramatically increase benefits. By itself, IT is pure cost, so it must be pitched against the resulting cost savings or the increased business that it enables.
Given this, the charges for software, hardware and services already vary significantly. In the City of London, for example, the consultancy fees for a newly-launched software package were anywhere between £2,000 and £5,000 per day. Why? Because the package, once installed, could save a hundred times that amount: every day that the package sat in its box was money down the drain. The premium on IT-related goods is often well worth paying, or at least it has been to date. The question begs to be answered – how long will these good times roll for the IT industry?
We are still near the beginning of the electronic age. Every year, brand new technologies come with a single guarantee: that they will change the way we work. Telephones, radio and television, computers, fax machines, graphical terminals, email, the World Wide Web, mobile communications, each has played its part. We recognise the advantages of each new generation of products, we purchase and participate, we move up to the next rung of the technological ladder. Despite all the advances, though, how far have we come? Linux, for example, is based on operating system principles and a language which were industry standard in 1969. As for software, Object Oriented languages have been around for at least that long. Package software still has some life in it: the ERP (back office) market is now largely sewn up, so suppliers are turning their attention to the front office with CRM. Couple this with supply chain management and business intelligence, throw in platform support, and the question begs – what business problems remain to be solved? Once this stage is reached, the advantages will be gleaned from the relative qualities of each implementation.
There are still some major advances to be had from IT. Over the horizon is pervasive wireless networking, followed by terabyte solid-state memory. Software to manage the vast amounts of information flowing through the ether will keep a premium for as long as the quantity of information remains unmanageable. There will always be a business benefit in doing things better than the competition – just look at eBay’s recent failure to cope with the volume of transactions, or Hotmail’s security problems. For the medium term, IT in general, and software in particular, will hold its price where it can directly guarantee business advantage.
At the same time, the infrastructure tide is rising and things below the water level tend to have a fraction of the worth of the applications they support. For example, it is not a coincidence that most open source packages, such as compilers, operating systems and web servers, deal with infrastructure issues. It is also unsurprising that Microsoft should give away Internet Explorer 5, or that CA should bundle its Unicenter network management framework. The tide will continue to rise: the products in this category will come to include databases, document management facilities and workflow engines. Today we are seeing office suites and programming environments become freeware: already a user can equip an office with a full range of IT facilities without having to spend a penny on software. Products are given away for all kinds of reasons, both commercial and emotional: for StarOffice, for example, it is probably both. Once a product has been given away, it cannot be reclaimed; if the product is already adequate, it undercuts all similar products now and in the future.
Fair to say, then, that the cost of certain kinds of software will plummet. However, do not be taken in: this is a ploy by the vendors: even likeable Linus has a vested interest. Vendors don’t do anything without a reason, for example, they hope to damage their competition or attract you to other elements of their product line. Make the most of the opportunities as they present themselves, then, but remember TANSTAAFL: there ain’t no such thing as a free license.
(First published 24 September 1999)
09-28 – MCI-WorldCom – Sprint to a finish?
MCI-WorldCom – Sprint to a finish?
The possible takeover of Sprint hit another obstacle this week, as the market showed its disgruntlement. Shares in MCI WorldCom dropped by 4% on Monday morning alone, to $75 1/16 – things have quietened down since then, but a dollar was wiped off the share value by end of play Tuesday. And now the two companies are saying that the merger could be off at any time. Things are not looking good.
It all boils down to wireless. MCI WorldCom has seen its market pulled out from under its feet over the past year, as the mobile market has well and truly established itself. MCI’s main competitor, AT&T, is a strong player in the mobile market whereas MCI doesn’t have any wireless capabilities. Several players without the long pedigrees of the traditional communications providers, such as Sprint, Nextel (which has also been an MCI acquisition target), Voicestream and PowerTel, are carving up the airwaves: at the same time, due to operational efficiency and better technology, they are able to offer services to corporates at prices that MCI cannot touch. The rhetoric of Bernard Ebbers, CEO of MCI WorldCom, that the company “does not need a wireless business right now” does not match what his feet are doing, namely grasping at an opportunity to buy itself back into the multi-channel future of communications.
Whatever happens, it appears, there will be a storm. If the deal succeeds, it will have to ride out the likely disagreements of both the Justice Department and the European Commission, not to mention the fall-out from its sell-off of a chunk of Internet backbone to Cable and Wireless. It is quite possible that the merger, if successful, would involve a sell-off of Sprint’s own Internet backbone unit. At the same time, the deal would blow a crater in the landscape of the whole, global telecommunications industry – alliances would need to break and re-form to take the merged company into account.
Should the deal fail (and it is likely that it will), MCI risks losing considerable corporate custom to its competitors, who are already able to provide a more comprehensive portfolio of services. It is difficult to see where the company would go next: Nextel is an option, though things failed last time around. Other companies lack the scale necessary to provide MCI with an integrated offering, although they could be the foundation stone it needs for the future. The communications giant may end up all dressed up with no place to go.
(First published 28 September 1999)
09-28 – MSN is open for discussion
MSN is open for discussion
Microsoft launched its MSN online communities in the UK only a week ago. It’s still in beta but already has over 3,000 subscribers. Not bad going.
So – what’s it all about? Essentially, the online community concept has changed little since the golden days of the bulletin board ten years ago. As Microsoft themselves point out, however, this is less about technology and more about reach, as the “3,000” figure above illustrates. The main component of the MSN online community is a discussion board which may be configured by its administrator. This person, like the participants in the community, needs to be signed up to the MSN Passport facility, which provides a user profile. The administrator can set up the community’s front page, can define the language, whether it is open to private or public membership, the content standard (e.g. adults only) and so on. Prospective members can be vetted, if required, following which they are free to participate by posting messages and sharing files. A handy feature is the photo album – for example, families can operate a private community where they share holiday snaps or pictures of the little ones.
This would appear a reasonably standard implementation of an online community – even with the language support (the main competition, Yahoo, is currently in English only). As is typical, it is what is planned that gets interesting. The main features on the horizon over the next few months are provision of a calendaring facility (so community members can fix events) and, most interestingly, integration with both online and offline tools such as Outlook Express, Microsoft Messenger and even Web site building facilities. This integration trend is set to continue – for example commerce facilities are in the longer term plans and even voice integration is “on the scope”. Not bad for a free service, but where will MSN make its money? In the short term, this will be primarily through advertising and sponsorship opportunities. MSN have not been particularly profitable in the past, but apparently the company has now been setting hard sales targets to at least make the service pay for itself. These targets are set to rise.
So, have MSN got it right? There are no guarantees. First off, what we are seeing at the moment as something new (which it clearly is not) is likely to appear pretty primitive within a year or so. What Microsoft have worked out is the importance of an underlying framework – as the facilities evolve, they will benefit from having something solid to plug into. For MSN, this hinges around the Passport facility for individuals, but there is nothing similar for organisations unless we include Microsoft DNA. MSN has reinvented itself several times in the past, as an entertainment portal, a consumer portal and now as an infrastructure for online communities. Despite the bad press the company has received for not getting it right first time, Microsoft has little choice but to go with the flow, copy the competition and hope. The competitors – AOL/Netscape and Yahoo, to name two – are already proving that Microsoft’s dominance in the desktop market doesn’t mean diddly squat in this new, vaguely defined battle for the hearts and minds of the online masses.
(First published 28 September 1999)
09-28 – Ohh, James… is that a Jornada in your pocket?
Ohh, James… is that a Jornada in your pocket?
It seems appropriate that scenes for the latest James Bond film, “The World is not Enough,” should be filmed at Motorola’s brand new, state of the art Integrated Circuit fabrication plant at Swindon, UK. It is equally comforting that handheld PCs should take centre stage in the film. We are up to date - things have come a long way since “Sneakers” showed an Excel spreadsheet as the front end to a Cray supercomputer.
Of course, there is one over-riding reason why HP and Microsoft were so keen to get the Jornada 430se into the frame. Product placement is seen as a hugely powerful technique for reaching the consumer market and CE-based handhelds are, after all, a consumer product. Or at least, they are becoming one. 3Com’s Palm (now an entity in its own right) has focused on meeting executives’ needs (rather than their desires) – it is only recently, with the launch of the Palm V, that a sexy design has been taken on board. The target for CE devices, whilst starting life as competition for the Palm, has now moved to the mass market. The new Jornada, for example, is a Game Boy grown up – it does all that boring stuff, like scheduling and email, but one look at the literature shows that the features at the top of the list are the colour screen, the hot processor and the support for MP3.
The race is on between Palm and CE. Recently, with companies like HandSpring, Palm have started to inject some competition into their own market. The bottom line for Palm is – “it does everything you need”. There are tales of people investing in colour devices, for example, then coming back to Palm when they were fed up with shelling out on batteries. The CE community, however, are aiming straight at the jugular of the gadget man. Where better to promote this image than with the king of them all, the Brits’ very own 007? Such battles have been fought before – sometimes it is the gadgetry that wins; sometimes, like with the black and white Game Boy, “what you need” turns out to be the more compelling argument. As for me, a self-proclaimed gadget junkie, I’m hanging on a bit longer. Integrate a camera, mobile phone and voice recognition capability and I’ll be sold. Until then, like (I suspect) the majority of the market that CE is attempting to tackle, the high price of the fully functional devices gives me ample cause to wait.
(First published 28 September 1999)
October 1999
10-11 – eCommerce in the UK RPI – a final call to arms
eCommerce in the UK RPI – a final call to arms
Last week, the UK government announced that it would be including online shopping statistics as one of the sources of information it uses for its retail prices index, one of the principal measures used to determine inflation. This is a move which will be seen as overdue by some and maybe irrelevant by a few luddite diehards, but which cannot be ignored by the vast majority of organisations in the UK, which have so far failed to use the Internet for anything more than brochureware. According to a recent Economist survey, the UK lags 2 years behind the US in exploiting the potential of eCommerce. That’s one heck of a long time in dog years, the rate at which the Internet is understood to be moving. Plus, given the global nature of the Web, the issue becomes even more stark: by the time the UK gets there, the market may well already be sewn up.
This article does not want to spew out the usual FUD. Let’s face it, we’ve heard it all before. “The Internet is here, embrace it or die.” True or not, there’s some good money to be made online – EasyJet, for example, has already sold over one million airline seats, online. We’ve been scared enough, however – the question is now – what can we do to profit from the Web?
The answer to this question has three, mutually dependent parts. At the top level they may be considered as:
- the business strategy covering, for example, how can I align my organisation to best profit from the Web? What products and services should I offer, and what markets should I target? Who are my customers and suppliers?
- the technical strategy – what are the most appropriate technologies to meet my needs? What can I use to implement something quickly and gauge the reaction of the public so I can move on?
- the operational strategy – how can I resource the 24x7 operation that my online service requires? Have I the necessary pieces in place to provide the best possible customer service?
All of the above need to be addressed simultaneously to make a success of the Internet. Too often, still, it is the technical issues that are tackled first (“Give me a Web site!”) without giving due attention to the business and operational issues. Each of the above has a responsible party – the CEO, CIO and COO – who must work together to draw up a coherent eCommerce strategy.
There, that’s it, now it’s over to you. The intention here is not to scare or patronise. No FUD here, just a single question. Does your organisation have a strategy for exploiting the Web? If not, you may be resigning yourself to watching from the sidelines while others steal the rewards.
(First published 11 October 1999)
10-11 – HandSpring will fuel device revolution
HandSpring will fuel device revolution
When the bunch who designed the PalmPilot quit 3Com to form their own, rival organisation, we wrote about the likelihood (and benefit) of injecting competition into the non-CE handheld market. Little did we expect, however, the wow factor that HandSpring would manage to inject into their product range.
Essentially, the HandSpring Visor is a PalmPilot with style. The basic model comes with a USB connection to speed up transfers and the more advanced version comes with more memory and a choice of colours. Not to be knocked, this – look what it did for the Apple iMac. No great shakes so far – a Palm clone with a couple of additional features.
Where things start getting interesting is, as they say on Blue Peter, “when we turn this one over.” On the back of the Visor is an expansion slot which looks not dissimilar from that of a Game Boy. Indeed, games cartridges are one thing that the Visor claims to support (although I could find no references to backwards compatibility). Expansion modules are plug and play – the drivers are built into the cartridge so they are ideal for the non-technical and technical alike. For the IT professional, executive or wired individual, the selection of cartridges mooted is already impressive – GSM and land modems, voice recorders, data acquisition devices and MP3 players are already on the cards. Extensibility is the best option in this fast-moving world – there is some protection against obsolescence given the ability to plug in new technologies (say, Bluetooth) when they come.
What is really amazing about the Visor is the price. With launch prices beginning at $149, 3Com’s Palm subsidiary will be forced to follow suit (never mind the CE device manufacturers), probably leading to a sub-$100 product from one or the other before long. It is hoped that the expansion modules have a similar pricing strategy, i.e. cheap – the second hand market in such cartridges is likely to boom, if for no other reason than that the devices will be easy to post.
Ultimately, it is this extensibility that will lead HandSpring to its holy grail. No other palm-held device has an expansion capability, a fact which is likely to cause expansion device manufacturers to flock to HandSpring and give it sustainability. At the moment, the Visor is launched in the US only – over here we shall be looking out for early adopters among the BA Executive Club members.
A final point – what of CE? The battle is still on, and the ultimate future of either device standard (PalmOS or CE based) cannot be guaranteed. It is true that Philips left the CE fray last week “due to poor sales”, but it is equally true that Compaq, Casio and HP still appear to have their full weight behind their CE devices. The fact remains that the PalmPilot was one of the unexpected success stories of the 90’s and, in the short term anyway, HandSpring look set to replicate that success.
(First published 11 October 1999)
10-11 – MCI gets hands on Sprint
MCI gets hands on Sprint
In the largest corporate takeover in the US ever, MCI WorldCom agreed to shell out $129 billion in a merger which will bring wireless communications into the portfolio of the telecomms giant. Bernard Ebbers, WorldCom CEO, remarked on his surprise at the size of the deal, stating glibly that “I thought I agreed to much lower numbers.” As we noted in our previous analysis of the merger talks, however, Ebbers is renowned for statements which disagree with the reality of the situation. In this case he clearly felt the steep price was worth it. The question is, what now?
The biggest hurdle that the betrothed giants face is the regulators, both in the US and Europe (where MCI are still clearing up the mess caused by a previous deal with Cable and Wireless). The companies have stated their intention to keep the various elements of the businesses together, describing as “prudent”, for example, the retention of more than one Internet backbone. Whether this is sustainable remains to be seen but is unlikely – sooner rather than later, the merged corporate will have to decide which of its backbones is strategic, as it will not be able to route the same traffic over both.
The road is both long and pitted; however, this is clearly a deal which MCI WorldCom decided it could not afford not to make. Fair enough – the wireless world is advancing at a cracking pace, with mobile phones mooted to outnumber land phones in Europe within two years, and with deals being brokered all the time (like Bell Atlantic Corp and Vodafone AirTouch setting up a new US mobile company). WorldCom risked being left further behind with every week that passed. It will be interesting to see how much of their current merger strategy remains in one piece by the end of the year.
Following a fall to $68.50 after the announcement, MCI WorldCom’s share price rallied to $76.88 by the end of last week.
(First published 11 October 1999)
10-12 – GTE launch carrier-class VoIP service
GTE launch carrier-class VoIP service
Telecom 99 in Geneva this week saw the launch of a Voice over IP service from GTE Internetworking, aimed at telcos, ISPs and Internet Telephony companies. The question is, will they bite?
Voice over IP has long (in Internet time) been a possibility but has always been hampered by theoretical (and practical) issues of reliability. Essentially, this is to do with moving from a circuit-switched network (where devices link together to provide a guaranteed bandwidth path from end to end) to a packet-switched network which divides information into chunks, which are routed across the network and reconstituted at the other end. This latter technique, used by the Internet Protocol, offers few guarantees about latency (the time the packet takes to arrive) or reliability (whether the packet gets there or not) in the network itself, with mechanisms being implemented in the devices at each end to, for example, acknowledge receipt of a packet. For telephone users this lack of guarantee equates to the possibility of pauses in conversation and drop-outs where parts of the conversation are lost. Those are the downsides, but on the upside we have the huge potential of cost savings to telephone user and telco alike, as the cost of sending information over the Internet is greatly reduced compared to traditional leased lines.
So, will the telcos bite? The two reasons why they won’t are fears about the reliability issues and the overheads of moving services to a new supplier. In order to guarantee a high level of service, GTE are implementing an IP network dedicated specifically to the service, plus a technique coined “Intelligent Route Diversity”. This refers to a number of services including load balancing, dynamic routing and failover to the (circuit switched) PSTN should the reliability falter. In other words, GTE Internetworking have covered their bases. They are not routing packets over the Internet but dedicating Internet technology to provide bandwidth channels over which they have more control. Brian Walsh, of GTE Internetworking, told us that the company was achieving sub-100 millisecond one way latencies, well below the 200 millisecond target agreed to be “undetectable to the human ear”. This should appease the service providers: if not, they have the additional guarantee of re-routing traffic to the more conventional PSTN.
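To illustrate why latency and loss matter so much for voice, here is a toy simulation. The traffic model and every figure in it are assumptions chosen only to show the effect; they bear no relation to GTE’s actual network.

    import random

    # Toy illustration only: each loop iteration represents one voice frame
    # crossing a packet network with variable latency and occasional loss.
    # Frames that never arrive, or arrive after the play-out deadline, are
    # heard as gaps in the conversation. All figures are assumptions.
    random.seed(1)

    DEADLINE_MS = 200        # roughly the point at which delay becomes noticeable
    LOSS_PROBABILITY = 0.01  # assume 1% of frames are lost outright

    def one_way_latency_ms():
        # assume a 60 ms baseline plus a random queuing component (mean 30 ms)
        return 60 + random.expovariate(1 / 30)

    lost = late = on_time = 0
    for _ in range(10_000):
        if random.random() < LOSS_PROBABILITY:
            lost += 1
        elif one_way_latency_ms() > DEADLINE_MS:
            late += 1
        else:
            on_time += 1

    total = lost + late + on_time
    print(f"usable {on_time / total:.1%}, late {late / total:.1%}, lost {lost / total:.1%}")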
GTE Internetworking are likely to emphasise the cost reductions that telcos can achieve employing IP-based bandwidth for voice calls. Walsh noted that companies could expect savings of up to 25% off charges within the US, with greater savings possible on international services. It may be that, with PSTN competition already cutting prices to the bone, many companies find they have little choice but to quell their fears and head for VoIP.
(First published 12 October 1999)
10-12 – Moore’s Law not a barrier to progress
Moore’s Law not a barrier to progress
According to Paul Packan, an Intel semiconductor scientist, we are approaching the limits of Moore’s Law, which states that the number of transistors on a chip will double every 18 months. This should not, at least in the short term, provide much of a hurdle for progress. How can we say this? Well, for several reasons.
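As a quick illustration of what doubling every 18 months implies, the sketch below compounds a starting transistor count over a few years; the starting figure is an arbitrary assumption, chosen purely to show the arithmetic.

    # Moore's Law as paraphrased above: transistor counts double every 18 months.
    # The starting count is an arbitrary assumption for illustration only.
    start_transistors = 10_000_000
    doubling_period_years = 1.5

    for years in range(0, 13, 3):
        count = start_transistors * 2 ** (years / doubling_period_years)
        print(f"after {years:2d} years: {count:,.0f} transistors")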
First of all, as an obvious point, Packan notes that the limitations will start to bite in 2001 and beyond. However this is not the crux of the debate. Hardware technologies have been far and away in advance of software technologies ever since Bill Gates determined that 640K of RAM was sufficient for the fledgling PC operating system DOS. Software, in general, is inefficient and burdensome, using up chunks of hardware resource for the simplest of manipulations. Should software vendors discover that the hardware resources upon which they rely are not infinite, then they might be tempted to develop more efficient designs. Embedded software developers have worked within their bounds with excellent results, and it could be argued that the top end of chips required for next generation mobile devices will be sufficient for most needs.
This leads to the second point. To make the best use of limited resource requires a well-designed architecture. Thin client is becoming the approach which is recognised as best for both application partitioning and consequent manageability. The capabilities of the end user device may need to be limited not only because the hardware has reached its limits but also because the application architecture requires it.
Finally, we have so far neglected to mention the huge leaps forward in hardware design, not only concerning the physical construction of chips but also the designers’ abilities to create on-chip components which exploit the underlying hardware to the fullest extent possible. We only have to look at how processors such as those from Intel, AMD and ARM have been constructed, with features such as intelligent instruction look-ahead and multithreading, to see that the transistor size is only one (albeit important) facet of chip design.
As noted on the Register, “the sooner they can get quantum and/or optical processors to work, the better”. In the meantime, however, there is plenty to be keeping both hardware and software designers busy, and plenty of progress still to be made, with the technologies we have available today.
(First published 12 October 1999)
10-12 – Vendors poised to tear down the Broadband wall
Vendors poised to tear down the Broadband wall
The broadband revolution is only a hair’s breadth away. Barriers to its inception are about to be removed. A tidal wave of services will ensue, transforming how we live and work. It looks like the electronic future will be in time for the Millennium, after all.
The last pieces of technology are clicking into place, overcoming the two main obstacles which may be summarised as a lack of bandwidth to connected devices and a lack of suitable protocols to enable services to be delivered. These technologies include:
- 3rd generation wireless protocols, enabling high-bandwidth delivery to mobile phones and wireless PDAs.
- Roll-out of broadband technologies such as DSL and Cable, removing the low-bandwidth local loop from the equation.
- Creation of protocols such as WAP to enable wireless devices to access Internet-based services.
- Creation of devices sufficiently small, powerful and function-rich to take advantage of these technologies.
- Creation of service-level protocols such as Jini and eSpeak to enable applications to communicate.
Most of these components exist as functioning prototypes and are being demonstrated at Telecom 99 in Geneva this week. It is only a matter of time before they are delivered into the mass market, enabling such vapourware concepts as videophones, media streaming to the home, integrated messaging and phone-based eCommerce to become reality. Telecoms companies, hardware and software vendors are teaming up with retailers, banks and content providers to achieve the dream.
The real test begins when the products and services begin to be rolled out in anger. It is likely that the sales model will follow that of the mobile phone market, at least in Europe where phones are given away subject to taking a services agreement. This time, however, there is a raft of potential mechanisms given the variety of service providers. For mobile phones, the service for which customers were prepared to pay was the voice call. In the future, we will be able to cover the costs of hardware by, for example, committing to use E*Trade for our stock trades, or to use Blockbuster for our video stream.
In any case, it is highly likely that the availability of broadband bandwidth to the device will lead to an upsurge in the services available. Already companies such as Thomas Dolby’s Beatnik.com are intending to use the Internet as a delivery channel, superseding the need for anything other than cache storage on the wireless walkman. The currently reported glut in European bandwidth may well be quickly drained by the new generations of eCustomers who find the concept of pre-recorded music and video, frankly, old-fashioned.
(First published 12 October 1999)
10-13 – Linux and NT – The myths exposed
Linux and NT – The myths exposed
There has been a lot of press recently concerning Microsoft’s presentation of its Linux Myths web site. By coincidence, and to right the wrong expressed by Microsoft that “Linux Needs Real World Proof Points”, Bloor Research published today the results of its operational comparison between Linux and Windows NT.
The report was based on a study of both operating systems from the standpoint of how they operated in practice to support real applications. It was possible to rank the products according to the following nine criteria: Cost/Value for Money, User Satisfaction, Application Support, OS Interoperability, OS Scalability, OS Availability, OS Support, Operational Features and OS Functionality. The products were also ranked according to appropriate usage, in the following areas of application: File/Print server, mixed workload server, Web server, mail server, database server, groupware server, data mart, application server and enterprise level server.
The results of ranking the products showed Linux to be superior to Windows NT in seven of the nine categories, by a significant amount for some and by a short head in others. The comparison by application type came down less strongly in favour of Linux, with each operating system seen as optimal for certain applications. Bloor Research found neither product suitable for use as an Enterprise Level Server.
Clearly the Windows NT – versus – Linux debate will run on and on, particularly with the launch of Windows 2000, which will add a significant number of features to the operating system (not least the ability to be reconfigured without a reboot, the lack of which has long been the bane of NT systems administrators). Linux remains, above all else, free and capable of supporting a wide range of uses, although it is clearly not suitable for everything.
Microsoft are learning a hard lesson with Linux, namely that there is more to life than product marketing. Five years ago there was the NT versus Unix debate, which the Unix community appeared to be losing until Microsoft attempted to tackle the issue of enterprise scalability and convinced no-one. This gave the end users a new set of perspectives: that it was OK to use the right operating system for the job, and that the concept of a single OS which could run on the smallest device up to the largest server was inappropriate, not to say unachievable. These perspectives hold true today: NT is suitable for a variety of tasks, but remains inadequate for a whole set of others. At the end of the day, any comparison will only be as useful as its relevance to the job the operating system is being asked to do.
(First published 13 October 1999)
10-13 – Travelocity ups gear and merges with Preview Travel
Travelocity ups gear and merges with Preview Travel
The recently announced merger between Travelocity and Preview Travel spawned the world’s third largest eCommerce site by revenue, following Amazon and eBay.
Travel ranks as one of the most appropriate services to sell over the Internet – the deal is, essentially, about agreeing contracts for services which will be delivered at a later date. The Internet is a far better medium for selling travel than brochures and guidebooks could ever be – it provides an integrated service so, for example, a prospective traveller can book a hotel, rent a car and book flights, and receive a consolidated, clear itinerary covering all aspects of the journey. What is more, it is much easier to provide additional information over the Web; for example, sites are gearing up to provide multiple views of hotel rooms, longer descriptions of localities and information about local hotels. Coupled with this, Web operators are able to significantly undercut high street travel agents, for whom the future would appear, well, bleak.
Having said this, we see the largest growth area for Internet travel services to be in the corporate space. Thus far, Travelocity’s services have been targeted at the consumer. We expect to see partnership arrangements between Web travel companies and ERP vendors, particularly companies such as PeopleSoft who are advancing the principle of end-user self service within an organisation. The concept is simple: allow the end-user to arrange their own travel within an automated tool which knows the limits of each user, then ensure the existence of an audit trail so that any usage (and abuse) is logged and reported. In this way the inefficiencies inherent in internal company procedures are significantly reduced.
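A minimal sketch of that concept follows. The grades, limits and field names are invented for illustration and are not taken from PeopleSoft or any other vendor’s product.

    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative sketch only: grades, limits and fields are invented,
    # not taken from any vendor's product.
    POLICY_LIMITS = {"analyst": 500.0, "manager": 1500.0, "director": 5000.0}
    audit_trail = []

    @dataclass
    class BookingRequest:
        employee: str
        grade: str
        description: str
        cost: float

    def submit(request):
        within_limit = request.cost <= POLICY_LIMITS.get(request.grade, 0.0)
        # Every request is logged, approved or not, so that usage (and abuse)
        # can be reported on later.
        audit_trail.append({
            "when": datetime.now().isoformat(timespec="seconds"),
            "employee": request.employee,
            "description": request.description,
            "cost": request.cost,
            "approved": within_limit,
        })
        return within_limit

    print(submit(BookingRequest("a.smith", "analyst", "LHR-CDG return", 180.0)))
    print(submit(BookingRequest("a.smith", "analyst", "first class to JFK", 4300.0)))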
Despite Travelocity’s projected size, the battle for the online travel segment is far from over. The merging of Travelocity and Preview Travel’s user communities will not be trivial and is likely to cause weaknesses in the combined company’s system architecture. Weaknesses can lead to failures, and we know how seriously these are treated by the online press and user communities alike. Also, competitor sites such as Expedia and LastMinute are unlikely to roll over and die. LastMinute, with its reverse-auction business model, is one to watch in particular. Finally we should not forget the Web presence of the “travel fulfilment” companies – BA, AA and the like. One of the prime capabilities of the Internet is to displace intermediaries unless they can demonstrate real added value. This fact will be the ultimate litmus test concerning the successful future (or otherwise) of today’s third largest eCommerce company.
(First published 13 October 1999)
10-13 – Y2K’s winter to freeze SMEs
Y2K’s winter to freeze SMEs
Two recent announcements gave an indication of the likely winners and losers at the turn of the Millennium. First off, the UK Financial Services Authority (FSA) stated that all high- and medium-impact companies in the financial sector were on schedule to complete, or had already completed their Y2K readiness programmes. In another announcement, according to Silicon.com, a study by the Cutter Consortium has found that 50% of US companies were planning a spending freeze on IT equipment between now and the end of the year.
In both cases, the companies which will be most affected are the small-to-medium sized enterprises or SMEs. The FSA report included only 400 out of a total of 8,000 financial services companies in its medium-to-high impact categories. The remaining 7,600 are the smaller companies who, if they have Y2K problems, will affect a smaller number of “stakeholders” – depositors, policy-holders and investors. Unfortunately these are also the companies which lack the financial resources or in-house IT skills to evaluate the risk and deal with any suspected Y2K weaknesses. UK Government brochures, while well-written, only demonstrate the complexities involved in testing every PC, router and PABX for compliance, not to mention any outsourced services. For companies who have not started this process yet, the remaining 100 days are likely to be insufficient.
Similarly, it is the SME vendors who are likely to be most affected by any spending freeze. Again it is a question of resources – the larger companies have both the financial know-how and the hedged funds to carry them over until the predicted thaw at the end of January next year. Smaller companies do not have the luxury of digging in and waiting: most will survive but there will be plenty that do not.
Finally, on a lighter note (as we are accused of preaching too much doom and gloom about Y2K), it would appear that even the least prepared will steal a march on PLN, Indonesia’s national electric board. When asked by an Indonesian newspaper about its Y2K preparedness, a spokesman for the board replied “We can observe what happens (at midnight 2000) in Western Samoa, New Zealand and Australia and still have 6 hours to make plans.”
(First published 13 October 1999)
10-14 – ASPs – The Age of the Information Power Hub
ASPs – The Age of the Information Power Hub
The momentum behind the ASP movement is such that it seems to be only a matter of time before we see all software being provided ‘off the wire’. The Application Service Provision model refers to the predicted rental of applications, which are accessed via the Internet using a standard browser as the front end. At least, this is the initial model: in the future it is expected that customers will be able to integrate ASP services into their own applications. Both models make perfect sense. Consider, for example, the ERP market which has long been the exclusive domain of larger companies which can afford the inflated list price (and associated consultancy) that an ERP implementation can entail. ERP companies have been forging alliances with telcos and ISPs (for example, SAP with BT) to give smaller companies access to applications which are preconfigured for a given market, for example retail, warehousing, engineering and the like. Not only is the cost of entry reduced, but also support levels are touted to increase, compared with the likely levels of in-house resource.
This is a model that makes perfect sense, for vendor and customer alike. The economies of scale that can be gained from these larger installations yield clear benefits to the vendor, which can be translated to a greatly reduced per-user cost. As the ERP market has largely become saturated, vendors have been looking for a way to tap smaller organisations: the ASP model is just the answer. ISPs, too, are looking for ways to differentiate their own services from others’ and also for ways to bring in revenue as the basic connection costs reduce to zero. ASP services enable ISPs to offer far more capabilities to a much broader range of organisations, and also to make money, a factor which is becoming increasingly difficult given the burgeoning competition and the disappointing revenues gained through advertising.
The emergence of the ASP industry is to a great extent being enabled by the pace of technology. Previous technological limitations, which effectively prevented the real time access to applications over the Internet, have been eroded. This is primarily down to greater bandwidth coming online at cheaper prices, but is also down to the capabilities of both hardware and software to support a far greater number of potential users. Solutions now exist to meet the security fears of most, or at least to reduce them to acceptable levels.
No longer will the end user be buying, installing, configuring and supporting software. Rental is the name of the game and it will dramatically change the industries involved. The computer and telecom industries are on the brink of a step change. The models for distribution are mutating, leading us into an age where the computer industry will resemble that of the electricity utilities, providing farms of processors, storage and preconfigured applications all dedicated to serving the world with data and software.
As you might imagine these changes will be far reaching and the customers will be the last to learn, but their position is changing too. In the future, software users will fall into one of three categories. Anyone whose business has an information element (and that goes for most) can become an ASP, from pharmaceutical and genetic research companies to organisations which decode and manipulate video streams. A company will either be an ASP itself, an ASP customer, or a combination of the two where it not only uses the software of others but also builds and deploys its own solutions as rental services.
(First published 14 October 1999)
10-14 – HP eases itself into NAS waters
HP eases itself into NAS waters
Following its announcement in September, Hewlett Packard demonstrated its SureStore HD Server 400 range of NAS devices which are due for release on November 1.
Network Attached Storage (NAS) devices enable disk storage to be connected to a LAN, employing thin server technology to provide the basic minimum of an operating system required to serve files to LAN clients. The market for NAS has been widely predicted to grow rapidly over the next five years. Figures from DataQuest, for example, estimate the market to reach $10 billion by 2003. Even if it only attains half that, this is seen as an opportunity which storage vendors can ill afford to ignore. The current main player is Network Appliance, whose NAS devices sell for about $20,000 apiece. NetApp was ranked at #4 in Fortune’s fastest growing companies listing, testament to the potential of this technology. Put it this way – the company does not make anything else.
HP has signalled that it wants more than a slice of this pie. Its intention is “to dominate the NAS market”. Time to market has, however, limited the range of devices that the company will be selling, at least in the short term. Initial products will support between 27 and 108GB of data and will only be accessible from Windows NT clients and servers. Costs lie between $5,000 and $10,000.
So – does the NAS market merit all this excitement? The answer is, probably, yes, but not for the reasons that are being touted. NAS devices are seen as a solution to the requirement of adding storage to a network, quickly and simply. Yes – they can do this. However the real advantage lies in the fact that storage processing and application processing no longer have to be run on the same box. Storage-specific devices, attached directly to the LAN, are inherently more manageable than multi-purpose servers as they do one thing and they are optimised to do it well. Anyone who has attempted to reconfigure a set of general purpose servers (for example, “we want to move the users onto this box, consolidate the databases onto this box and use this box as our messaging server”) knows the hoops that have to be jumped through to achieve such aims. Specialised devices do not solve this problem, but they do limit the chances of it arising in the first place.
There is one issue that is not being addressed by any of the NAS vendors at present. Just as NAS is seen in terms of storage rather than in the context of an overall IT architecture, so we still lack the tools to enable us to manage the distribution, accessibility and availability of our information assets. It is an unfortunate reality that storage requirements will always outstrip our capability to increase capacity, so the longer term solution must lie with better management of information as well as increasing the basic resource. NAS can only ever be part of the answer.
(First published 14 October 1999)
10-14 – W2K no Y2K
W2K no Y2K
We were unsurprised (and a little relieved) to see Steve Ballmer’s expectation management concerning the end of year shipping date for Windows 2000. In our September article, Microsoft faces Hobson’s choice for W2K, we expressed our concern about interim deliveries slipping with no apparent impact on the final release. The logic (and experience) suggested that either quality or functionality would have to be reduced, neither of which was an acceptable option to the user community. Oh good, honesty prevails. We no longer have a fixed date for the product, but at least we can be comforted that it will be reasonably stable.
Despite this, it seems that the prospective users of W2K are not champing at the bit for the new OS release. Gartner figures suggest that 70 percent of existing servers will not be upgraded until the back end of next year, when the second release of Windows 2000 is slated. There are several obvious reasons for this, not least Year 2000 itself – few existing Windows NT users have a compelling reason to upgrade the operating system in the short term. The upgrade will involve significant reconfiguration of existing servers, and reconfiguration is the last thing that IT managers want to deal with early in the New Year. In our last article, we recommended IT managers not to jump early, but to wait and see what the realities of Windows 2000 turned out to be in terms of its stability and functionality.
The operating system market is likely to evolve significantly over the next six months. Gartner’s write-off of Linux, for example, seems to contradict the fact that every day sees new vendors allying with the Linux community in one way or another. What seems truly strange in all of this is the continuing idea that there really is a one-size-fits-all operating system. Despite the fact that most IT departments have been running heterogeneous environments for the past twenty years, companies still tout themselves as “the one,” be it for hardware, operating systems or application software. The reality is, and will probably always remain, that there are twenty or so vendors whose devices and packages need to work with each other, now and in the future. This reality is the basis of middleware, EAI and indeed most of the eCommerce market. Ballmer himself confirmed this yesterday at the Planet 99 conference when he referred to XML as the glue “to stitch together work from different vendors”.
The world has moved on from “no-one ever got sacked for buying IBM, Microsoft“ or whoever. This is a fact acknowledged by the captains of the IT industry as they promote interoperability and standards. However they are still failing to walk their own talk. Windows 2000 has a place, as do a variety of other operating systems and thin server devices. To sit any one product on a pedestal is to deny the reality and the opportunity of using the best tool for the job.
(First published 14 October 1999)
10-15 – Anyone for eTea?
Anyone for eTea?
Few examples offer a clearer impression than this one of how the Web can change the way the world does business. According to Bloomberg, a site is to be launched early next year which will enable tea producers in Africa and Asia to trade tea with UK-based tea companies.
The tea market in the UK has both consolidated and dwindled since its heyday. 40 years ago, tea auctions dealt with up to 150,000 tons of tea per year whereas last year a mere 24,000 tons were sold. The reasons for this are mainly to do with tea auctions moving to producer countries, such as Kenya. The result has been that only the bigger players (who can set up bases abroad) still remain in the game, with smaller companies having to work through the two remaining UK-based tea brokers.
The Internet looks set to change all of this. An online auction would enable tea producing companies, large and small, to work directly with smaller companies in the UK. Intermediaries would be required only to manage the auctions (hence the site) and for shipping, meaning that the costs of working through such players as brokers would be greatly reduced. It is no surprise that the company intending to set up the site is a brokerage company, Thompson Lloyd and Ewart Ltd. The company intends to run the business in parallel with its brokerage.
Using the Internet as a transport mechanism, online trading would enable costs to be reduced and savings to be passed on to producers and customers alike. In fact, what is happening is a return to the more traditional modes of trade, with small traders dealing directly rather than relying on the few companies that remain since “value” became the catchword and consolidation the practice. The Internet is the great leveller, reducing the costs of both entry and ongoing business, allowing large and small to compete on the same playing field.
What is more, the Internet is global. Tea trading has occurred in the UK because it always has, but this no longer needs to be so. Suppliers to tea-drinking markets around the world will be able to profit from the global accessibility of online trade in a way not possible before. Who knows – they might even start drinking it in Boston.
(First published 15 October 1999)
10-15 – Dream is over for UK ISPs
Dream is over for UK ISPs
Just as the world was looking to the UK as leading the way in the free ISP market, the ISPs appear to have dropped the baton.
Let’s take some examples. Currantbun.com, the News International ISP being offered to readers of the mass-market paper The Sun, has been canned to be replaced by Bun.com. Freeserve shares are now worth less than the IPO price. AOL UK subscribers are threatening to walk out on the service provider. To cap it all, Screaming.net has been taken to task by the consumer programme Watchdog, with paper evidence of customers being charged for so-called free services.
What is going on? Well, in reality, the dream could not sustain itself. The fact is that there is no verifiable revenue model for free ISPs, as Freeserve has discovered to its chagrin. Given this, all that is left is the promise of market share or, at least subscriber share. “Get me the list of subscribers and then they will be my captive audience for whatever comes next.” Trouble is, as we all know, the audience is anything but captive, particularly when the ISP can offer very little that is not being given away elsewhere on the Internet. The site provides news? Well, so does the BBC. And so on. Also, “whatever comes next” hasn’t come, at least not yet. We expect ASPs to be a huge market, initially for businesses but ultimately for consumers as well – however this is waiting for the bandwidth bottleneck to be removed. We also expect online shopping to take off but so far it hasn’t given ISPs a differentiator over anyone else. The fact is that the model is the wrong way around: a free Web connection is a bonus to be provided to customers of a given service, rather than getting a motley band of subscribers together (I include myself in this) and trying to sell them anything under the sun.
ISPs in the UK started the free trend and now have little choice but to deliver on it. This they will only be able to do when they offer a truly differentiating range of services to the UK consumer. In the meantime, we can expect a continuation of the turmoil that we are witnessing right now.
(First published 15 October 1999)
10-15 – What’s in SUN’s sights next year? Microsoft, of course.
What’s in SUN’s sights next year? Microsoft, of course.
With earnings at 33 cents per share, SUN finished its first fiscal quarter of Year 2000 beating analysts’ expectations by 2 cents. With its ISP successes, Java story and now StarOffice, the company would appear to be going full steam ahead for a successful year.
SUN is, at the end of the day, a platform company. It wants to ship hardware, and not just any hardware. Despite the company having more or less launched the open systems movement in the eighties, SUN systems look pretty proprietary. Sure, they run Unix, but it is still the case (despite efforts to the contrary) that SUNs are used to develop and run SUN applications. The miracle of a single operating system flavour never happened – indeed, the closest we might get to that is through Monterey (which is ostensibly IBM’s AIX ported to 64-bit Intel with SCO technology) or Linux, which (let’s face it) is a phenomenon happening despite the best efforts of the major vendors. Remember when SUN refused to allow the Linux sources on SunSite, its download portal? Enough said.
Despite all this, the SUN platform has become “the enterprise platform of the Internet”. This has far more to do with marketing and alliances than technology, though admittedly the high-end of SUN hardware has grown up to encompass the kind of transactional capability that we expect of eCommerce sites. What of Java? The programming language has achieved widespread acceptance (according to our own figures, 28% of programming job ads are for Java, compared to 38% for C++ and the gap is closing), however one of Java’s essential strengths is its portability so it cannot be seen as one of the technical drivers towards using SUN. All in all, a pretty slick campaign, in this phase of technology where managing the mainstream is at least as important as having a good product.
So that’s SUN’s first goal met. Given the way the Web is going, SUN would have to make some pretty big mistakes to lose its market share. Microsoft won the desktop and, with Windows 2000, will make more inroads into the departmental market, but the enterprise will remain a Windows exclusion zone for the foreseeable future. On to SUN’s second goal, which is dubbed self-protection by the senior execs of the company. Truth is, though, that the company smells blood. The next big battle to be fought is for Application Service Provision – providers, or ASPs, are ostensibly ISPs who cannot make any money through a basic connection and hence who are looking round for differentiating sets of application services that they can offer. At the client end, the operating system is unimportant as it is likely that the majority of applications will have a Java-based front end. Server-side, however, a decision will need to be made over which compatible set of applications can be provided. As always, this boils down to “should we run a Microsoft configuration, or not?” ASPs will want to offer the set of applications which are most appealing to their customers. At the moment, despite most of their functionality being ignored, Microsoft Word, Excel and PowerPoint are the de facto desktop applications. This may be for no other reason than file compatibility, but ASP customers are unlikely to move to a set of services which cannot read their existing files.
Enter StarOffice, which reads MS Office files like a native. It is unlikely that SUN see StarOffice as a strategic application. More realistically it is a non-application, intended to remove differentiators from the Microsoft-based offering. Microsoft apps need Microsoft OS need Intel hardware, it’s that simple. If SUN do not challenge the perception that MS Office is essential, then the corporation may end up losing a sizeable part of its ISP business. It is interesting to note Scott McNealy’s recent comments concerning his doubts about a Microsoft breakup – given the size of the prize, it is likely he would prefer one big target to lots of little ones.
(First published 15 October 1999)
10-25 – ASPs – how IT can learn from the Comms guys
ASPs – how IT can learn from the Comms guys
If the rumours are true, Application Service Providers, or ASPs, are going to dominate the way in which IT services are procured and delivered. At Bloor Research, we have reason to believe the hype. As for the vendors, they have a stark choice to make. Some are embracing the ASP model whole-heartedly, whilst others are hedging their bets.
Networking and comms vendors are in an enviable position concerning ASPs, for two reasons. The first is that they stand to gain whatever happens. The provision of application functionality over the Web requires a networking and communications infrastructure capable of sustaining a multitude of reliable data pipes between an organisation’s internal networks and the outside world. As it becomes business critical, no longer will the Internet connection be under such tight constraint, either by budget or by policy. Besides, ASPs are only one area which requires an enhanced networking infrastructure: there are plenty of other usage models, not least business to business eCommerce, which will ensure that the networking revenue stream is protected. The second reason is that ASP capabilities are defined primarily by which technologies networking vendors are able to deliver. Application and systems providers are dependent on networking providers, a fact which can be exploited by the latter.
So, what about the traditional IT companies – the manufacturers of systems hardware, systems and application software? Some, such as IBM, SUN, Oracle and Microsoft, are adopting the ASP model wholesale. Of course, the industry giants can afford to develop themselves in the ASP market with much less risk than smaller players. This week, Microsoft announced a partnership deal with Cisco which will enable small-to-medium sized businesses (typically sub-100 staff) to use Microsoft software over the wire. It is likely that more and more vendors will jump onto the ASP bandwagon, however in doing so they are likely to miss one fundamental point.
The ASP model is currently seen as a facility to deliver applications across a network, thus reducing costs. This view is valid, in the short term, but it ignores the full potential of the ASP concept. Once applications can be delivered, then they can be combined and assembled in previously inconceivable ways. Communications services, application services and business services will all be delivered in the same way, enabling even more combinations. This kind of shake-up is already being experienced in the telecommunications marketplace, as convergence finally starts to happen. Issues faced by these companies include how a properly managed service should be developed, provided and, most of all, billed for. These same issues are to be the bane of the ASP: vendors prospecting for a chunk of ASP real estate will do well to watch the successes and failures in the comms market, and learn from them.
(First published 25 October 1999)
10-25 – Coppermine braces itself, without Rambus
Coppermine braces itself, without Rambus
Yesterday’s announcement by Intel concerning the Coppermine version of its Pentium III chip has one major objective: to knock AMD off its perch. AMD’s flagship product, the Athlon processor, currently claims the high ground as the fastest PC processor. The importance of this position is as much to do with marketing as anything, however so is the whole of the IT business.
The popular press, at least in the UK, has warmly received the Athlon, as have the vendors IBM, Dell and Gateway. These factors led to AMD’s better-than-expected financial results reported a fortnight ago. Intel has already swept away a number of chip companies, such as National Semiconductor, who have found it difficult to fund the levels of research and development necessary to keep up. Not so AMD, who have proved themselves capable of beating Intel at its own game.
Both Intel and AMD’s processor marketing is geared around “being the best”, and neither is likely to let the other get away with being top dog for long. Last week AMD announced the opening of a new fabrication plant in Dresden, which will be capable of manufacturing the next-generation Athlon chips – pre-production versions have been running at 900MHz. The two companies are likely to leapfrog each other to the 1GHz holy grail, and the first to achieve this at a production scale is likely to gain huge kudos as a result.
Intel’s announcement should be seen in context – it is playing the game for which it invented the rules. The only downside for technologists is the absence of Rambus support from the launch. As we reported in August \link{http://www.it-director.com/99-08-06-2.html,(see article)}, Rambus has some advantages particularly for the Server. While this is a blow for some PC makers who were planning to ship the new processor and chipset together (and who will now have to wait), it remains to be seen whether this will affect the market as a whole.
(First published 25 October 1999)
10-25 – Sun sets up autonomous Forté, completes its Java portfolio
Sun sets up autonomous Forté, completes its Java portfolio
In a similar fashion to IBM with Tivoli and Lotus, SUN is to give Forté a semi-autonomous status. The Forté organisational structure will report to the head of software at SUN, however there will be no moves to change the internal organisation of the division, beyond establishing the communications channels to enable Forté to work with other parts of SUN’s organisation.
This is clearly good news. Forté is in the right place at the right time with the right products, reasons why the company was so attractive to SUN in the first place. Forté recognised that its traditional market, of bespoke enterprise application development tools, was insufficient to guarantee the company’s future success. With Fusion, it moved into the Enterprise Application Integration space; more recently, with SynerJ, it released tools to support the Java 2 Enterprise Edition, or J2EE. Forté’s original vision, of producing scalable, reliable platforms for enterprise development, has been retained and enhanced to meet today’s demands. Now, it seems, the organisation will be able to retain this strategy as it moves forward.
Sun Microsystems is a canny company. Still recognised primarily for its hardware, it has quietly been building up a software portfolio based around the unexpected success of Java as an enterprise language. This fortune is as much down to IBM as anyone; nonetheless, Sun has been putting in place the elements it needs to benefit directly from Java. From its acquisition of NetDynamics, through its developing relationship with AOL/Netscape, to the more recent purchases of Forté and NetBeans, Sun has assembled the portfolio of products it needs to start generating real revenue from the Java language. Services will also play a big part, but these would not be possible without the comprehensive set of products that Sun now has at its disposal.
(First published 25 October 1999)
10-26 – Loi 495 – allez le source ouvert!
Loi 495 – allez le source ouvert!
Open source received a boost this week, as two French senators proposed a law concerning the use of such software by national and local administrative systems. The driver is one of freedom of access to the software, but the issue is more likely to be one of cost. As reported on \link{http://www.theregister.co.uk,The Register} yesterday, a \link{http://www.senat.fr/grp/rdse/page/forum/index.htm,discussion forum} has been set up on the issue. And the feedback is good.
Comments such as “Un pas vers la démocratie électronique” (a step towards electronic democracy) give an indication of some of the feedback, however this is coupled with fears about the impact on business. It is worth reflecting on what such a move might entail, should it be adopted.
First of all, the imposition of such a law implies that open source is here to stay. There are plenty who believe that it is, but others say that it is an interim stage, made possible by government subsidy of academia and the political manoeuvres of companies. (Personally, I believe that old software technology can be given away, and new technology developments need to be paid for, but that’s opinion, not analysis). However, given the existence of open source, the question moves to one of delivering value.
Value is benefit minus cost. Benefit from software may be derived from its functionality and ability to interoperate with other packages. Costs are financial, but also relate to intangible costs of procurement, implementation and maintenance. Costs also relate to risk, driven either by security or safety criteria. So – what are the impacts of the open source model? Clearly there is a bottom line cost reduction. The intangible costs require a little more attention: an open source package can lead to reduced cost only if it offers similar functionality to existing packages, whilst reducing the costs of implementation and maintenance. This is the area which is less clear, and which must be judged on a package by package basis.
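To make the arithmetic concrete, here is a minimal sketch, in Java, of that package-by-package comparison. Every figure and name in it is invented for illustration; the point is simply that removing the licence fee only improves value if the intangible costs do not rise by more than the saving.

public class PackageValue {
    // All figures are invented; the point is only the shape of the calculation.
    static double value(double benefit, double licence, double implementation,
                        double maintenance, double risk) {
        return benefit - (licence + implementation + maintenance + risk);
    }
    public static void main(String[] args) {
        double proprietary = value(100000, 20000, 15000, 10000, 5000);
        double openSource  = value(100000, 0, 25000, 15000, 5000);
        System.out.println("Proprietary package value: " + proprietary);
        System.out.println("Open source package value: " + openSource);
    }
}

On these invented numbers the open source option comes out slightly ahead, which is exactly the kind of marginal result that makes the package-by-package judgement necessary.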
The issue might stop there if it wasn’t for the appearance, over the horizon, of the ASP. Application Service Providers are to do for software what outsourcing did for IT departments, namely take the responsibility away and enable companies to concentrate on their core business. It may seem a logical step for public organisations to adopt open source where they can, but an even more logical move is to pay a provider to supply software services over the wire. In this way, an organisation can concentrate on its functionality requirements without being distracted by issues of package cost, interoperability or maintenance overhead. It is then for the ASP to decide which packages to buy, and whether or not to use open source products. The ASP market is still in the crib but, for any organisation wanting to keep control of costs whilst gaining access to the best services, it may well prove to be the answer.
(First published 26 October 1999)
10-26 – Orange dreaming of a WAP Christmas
Orange dreaming of a WAP Christmas
UK mobile telephone operator Orange is to launch a Wireless Application Protocol (WAP) service in November, ready for the Christmas rush. The up-front services will be basic, but given the general excitement about WAP this is unlikely to be the case for long.
The WAP forum was launched by Ericsson, Nokia, Motorola and Phone.com, with an aim to make information-based services more accessible to the mobile user. The forum now has nearly 200 members and is on the brink of real products, a number of which were demonstrated at Telecom 99 two weeks ago. Orange’s new service will be based around a WAP-enabled phone from Nokia, with initial information services including sport, news, weather and a business directory.
What is really, really interesting about the Orange service is the charging mechanism for services, or rather the planned charging mechanism. Initially, WAP data calls will be charged at less than the price of a voice call; however, the plan is to cease charging for the connection and to start charging for the services themselves. According to Rory Maguire, Strategic Relations Manager for Orange (quoted on Excite UK), basic content would remain free but “premium content” would be charged for.
This is fascinating stuff as not only does it turn the existing mobile phone charging model on its head, but it also flies in the face of other, wire-based Internet services, who already give away a great deal of what we might term “premium content.” Consider share prices, for example. It used to be the norm to charge Web users a subscription before they could access share prices. The model then changed, as this information was given away by portal sites (such as Yahoo) as a way of luring custom towards other, advertised services. Money could still be made, however, from the share deal. Now, however, services are setting up which do not even charge a fee for the share deal, as it is reckoned that hedging the funds and bulk dealing is worth far more than the fifteen quid transaction fee. In other words, the tide is rising: whenever there is a higher level way of making money, the lower levels are given away. There is a caveat: as you move up through the levels, you need more information (and expertise) – in the share dealing example, the free trading is only open to “experienced traders.”
So – what we have is two rules:
Rule 1 – a service will be given for free as long as the supplier of the service stands to gain
Rule 2 – if the customer needs help, then such help is also a service, which must be paid for unless covered by Rule 1.
These two rules should come as no surprise as they are the twin pillars of retail pricing, understood and applied instinctively by customers and suppliers alike. If buying a car from a dealer, for example, we do not expect to pay separately for the advice, guarantees and so on. If we buy from a trade outlet, we will be able to get the car more cheaply, but at the expense of having less advice and expertise, and fewer guarantees, to fall back on.
Orange may intend to charge for premium WAP services. Indeed, they are ideally placed to do so as they control both the data feeds and the billing mechanisms. However when doing so they should take the rules into account.
(First published 26 October 1999)
10-26 – VDSL beats back Broadband bounds
VDSL beats back Broadband bounds
The fat pipe is getting fatter. Very-high-bit-rate Digital Subscriber Line (VDSL) technology from Alcatel and Texas Instruments promises bit rates of up to 60Mb per second. And this is no pie-in-the-sky, sometime-in-the-next-five-years technology. Sample VDSL chips are already being shipped to equipment makers, with products expected some time next year. Compare this to VDSL’s little brother, Asymmetric DSL (ADSL), which can only (only!) deliver 1.5Mb/s.
Applications for VDSL already being discussed include running exclusive digital TV channels – for example a corporation may obtain a training channel with a predefined programme, direct from the digital broadcast site. In fact, what we are seeing is the stretching of the boundaries of the managed service – a video feed is one application, which will be coupled with voice and data feeds to give a truly comprehensive managed service. It is surprising that no-one has mentioned virtual reality which, even if it remains a niche hobby, will benefit hugely from the bandwidth.
Oh, but hang on. This really is pie in the sky, at least as far as the UK consumer is concerned. It is widely understood that in some parts of the US (Boston, for example) cables are being laid as if they were going out of fashion and ADSL is getting its own display area in Radio Shack. Meanwhile, back in Blighty, we have a pseudo-monopoly in the guise of BT which has just finished an ADSL trial and is expected to offer a reduced service, well, maybe one day, at least in the London area. In the UK, we can dream about the advanced capabilities that new technologies such as VDSL will enable. However the reality is likely to remain virtual for a good time yet.
(First published 26 October 1999)
10-27 – Future of UK Enigma site continues to confound
Future of UK Enigma site continues to confound
All is not well at Bletchley Park, the World War II code-breaking centre, former stamping ground of Alan Turing and location of the world’s first programmable computer, the Colossus. Controversy centres on what should be done with the site. Once again, it seems that Britain is failing to deal with its IT heritage, let alone its wartime memory.
What’s going on is, unsurprisingly, an everyday story of folk. The battle lines were drawn two weeks ago between the old guard of Park trustees (seven out of the twelve) and Christine Large, who was brought in as chief executive to oversee the process of bringing the currently dilapidated park and outbuildings up to scratch. Mrs Large’s plans to transform the park into a modern museum, conference and education centre were seen as too ambitious by the trustees, who were wary of the site becoming “a high-tech theme park”. Following a series of wrangles and a vote of no confidence by the trustees, Mrs Large was sacked from the board, only to be reinstated last Thursday.
If this wasn’t complicated enough, Mrs Large has rejected the reinstatement offer unless a new Board of Trustees is created. This is not an outlandish suggestion, as the Board itself recognised the need to disband and reform in May this year. Six months later, the impression is that the Board are reluctant to follow this through.
To an extent, the reasons for this and the current situation are less relevant than the consequence, which is that nothing is getting done. Time and again we have seen personalities get in the way of progress, in this case towards recognising our heritage. As with the failure to procure the funds to build a statue of Alan Turing himself, we are stymieing our own potential to get things done and, in this case, to broadcast to the world our own successes. Britons are experts at hiding lights under bushels – humility is a good thing, but so is recognition.
People need to have visionary focus and to act accordingly. From this perspective it would appear that Christine Large is right – there is no point in returning to the post of Chief Executive if the current composition of the Board is likely to put the brakes on every step of the way. For the sake of our IT and wartime heritage, we wish Bletchley Park the good will and commitment it needs to ensure it is developed with both sensitivity and vision.
(First published 27 October 1999)
10-27 – Xerox’s 18-month European Plan
Xerox’s 18-month European Plan
Following the company’s seven billion dollar slump in value caused by its less-than-successful Q3 announcements earlier in October, document company Xerox explained itself this week.
Despite the company’s huge customer base (with over a million customers in Europe alone), its main problems (so says Xerox) were due to its out-of-date and badly targeted sales strategy, which was focused on the direct sale (even when dealing with the largest corporate deals) to the detriment of other channels. Also, the company was organised on national lines, with “each country operating its own fiefdom”, according to William Goode, Deputy Managing Director of Xerox Limited.
Xerox has every intention of changing this; in fact, it has spent the past eighteen months consolidating its European operations, for example opening call and manufacturing centres in Ireland. The company is also setting up two new organisations: the Industry Solutions Organisation (ISO) will concentrate on major accounts, and will be organised by sector across Europe, whilst the General Marketing Organisation (GMO) will take over the direct channel. Initial sectors for ISO will be Manufacturing, Financial, Graphic Arts and Public Sector; the organisation will also encompass Xerox’s professional services organisation, named ICSI. The showpiece of these efforts is a pan-European SAP implementation which will support all of its back office operations. When it is complete, in eighteen months’ time, it is destined to be the largest single-instance ERP implementation in the world.
For the committed Xerox customer, these developments are likely to be good news as they will result in better account management and more co-ordinated services. For Xerox itself, the company is moving into the IT space from being an office equipment company, and stands to benefit from the opportunities that this offers – not least the marketability of the showpiece SAP implementation. Despite all this, the timescales are still very long and some pieces of the puzzle lack definition, for example the shape of the sales and marketing operation remains unclear. Also, the ongoing changes are likely to impact on the company’s ability to take advantage of any opportunities, at least in the short term. From the customer perspective (and as an ex-customer of Xerox), organisational change is nothing new for Xerox, so customers are likely to keep their counsel until they see real benefits coming through.
All in all, Xerox is still making money and the company is orienting itself to be better positioned for both its markets (which are, more and more, IT related) and its customer accounts. No change was not an option. It is just a shame that it is all taking so long.
(First published 27 October 1999)
10-28 – Micromuse setting their sights high
Micromuse setting their sights high
Enterprise Management software company Micromuse saw its shares soar by 30 percent as the company announced a doubling in its fourth quarter revenue at the end of last week.
From lowly beginnings in London, UK, Micromuse launched on the Nasdaq in February 1998. In its early days the company had a strong relationship with BT, and it has stuck with telecommunications as one of its core markets, a move which has stood it in good stead for the proliferation of ISPs and the eCommerce revolution.
Micromuse’s core product is NetCool, an event management system which is designed to enable events to be received from any device. This simple principle has allowed the NetCool product to be extended across a whole raft of protocols and device types, including SNMP devices, databases, WAN/Voice devices and TCP/IP protocol connections. Application interfaces (e.g. for SAP) are currently being considered.
Based on this core product, Micromuse have driven their product range in two directions and it is this which is getting the analysts excited. The first is NetCool/ISM, which simulates web site requests in a variety of protocols and feeds the results into the NetCool framework. This enables the monitoring of Web site performance, from the basic request-response level up to more complex series of transactions. Monitors can be situated anywhere in the world (to gauge international differences in performance) with the results fed back into a central NetCool installation. The second product is Impact, which uses historical records, configuration information and external databases to build an information set to aid the resolution of a given fault. Impact won a “best in class” award at NetWorld InterOp this year.
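To illustrate the monitoring model (and only to illustrate it: the class below is a hypothetical sketch in plain Java, not Micromuse’s NetCool/ISM code), a synthetic monitor of this kind boils down to timing a request from a known location and handing the result to a central event collector.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class SyntheticProbe {
    // A simple event record: where the measurement was taken, what was measured,
    // and how long the request took.
    static class Event {
        final String monitor, target;
        final int status;
        final long millis;
        Event(String monitor, String target, int status, long millis) {
            this.monitor = monitor;
            this.target = target;
            this.status = status;
            this.millis = millis;
        }
        public String toString() {
            return monitor + " -> " + target + ": HTTP " + status + " in " + millis + "ms";
        }
    }

    static Event probe(String monitor, String target) throws IOException {
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        int status = conn.getResponseCode(); // issues the request and waits for the reply
        long elapsed = System.currentTimeMillis() - start;
        conn.disconnect();
        return new Event(monitor, target, status, elapsed);
    }

    public static void main(String[] args) throws IOException {
        // In the model described above, many such monitors around the world would
        // feed their events back to one central management installation.
        System.out.println(probe("london-monitor", "http://www.example.com/"));
    }
}

In the NetCool model described above, many such monitors dotted around the world would feed their events into one central installation, which is where the correlation and fault-resolution value is added.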
The market for Web-oriented management tools is burgeoning, with a variety of old and new players jumping on the bandwagon. Micromuse have a head start, in that they have worked in the comms space since their inception. They understand the issues and they already have an enviable set of customer and partner relationships including AOL, AT&T and BT. The NetCool/ISM product is based on the company’s existing platform, which is a differentiator from Tivoli, for example, and which isn’t a possibility for newer vendors such as FreshWater. Investors are already sitting up and taking note, all of which paints a rosy future for Micromuse.
(First published 28 October 1999)
10-28 – Thin server appliances drive OS shakeup
Thin server appliances drive OS shakeup
Bringing ease of installation and administration, thin servers look set to steal a sizeable proportion of the server market. Market analysts are predicting that between 3 and 8 billion dollars will be spent on such devices by 2003. For speed, usability and cost reasons, thin server manufacturers are keen to keep operating systems as scaled down as possible, prompting an inevitable shakeup in the OS market.
What is a thin server? Also referred to as a server appliance, a thin server is a server which does a limited number of jobs and does them (at least, this is the intention) extremely well. Examples are:
• Network Attached Storage (NAS) devices for file management, from manufacturers including Network Appliance and Hewlett Packard;
• Email servers, from Compaq and Mirapoint;
• Web proxy servers, from Novell (using Compaq hardware);
• Database servers, from Oracle.
The thin server model is attractive to the IT Manager, for two reasons. The first is that it enables additional resource or specific functionality to be added to a network simply and effectively. The second is that it moves away from the multi-purpose server model, where servers often have conflicting demands and incompatibilities posed by the different packages they run. The target market for thin servers is seen as the small business with fewer than a hundred people, but this need not be the case: Yahoo email, for example, boasts the logo “powered by Network Appliance.”
As for the operating system, there is no consensus on an ideal platform for a thin server. Neither should there be – the thin server operator is more interested in the functionality than the platform, which is largely hidden. Network Appliance devices use a proprietary kernel, flavours of Unix are visible in both Mirapoint and HP’s offerings, Compaq’s email servers run Microsoft Exchange on NT and Novell’s proxy, unsurprisingly, runs NetWare. Mentioning no names, this is nonetheless a blow to any software vendor whose marketing depends on promoting the relative advantages of its own operating system. In fact, the main issue for the OS vendors appears to be who can secure the best deals with the hardware companies. Not coincidentally, this trend bears a striking similarity to the Symbian/Palm/WinCE wheeling and dealing in the handheld and mobile device arena.
We have seen very little from Microsoft and Intel since their thin server alliance announcement in April. Intel were hugely successful with their “Intel Inside” campaign, but this was as much down to their incumbent position in the PC market as anything. Just as with set top boxes, games consoles, mobile phones and PDAs, in the device-driven, thin server market it is unlikely that “WinTel Inside” will have the same cachet.
(First published 28 October 1999)
10-29 – Amazon grace under pressure
Amazon grace under pressure
Despite a huge increase in revenues from $154m to $356m, Amazon.com reported a four-fold increase in operating losses, from $21m to $79m. Amazon has seen increases in just about everything else from new accounts to repeat business, but is also seeing increasing pressure on profit margins, both from rising internal expenses and the inevitable external competition. Amazon continues to invest in new product lines and in new parts of the world. The question remains – how much longer can it keep up this momentum?
It is commonly held wisdom that it is healthy for dot-com companies to post a loss. The argument is that the losses represent investment in building the base of customers who use Amazon by default, hence the importance of both new accounts and repeat business. However, the Web community is a notoriously fickle bunch. Just as Lotus never expected to lose its spreadsheet monopoly to Microsoft, so it seems equally unlikely that Amazon could lose its customer base. There are three ways in which this could happen.
The first way is system failure. Lack of service is very quickly jumped on by the Internet news sites, as eBay, eTrade and Charles Schwab have all discovered in recent times. eBay’s failures caused a reported defection of customers to Amazon; similarly, eTrade has lost custom to other online brokers, both through downtime and through processing errors (for example, those which caused potential Red Hat beneficiaries to lose out). The Internet does not take any prisoners: news of failure and retribution spreads like wildfire. Amazon’s systems are holding up well – they were designed for scalability and Internet access. However the same could be said for eTrade and eBay, of which the latter is famously hanging on by its fingernails to support its burgeoning community. Failure of a dot-com company results not only in customer flight but also shareholder concern, and this is the last thing any tech stock wants in the current, volatile market conditions.
The second route is competition. The Internet purchasing model is still very much built around lock-in, such that it is easier to stick with one supplier than move to another. Amazon’s one-click purchasing is an example of this. Moves are afoot, however, to make it simpler to deal with a variety of supplier sites: Snaz.com, for example, offers a multisite shopping basket for exactly that purpose. To water down lock-in is to do the same to Amazon’s key performance indicators of customer retention. Amazon may choose not to play, but this could have the effect of locking the company out, of siting Amazon outside the mall, as it were.
Thirdly, we have the unknown quantity. Amazon exists by virtue of riding the Internet wave long before other companies knew it was coming. Much as we like to speculate, none of us really knows what the next wave will be. Plenty of so-called giants have been washed away by the tidal waves of technology, and Amazon is no more immune than any other company.
Some are saying that the really big hitters are still to join the fray. Companies like WalMart are on the point of launching their Web strategy – this may happen with a whimper, but it could well be with a bang. Industry and financial analysts alike have so far been unable to predict how things are to turn out. This is certainly no time for dot-com companies, Amazon included, to get complacent.
(First published 29 October 1999)
10-29 – Mobile gives Voice Recognition its killer app
Mobile gives Voice Recognition its killer app
Slowly but surely, voice recognition technology is gaining maturity. It may still be in the domain of the comedy club, but will soon form an integral part of the services and devices that make up the IT infrastructure.
There are currently three sectors for voice recognition technology. The first, and best known, is PC-based voice recognition, aimed mainly at the consumer market. Whilst this is achieving some success in the Far East (largely due to the difficulties in providing keyboards for eastern character sets), it is still a niche market. As discussed in \link{,our previous article}, dictation and navigation are not powerful enough reasons for the large-scale adoption of products such as Dragon Dictate and IBM ViaVoice. The second sector is the embedded application sector, in which specific products such as medical and manufacturing equipment are enhanced to include voice input. Thirdly, we have telephony-based products. It is this market which is evolving the most rapidly, and from which the popularisation of voice recognition will occur.
Many communications companies have had their own voice labs for several years, aimed mainly at “enhancing the telephony experience”. As comms and IT have converged, these technologies have been applied to computer applications. For example, last year Lucent announced a pact with Unisys, in which Lucent’s technology would be used to develop an integrated speech recognition software package. Start-up companies such as SpeechWorks and Nuance have been set up with the objective of providing voice-enabled application development tools. So far, the results have been fair to middling: in general the best results come from the systems with limited scope (such as eTrade’s and BT’s voice-driven stock systems). Herein lies the danger: prospective customers remain skeptical, while there is a lack of installed systems with sufficient wow factor to sway them.
In line with the rest of the application development industry, component-based development is the adopted direction of the major players. SpeechWorks have released a comprehensive set of Java and ActiveX applets to recognise common structures such as names, addresses, dates and so on, as well as applets for standard applications such as SAP. Similarly, a few days ago Nuance released their first set of voice components, as Java Beans.
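The attraction of the component approach is easiest to see in code. The sketch below is purely hypothetical (the interfaces are invented and bear no relation to SpeechWorks’ or Nuance’s actual components), but it shows the idea: small recognisers for common structures, strung together into a dialogue by the application developer.

public class VoiceDialogueSketch {
    // Each component knows how to prompt for, and recognise, one common structure.
    interface VoiceComponent {
        String prompt();                      // what the caller hears
        String recognise(String utterance);   // normalised value, or null on failure
    }

    static class AccountNumberComponent implements VoiceComponent {
        public String prompt() { return "Please say your account number."; }
        public String recognise(String utterance) {
            return (utterance != null && utterance.matches("[0-9 ]+"))
                    ? utterance.replace(" ", "") : null;
        }
    }

    static class DateComponent implements VoiceComponent {
        public String prompt() { return "Please say the date of travel."; }
        public String recognise(String utterance) {
            // A real engine would return a confidence score; here we just echo the words.
            return utterance == null ? null : utterance.trim();
        }
    }

    public static void main(String[] args) {
        // The dialogue is just an ordered list of components; the usability question
        // becomes one of how the prompts are ordered and worded.
        VoiceComponent[] dialogue = { new AccountNumberComponent(), new DateComponent() };
        String[] cannedUtterances = { "1234 5678", "the third of December" };
        for (int i = 0; i < dialogue.length; i++) {
            System.out.println(dialogue[i].prompt());
            System.out.println("  recognised: " + dialogue[i].recognise(cannedUtterances[i]));
        }
    }
}

Each component can be tested and reused on its own; as the next paragraph argues, the hard part then becomes the ordering and wording of the dialogue rather than the recognition itself.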
Voice recognition is no longer about the quality of the recognition engine, as this technology is sufficiently advanced and improving all the time. Rather it is about the ability to automate dialogues which may occur between end user and machine. Just as with the Web, the voice caller can very easily get turned off by requests for irrelevant or badly ordered information so usability becomes a primary issue, a fact recognised by SpeechWorks’ own development methodology. We are likely to see a number of voice-controlled applications in the future, based on the component model, and the chances are they will become acceptable as their quality improves and their numbers increase. The death of the touch-tone interface is an imminent and welcome consequence of this (though it should be remembered that the bad press of DTMF is as much down to the interface design as the technology). Call centres are the primary market for this: call centre managers will find voice technology indispensable in achieving their “improved service, reduced costs” mantra.
Even so, this use of voice technology is still concentrated more on the enterprise than the individual. The real, mass market, killer application for voice comes with the merger of the mobile phone, a device which we are used to talking to, and the PDA, a device we are used to looking at. The recent announcement by IBM and Nokia serves to illustrate the potential of the technology, which will one day come as standard in handheld devices, particularly as it includes the definition of an XML-based standard for voice. Agreed, a truly portable voice solution which relies on the power of the device alone is a long way off, however advances in wireless technology mean that the recognition can occur on the server side rather than on the device itself. In the keyboard-unfriendly mobile world, the use of voice for navigation and search becomes the better option. Voice entry will sit alongside text and pointer entry, with each being used in the most appropriate manner.
For voice recognition vendors, the future is bright for those that ignore the skeptics and embrace the broadest possible vision of the future. Recognition engines and application development tools are both OEM markets, suitable for the systems integrators. There does exist a third market, which bears a striking similarity to the Web site development market currently occupied by products such as NetObjects Fusion and Microsoft FrontPage. As voice-enabled mobile devices gain presence, site developers will require tools to enable the receipt of speech commands. There are very few players in this space at the moment – SpeechWorks is one of them. However there is still plenty of time for the other voice recognition companies to catch up.
(First published 29 October 1999)
10-29 – UK Govt PC plans miss the mark
UK Govt PC plans miss the mark
It is interesting to compare two recent announcements, that of the UK government offering PCs at low cost to low income groups, and that of the new wave of Web-ready appliances to hit the markets next year. Essentially, this is about bridging the current and future business models for consumer IT. The old model states that IT facilities in the home are centred around the PC, wherever it may be situated. The new model, of ubiquitous IT, says that a spectrum of computing devices will exist throughout the home.
UK Chancellor of the Exchequer Gordon Brown expressed his desire to see up to 100,000 refurbished PCs rented out to low income families for a fiver a month. This move has caused controversy over who would pay the phone bills for Internet access. Also, it could be argued that a 486-based PC is still adequate for the Web, and these are virtually being given away today in second hand magazines. Still, the principle of making low-cost computer facilities available is sound.
Meanwhile, it was reported in the Wall Street Journal that Dell, Compaq and Gateway would be launching Internet access devices next year, none of which would run Microsoft software. These devices are to be priced below the cost of the PC and it is likely, if their popularity grows, that prices will tumble. Here we are seeing the kind of devices that could be given to subscribers of internet services, possibly even undercutting Gordon Brown’s offering of £5 per month. Web devices will not be able to run the same variety of software as the PC, however for most people sufficient facilities will be available directly from Web portals such as Yahoo and MSN.
There is life in the PC yet, but in two years’ time it will not be the only device that people use to access the Web. Fears about an information underclass caused by a lack of computer facilities may be premature.
(First published 29 October 1999)
November 1999
11-22 – DNA points to chasm in the law
DNA points to chasm in the law
Oh dear, oh dear. Few areas show more clearly how woefully inadequate the law is to meet the needs of this information age than the subject of DNA data. As recently discussed on Wired, for example, only 3 states in the US have laws prohibiting the unauthorised release of DNA material which could be used for testing. In the UK, meanwhile, DNA testing looks set to become the norm very quickly. These weaknesses, coupled with the unseemly rush towards DNA testing, suggest something needs to be done fast to ensure that civil liberties are protected.
The human genome project is progressing stunningly well, so it seems likely that the whole of the DNA structure will be identified and mapped within ten years. Such advances are impressive, as are the benefits of such research. As usual, however, the protective measures against the misuse of such information lag behind the research. There will be an inevitable period where, in certain areas, Joe Public is at a disadvantage relative to the early adopters of the new technology. One such area looks set to be law making itself.
Several questions arise. As with the unauthorised taking of fingerprints, can officers legally take tissue samples without consent? As with the debate over ID cards, would people refusing to participate be treated differently from those who consent? As the capabilities of DNA testing are developed, exactly what information should be made available to the police, for example genetic defects or genes which indicate violent or other tendencies? This is not meant to be a Luddite diatribe, but the fact remains that many, many questions remain unanswered, and all the while technologies are improving and policing organisations are looking at how they might benefit.
It is high time to open these questions up to broader debate. Police authorities and law enforcement organisations worldwide would seem to be ahead of the wider public on these issues; the public would do well to catch up.
(First published 22 November 1999)
11-22 – Real Time Linux to become a Reality
Real Time Linux to become a Reality
For Linux to take over the world, it must prove itself viable in several as-yet uncharted territories, including the desktop, the enterprise server and the embedded platform. Whilst it still has a way to go in all these areas, an announcement from Lynx suggests that a solution for the embedded platform may be nearing readiness. Lynx expects to release BlueCat Linux by June of next year. This version of Linux will be able to support embedded and hard real-time applications (as opposed to pseudo-real-time applications) and will be released as open source.
What is significant about this announcement is that Lynx is already a vendor of a POSIX-compliant, real-time operating system and so clearly has some experience to bring to the party. Secondly, the whole issue of embedded systems raises the question of “What’s in the box”. Embedded systems are not visible to the user of the device concerned, and it is this user that is the most susceptible to marketing. Selection of an embedded OS is based on a number of factors, including functionality, manufacturer credibility and access to trained staff. If Lynx can successfully prove itself relative to these three factors then it may have enough to go up against the more marketing-oriented approaches favoured by non-Linux vendors.
It is both unlikely and unreasonable that Linux would take over the world. Microsoft have already proved beyond doubt that the concept of a one-size-fits-all operating system is fatally flawed. However Linux has already proved itself as a capable, general-purpose operating system. If Lynx can demonstrate its credentials in the embedded market, this will further reinforce acceptance of open source Linux in other areas.
(First published 22 November 1999)
11-22 – Silver Surfers fuel UK Net growth
Silver Surfers fuel UK Net growth
It is generally agreed that surveys should be taken with a pinch of salt, particularly when commissioned by organisations standing to gain from the results. As results from three surveys appear on the same day, it is interesting to compare the findings.
First off, High Street bank Barclays commissioned a survey from NOP, which concluded that 41% of people surveyed were willing to switch to Internet banking within a year. It also noted that 36% of over-65s said they would like to bank online.
Second, a survey by Continental Research found that, although 11.1 million people use the Internet regularly (once a month or more), only 1% of all retail purchases are being made over the Web. It also found that average and lower-income families were being left out.
Third, a Which?Online report concluded that more than a million people over the age of 55 are regularly using the internet, primarily as an information source but also to chat.
Of course these are selected highlights of summaries of the surveys, however it has to be said that the different results bear each other out. Particular interest has been garnered by the “Silver Surfer” category. To date, it seems that insufficient attention has been paid to the group of people with more time, and maybe more wisdom, on their hands. Here is proof, if any were necessary, that the Internet is more than a technology for the information-hungry whizz kids of our time.
For commercial organisations such as banks and retailers, the raison d’être of the Internet is to make money. While the current reality is that 99% of consumer cash is still being bagged at traditional outlets, the potential exists for a substantial percentage to be transferred to Internet-based transactions. Buoyed by the proof that the Web exists for more than just the propeller-heads, companies are trying every trick in the book (and plenty of new ones) in their attempts to garner a better share of this market. This is currently geared around Web sites: fresh from another round of high street branch closures, for example, Barclays has announced new features to its online banking service. Sainsbury’s, the beleaguered supermarket chain, has also stated its wish to “launch, develop and own the best portals” in the food, drink, home and garden categories of Internet shopping.
The fact remains that nobody really knows what will trigger the consumer majority to start shopping online en masse. An exponential curve is assumed, but this will not happen without significant effort on the part of the commercial organisations to make it so. It is unlikely, for example, that the PC-based Web site is the epitome of what can be done with the Internet. Bandwidth is still a problem which is likely to limit mass market adoption for another couple of years at least. Finally, the vision of most organisations remains sadly limited – Barclays’ list of online banking features falls well short of the blueprint for online banking described by Patricia Seybold in her book “Customers.com”.
Overall we have a way to go yet. Initial excitement and take-up by the hard core will undoubtedly give way to a wider adoption, but both technology and functionality need to improve substantially before this will happen.
One last word about “average and lower-income families.” Various initiatives have been announced, such as Gordon Brown’s PC rental scheme, but they all ignore one significant fact. The Internet is as much about reducing costs as it is about increasing profits. Central government will, sooner or later, work out how much it could save by providing Internet access to all. If this is coupled with the ongoing cost reductions in equipment and connections, then those on lower incomes should be able to take their place in the early majority, when it comes.
(First published 22 November 1999)
11-23 – Bluetooth is not going to change the local networking landscape… yet
Bluetooth is not going to change the local networking landscape… yet
At least that was the conclusion of major vendors of the technology, blaming software and interoperability issues for delays in bringing products to market. Despite this initial disappointment, companies are still moving full steam ahead with Bluetooth. So what is it, and how will it affect end users?
Bluetooth is a short-range wireless standard designed for communications between electronic equipment. The standard has been agreed by the majority of hardware manufacturers (including Ericsson, Intel, Lucent, Nokia, Toshiba, Philips and Sony) and software vendors (including Microsoft). It therefore looks set to exist, which is already a major hurdle cleared for any technology. A number of applications have already been indicated for Bluetooth, including its use as a replacement for wires or InfraRed connections between devices such as mobile phones and PDAs. It has quickly become clear that the potential for this technology goes way beyond this limited view, however. Home networking has now been recognised as a valid target for Bluetooth, not for fridges and toasters (which, let’s face it, have met with significant disbelief) but for HiFi units, televisions and other electronic devices used around the home. The second area which is likely to receive attention is the Small Office/Home Office environment, as Bluetooth demonstrates its capability as a replacement for local networking and the general proliferation of wires in these environments.
There is one significant element of this development which makes Bluetooth an inevitability: it is targeted at equipment manufacturers and hence is likely to be included on any electronic platform that is suited for the purpose. Like the InfraRed ports that the technology is designed to replace, Bluetooth will be an integral part of the device. The difference is that it will actually be used.
Software may be holding up the development of Bluetooth devices, and it is software which will slow down its acceptance. Higher-level mechanisms such as Jini or Universal Plug and Play are still necessary to enable devices to identify each other and to open secure communications channels. Without these, Bluetooth will still be appropriate for a limited range of applications, but will be prevented from achieving its full potential.
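To make this concrete, the following is a conceptual sketch of the kind of discovery handshake that a mechanism such as Jini or Universal Plug and Play layers on top of a raw transport like Bluetooth. The class names and registry are invented purely for illustration and bear no relation to any vendor’s actual API.

class Device:
    def __init__(self, name, services):
        self.name = name
        self.services = services            # e.g. {"print": "lpr://printer"}

    def advertise(self):
        # A real stack would broadcast this over the radio link.
        return {"device": self.name, "services": list(self.services)}

class Registry:
    """Stands in for the lookup service that lets devices find each other."""
    def __init__(self):
        self.entries = []

    def register(self, advertisement):
        self.entries.append(advertisement)

    def find(self, wanted):
        return [e["device"] for e in self.entries if wanted in e["services"]]

registry = Registry()
registry.register(Device("phone", {"sync": "obex://phone"}).advertise())
registry.register(Device("printer", {"print": "lpr://printer"}).advertise())
print(registry.find("print"))               # ['printer'] – the phone now knows where to open a channel

Without an agreed layer of this sort, each pair of devices has to be taught about the other in advance, which is exactly the limitation described above.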
(First published 23 November 1999)
11-23 – Chip development goes up a level or two
Chip development goes up a level or two
Despite fears that Moore’s law is running out of steam, recent semiconductor announcements suggest that there’s life in the old technology yet. First off, scientists at the University of California, Berkeley have developed a transistor that is four hundred times smaller than those currently available to chip designers. Second, the Semiconductor Industry Alliance (SIA) released its technology roadmap, which indicated that future gains in scale would be based on integrating multiple silicon and packaging technologies on a single chip. Finally, last week IBM announced how, using developments in chip layering, it had developed transistors which stood vertically on the chip’s surface, rather than lying flat.
All these developments add up to the suggestion of a rosy future for integrated circuits. Given that the Berkeley development will not be patented, so as to enable “the widest possible usage,” and the existence of the SIA as a cross-industry body of competing vendors, the future looks even brighter. Most significant is the coupling of the IBM announcement with the plans of the SIA, as together these suggest the integration of many device types, including displays, processors and audio equipment, onto a multi-layered, multi-component, single chip foundation.
It is unclear how much potential for innovation still exists in silicon, but these developments should keep the ball rolling for a good five years, by which time research in other areas (for example, nanotechnology or Hitachi’s memory chip research) should be starting to bear fruit. Again, integration is the key: by the time the new generation of devices starts to roll off the design station, they will be able to slot right into the connectivity and packaging developed for the older generations of integrated circuits.
(First published 23 November 1999)
11-23 – IP becomes carrier protocol of choice
IP becomes carrier protocol of choice
Some interesting figures have come out of BT, which is investing half a billion pounds in its IP networking backbone. BT will be increasing its IP points of presence from 14 to over 100 nationally, and intends to implement the separation of Internet from voice traffic at local exchanges.
The trend away from voice traffic towards Internet traffic is visible across the carrier marketplace. Nortel, for example, announced recently that it would be integrating IP routing capability into its transmission devices, with two effects: the first was that separate IP routers would no longer be necessary, and the second was that all traffic, both voice and data, could be transmitted using IP. Another example is GTE, which recently announced a dedicated Internet for voice traffic to give customers the cost reductions of IP coupled with the performance guarantees of a dedicated network.
By bolstering its Internet access capability, in the short term BT are taking a strain off their voice backbone. However they are also preparing themselves for the general migration to IP traffic from voice traffic. It has generally been agreed that IP will be the protocol for all traffic types which use the global carrier infrastructures, voice, data or otherwise. Remaining questions are more a matter of “when” and “how” than “what if”.
Given BT’s monopoly position in the UK, the announcement has some interesting impacts on dial-up Internet users. First, consider the modem link to the local exchange. Modems are required to convert digital data to within the ranges required for the voice carrier network. If BT are demodulating and sending such data directly onto the Internet from the local exchange, it becomes questionable whether modems are necessary at all compared to DSL-based mechanisms. Second, the question arises as to whether BT are exploiting their privileged position as “owner” of the local exchange by extending Internet access to the local loop, thus offering potentially better performance to their own subscribers than to users of other ISPs. This remains to be seen, as it is quite likely that other carriers’ equipment will also be installed in local exchanges.
Overall, announcements such as this are steps along the way to the Internet becoming the global communications network for all media including broadcast media. Local and global carriers need to position themselves for this inevitability, whilst being careful to not overtly exploit the advantages of their incumbent position.
(First published 23 November 1999)
11-24 – Can Linux workstations change SGI’s fortunes?
Can Linux workstations change SGI’s fortunes?
Desktop Linux may not be a reality just yet, but as far as Silicon Graphics is concerned it is going to have to be as the company is staking its future on it. SGI recently admitted that it had failed to find a buyer for its Intel-based workstation business. Fresh on the heels of its announcement of an attempted sell-off of its Cray supercomputer product line, the company could really have done without more bad news like this.
Silicon Graphics lost the plot a long time ago, admit sources from inside the company. Faced with ever-decreasing sales of its proprietary, Unix-based workstations and servers, the company launched itself into high-end NT-based workstations based around a customised hardware architecture. In doing so it fundamentally misjudged the workings of the PC market, giving itself insufficient differentiation from other vendors to justify the price premium. It also continued to focus on its traditional markets when it should have been aiming squarely at the corporate mass market.
The newest desktop strategy from SGI is to concentrate on the provision of Linux-based PCs. The company recognises that application support is still weak, but is prepared to take the short-term risk. SGI also intends to open up parts of its OpenGL graphics code to the Linux community.
Clearly, though, Silicon Graphics are adopting Linux because they have very few other directions in which they can go. What is interesting is that while it remains unclear whether the strategy will succeed, it is clear that it gives a general boost to the concept of desktop Linux. The company can exploit its desktop Unix heritage to ensure its Linux workstations have the usability and ease of administration required for the desktop. It is highly likely that SGI will focus on the performance of desktop Linux, again an area which will benefit the movement as a whole. The biggest unresolved issue remains application availability, and it is possible that SGI will be able to leverage its relationships with application providers to help resolve this.
There will be a future for desktop Linux. It remains to be seen whether the same is true for Silicon Graphics.
(First published 24 November 1999)
11-24 – Directory services lead Novell out of the mire
Directory services lead Novell out of the mire
The turnaround in Novell’s fortunes can now be said to be complete. In a year the share price has doubled, the growth rates for directory-based products have doubled and the company’s profits have virtually doubled. The company has a clear strategy, popular products and a fast-growing consultancy arm. All change for Novell, which many were consigning to the junk heap of IT history only a few years ago.
It seems only a short while ago that Novell was bullishly acquiring companies and products such as Tuxedo, WordPerfect and UnixWare (Novell’s name for it) in its drive to compete directly with Microsoft. Within a year it became clear that the strategy was failing, and measures were taken, including the sell-off of all the products that the company had been so feverishly collecting. Drastic action was necessary as the share price fell like a stone and the company was derided for a lack of focus or coherence. The company decided to fix its sights on NetWare and its directory service product, NDS.
With NDS, Novell has set its sights on being the directory of the Web. Recent announcements concerning the open source release of parts of NDS are most likely targeted at supporting this ambition. The company has been remarkably successful so far, down to its own marketing success and, it has to be said, Microsoft’s inability to bring its much-touted Active Directory to market. Originally presented in 1997, AD will now form part of Windows 2000, which will not be released until February of next year.
Companies rise, and so can they fall. Over the next few years Novell still faces an uphill struggle before it can truly say it is at the top of the directory pile. Microsoft will undoubtedly attempt to steal the company’s flag once Windows 2000 has been released, and this is a threat which Novell will be taking most seriously. Even so, there is still a huge market for directory services which, based on current performance, the company stands every chance of capturing.
(First published 24 November 1999)
11-24 – Microsoft: the lighting of the long fuse
Microsoft: the lighting of the long fuse
The brewing legal storm for Microsoft has remarkable similarities to that already experienced by the tobacco industry. Class action suits have already been filed in Alabama, Louisiana, California and Ohio and there is every chance that more will follow. The probability is that the different cases will be consolidated, suggesting another legal battle which will cause more than a few headaches to the software giant.
There are two main differences from the tobacco cases, neither of which is good news for Microsoft. The first is that the defendant is a single company rather than an industry – yes, individual tobacco companies were targeted, but this still led to a dilution of the overall thrust. The second is that there are no vested interests comparable to those in the tobacco debate, which pitched smokers against non-smokers as much as individuals against companies. In the Microsoft case, the situation is much simpler – individuals and organisations want some of their money back because they feel that they have been overcharged. This isn’t about proving risks to health or substantiating research; this question has a yes-no answer which many feel has already been addressed by Judge Jackson.
Clearly a huge amount hinges on the final conclusions of the antitrust case, which may not be known until next year. Should the parties decide to settle then a final answer may not be reached, meaning that the class action suits will have to start virtually from scratch. In the meantime, investors are hanging on as long as possible to what is clearly a good earner, monopoly or no. It could be argued that settlement acts only in the interests of Microsoft as it does not resolve the fundamental issues of the case. Should the Judge reach a final ruling then Microsoft will appeal, taking the case to the Supreme Court if necessary. In the meantime, both the company and its investors can continue to milk the cash cow.
It is unlikely that other legal disputes will be able to reach any conclusion prior to the closure of the antitrust case. US lawyers are stunningly persistent when they smell money, however, and Microsoft cannot shake off the powerful scent. Legal organisations will try every possible approach to getting their clients’ hands on the cash: it may take several years but sooner or later, like with the tobacco suits, a case will succeed. Once the breach is opened, investors will hang on as long as they dare before they take the money and run.
(First published 24 November 1999)
11-25 – Chip wars come to a head, just in time
Chip wars come to a head, just in time
The leapfrogging of AMD and Intel towards the elusive goal of a 1GHz 32-bit processor looks close to reaching a conclusion. According to The Register, Intel will use a semiconductor conference in early February to demonstrate such a chip. However if rumours from AMD are true, Intel may find themselves pipped to the post.
The ongoing thrusts and parries of the two organisations are having several effects. First, AMD has managed to re-establish itself as a processor provider worth considering. A number of the major PC suppliers, including Dell, are starting to take chips from AMD. Gateway withdrew from using the company, but there is information to suggest that the two organisations may be re-opening the channel. Success breeds success, and AMD now look like they have achieved the kind of critical mass that is necessary to keep up with, or even overtake, the erstwhile leader. The second knock-on of the knockabout has been that customers have benefited greatly. Prices have been forced down while new products have been released at an increasing rate.
The question arises as to whether this pace of innovation can be sustained. Or does it? Given the intensity of the x86 wars, it is possible to become oblivious to the other changes that are occurring in the IT industry. The fact is that we are on the brink of an explosion of diverse and innovative devices, with interface compatibility but a broad-ranging base of software and hardware. All of the main players are recognising this, with Dell releasing its WebPC next week, Microsoft changing its mantra to “great software… on any device” and Intel making a broad range of investments to give itself a lead in the device explosion.
The next year will prove most stimulating; for one thing, it should spell the end of the processor wars which have seen off a number of vendors, including NatSemi, Cyrix/IBM and Motorola to name but a few. The closing of this chapter will enable the other players to rejoin the fray, and innovation and partnerships will prove more important than processor speed and MIPS. An important milestone it is, but the success story of the first past the 1GHz post will quickly leave the marketing collateral and enter the annals of IT history.
(First published 25 November 1999)
11-25 – Compaq provides litmus test for OS debate
Compaq provides litmus test for OS debate
If ever there was a single company which could illustrate the complexities of the current Operating System debate, then it would be Compaq. The company is pulling back from NT on Alpha and is offering existing customers an array of options for migration or replacement. Customers can stay on Alpha and choose a free upgrade to OpenVMS, Tru64 Unix or (already free) Linux, or they can get 90% off the cost of a new Intel/NT system.
There are two things we can get out of this. The first is that Compaq, having done considerable homework, is unable to reach any conclusion about relative benefits or customer preferences for the four platforms. Of course, some are perceived as “mainstream,” “niche” or “just arrived,” but overall the company has been forced (though it is unlikely that Compaq would use that word) to offer a choice. This paints a considerably different picture from that suggested earlier this year, which saw NT/Intel and Tandem being the two platforms that the company would promote above all others.
The second point is that the results of the migration, which should come to light over the next six months, will provide a deep insight into the state of the operating system market. It is tempting to believe that customers will choose a migration to another NT platform as they already have skills in this area and given the popularity of the OS. The migration comes with a not insignificant hardware cost, however, and for various other reasons customers may be tempted to keep their existing hardware and change the operating system.
It is to be assumed that there will be additional costs associated with application upgrades, plus the labour and downtime costs of the migration itself. It seems unlikely that many will choose this route, unless they are dissatisfied with NT. Of those that do, maybe the most interesting data to come out will be the proportion of customers that choose Tru64 Unix over Linux. Clearly Compaq will be pushing Tru64 Unix, in which it has a major stake, hard. Success for Linux could well force new changes on the whole Unix OS landscape.
(First published 25 November 1999)
11-25 – Zero cost IT – only…
Zero cost IT – only…
The current debate about costs of IT equipment and services illustrates the stage we have reached in the technology revolution. Microsoft, for example, have been accused of price fixing; Oracle of abusing their position when setting license fees. Now, on Silicon News, it is claimed that IT services companies are overcharging their customers. So – what is going on?
What we have here is the simplest value equation. In the past there has been a huge perceived value attached to IT. Relative to the manual systems and processes that mainframe processing replaced, it was possible to justify spending vast amounts on such facilities. Similarly, new markets developed (and continue to develop) in parallel with technology advances – in the early stages, the price premium is set based on what the end customer is prepared to pay, and in order to recoup the costs of research and development by the vendor. Alongside all of this has developed a need for services: experts in the new technologies, or people with past experience of making the necessary changes to organisational and technical infrastructures. Companies have been spurred on by the carrot of reaching new markets or the stick of keeping up with the competition. The Internet has resulted in a new wave of this phenomenon, with new products at premium costs and newly trained “experts” in the field.
Each new leap of technology will cause organisations to reappraise how they operate and what products and services they offer. Where product and service suppliers have come unstuck is down to one of two reasons. Either they failed to deliver, or they maintained the price premium long after the perception of delivered value had changed.
Failure to deliver is perhaps the most obvious and hence the easiest to describe and comment upon. Products may fail for a variety of reasons, from low quality to over-hyped functionality. Services may also fail, for largely the same reasons. At the earlier stages of a new technology adoption, partial failure matters less as anything is better than missing out on the race. Later, though, the picture is different: late adopters rarely stand for shoddy products or performance.
Price maintenance is harder to judge because it is based on perception. Competition in the hardware market has forced manufacturers to push the bangs-per-buck ratio ever higher. The copybook of many software vendors is not quite so clean: the price of Microsoft operating systems, for example, has gone up rather than down over the past five years. Competition in the hardware space is not matched among software vendors, who are often rightly accused of being less innovative and more grasping than their hardware counterparts. Initial demand for point services (for example, how to tune financial packages for best results) has commanded staggering rates, but this is to be expected. Services vendors often justify the maintenance of high prices based on the principle that “it was expensive, therefore it must have been right”. However, there is little or no proof of any correlation between the two.
There will be several technology leaps over the next few years, in particular the current eBusiness movement, the revolution in broadband communications, device-based computing and Application Service Provision (ASPs). The costs and benefits of each of these are currently still being understood but it is to be expected that the costs of products and services will be pitched relative to the perceived value of the prospective user base. Older technologies, already perceived as commodities and whose R&D costs have already been recouped, should be priced at a substantially lower level.
(First published 25 November 1999)
December 1999
12-07 – Microsoft: Java comatose, W2K holds key to bypass operation
Microsoft: Java comatose, W2K holds key to bypass operation
Java, Java everywhere, such was the vision of Sun when it first launched the language. Java has gone through several incarnations, initially targeting embedded systems and then aimed squarely at knocking Windows off its monopoly perch. These days the battle cry is for the enterprise – a raft of powerful players such as BEA and IBM have been lining up behind the concept of Enterprise Java Beans (EJBs) running on application servers. But they are not there yet, according to Microsoft, who is seizing the opportunity to get to the higher ground.
To bolster its armoury against Java, Microsoft commissioned a survey of 3,000 developers across Europe from independent market research company Romtec. The findings make interesting reading, particularly as they compare usage of Microsoft technologies and Java/Corba technologies. Across the board of different types of developers, Microsoft COM/DCOM technologies were shown to be holding their own. A growing number of companies, currently running at between 25 and 40% (depending on sector), were shown to be using these components. Users of Java Beans, however, lay at around 5-10%, a figure that has remained relatively constant over the past three years. And as for Enterprise Java Beans, these barely registered on the scale.
Microsoft advocates will be encouraged by the findings. COM is growing, Java is stuck in the single figure percentage points. Of course, it is likely that Java flag-wavers will dispute the findings – either rebutting the figures with findings of their own, or bringing up the point that these are, after all, statistics which by nature cannot be trusted. Indeed, Bloor Research published a Java survey of its own earlier this year, which conflicts with the Romtec survey. But it is not the intention here to dispute who is right or wrong. The question is – what if Microsoft are perceived to be right?
There was one very interesting fact that came out of the Romtec survey: year on year, a substantially greater number of development houses planned to adopt Java-related technologies in the following year. Year on year, it would appear, the adoption of Java may have been put off. Anyone who has worked in a development shop will know the difficulties of delivering existing applications, and will recognise this conflict between hope and reality.
By focusing their efforts on enterprise applications, Microsoft may well be cutting Java off at the pass. There is a general feeling that Windows NT and its associated technologies have proved insufficient for enterprise applications, and that companies have preferred to adopt platforms from the likes of Sun, IBM and BEA. Java has a reputation for being slow, but its suppliers build on a tradition of delivering true enterprise environments, hence the expectation for EJB application servers is already positive. Soon, however, Windows 2000 will be trotted out of the Microsoft stable. All signs are that it is performant, available and, most of all, scalable enough for the enterprise.
If Microsoft can hang in there for another three months, they will be in a position to influence the “real soon now” school of Java adoption. That is, those development shops that would love to adopt Java if only they could find the time, and a suitable project, to do so. With surveys such as the one quoted here, the software giant will explain how it has the skills base and the incumbent position already. “You’re right,” development managers will say. “Why change – after all, the MS environment is now mature.” Java shops and developers alike may throw their arms up in horror, bemoaning the closed environment, the upward-spiralling license costs and, most of all, the weakness of fellow developers who fail to adopt the J-vision and succeed only in lining the pockets of the world’s richest man.
So – is this truth or fiction? Ultimately it doesn’t matter. In this topsy-turvy, technological world Microsoft have shown in the past that marketing skill is at least as important as good product. Even as the jaws of the US justice system attempt to close on the software giant, Microsoft may demonstrate yet again that its message making proves too much for the evangelism of the other camps.
(First published 7 December 1999)
12-07 – Nokia forges Europe’s mobile future, threatens the silicon boy wonders
Nokia forges Europe’s mobile future, threatens the silicon boy wonders
Nokia became Europe’s most valuable company yesterday, as its net worth pipped BP Amoco at the post. Not bad for a Nordic TV manufacturer. Almost as staggering is its prediction that cellular subscribers will triple to one billion over the next three years, based on the adoption of the cellular Internet, where mobile phone users access Internet services through the Wireless Application Protocol (WAP).
In case it wasn’t recognised already, Nokia looks set to be setting a number of standards, not only for mobile phones but also for PDAs and ultimately for computing devices. Why? Because they are all part of the same infrastructure. This is why Nokia is so fascinating. Consider:
- With Ericsson and Motorola, Nokia has brought WAP to market without a glitch, to the demise of 3Com’s WebClipping and Microsoft’s MicroBrowser technology
- With Palm and Symbian, Nokia has settled on the next generation of OS for its devices, sidelining Windows CE and giving both Palm and Symbian every reason to be cheerful
- Again with Ericsson and others, Nokia has been instrumental in the success of Bluetooth which (despite teething problems – no pun intended) looks set to bypass other wireless local networking protocols, for example from the likes of Apple.
Just as information is power, so it is that in the information industry, he who sets the standards rules the world. While the gorillas of the IT industry have been fighting trench warfare in the standards game, the mobile manufacturers have been co-ordinating their efforts in far more gentlemanly co-opetition. These companies are fast becoming some of the most powerful in the world, and their size, and their ability to co-ordinate efforts, gives them the potential to trounce the bickering upstarts of the silicon age. As the worlds of IT and communications continue to converge, the next standards battles will give IT incumbents such as Microsoft, Sun and Oracle a true run for their money.
(First published 7 December 1999)
12-07 – Where lies the future of Web advertising?
Where lies the future of Web advertising?
Hmmm… advertising, advertising, advertising. As illustrated by the launch of the latest Real Networks’ server software, this subject continues to garner attention, particularly in relation to the Web both as a channel for and a focus of advertising. It is worth highlighting certain issues that are coming to light based on the nature of the Web as an interactive, ubiquitous medium.
The interactive nature of the Web is only starting to be exploited as far as advertising is concerned. Banner advertising is the most visible form of display, but (and this is surely common knowledge now) organisations can pay to have their names appear at the top of Yahoo’s search list. Adverts can be couched using the traditional methods of sponsorship, but it does not stop there – competitions, surveys and partnerships between organisations are all becoming valid forms of getting an organisation’s name and message across.
It is already common (but not common enough, maybe) for adverts to be targeted depending on a user’s country of origin, stated preferences or logged behaviour. Advertising is part of a pre-sales process which may be managed using Web-based applications: advertising systems will be linked with marketing systems and relationship management facilities to enable a smooth pull-through of the prospective customer. Of course, with interaction, there is risk: it comes as no surprise that advertisers such as Conducent are (probably illegally) collecting details of users’ computers, or that Amazon intercepted customer emails. Such is the Web.
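As a simple illustration of what such targeting amounts to in practice, consider the rule-based ad selection sketched below. The campaign data and visitor profiles are invented for the purpose and are not drawn from any real system.

campaigns = [
    {"ad": "UK broadband offer", "country": "UK", "interest": None},
    {"ad": "Golf holidays",      "country": None, "interest": "golf"},
    {"ad": "Generic banner",     "country": None, "interest": None},
]

def select_ad(profile):
    """Return the first campaign whose rules match the visitor's profile."""
    for c in campaigns:
        if c["country"] and c["country"] != profile.get("country"):
            continue
        if c["interest"] and c["interest"] not in profile.get("interests", []):
            continue
        return c["ad"]

print(select_ad({"country": "UK", "interests": ["golf"]}))   # UK broadband offer
print(select_ad({"country": "FR", "interests": ["golf"]}))   # Golf holidays

Link the profile to a marketing database rather than a hand-built dictionary and you have, in miniature, the pull-through described above.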
The second point is that the Internet is everywhere, or at least it will be. Despite continuing efforts to preserve existing channels of communication, eventually the Internet will replace the radio, the television and the telephone. Telecommunications providers have accepted this; cable companies have accepted this. This does not mean that traffic will move from one network to another, rather that all networks are becoming Internet-based. In the old world, there was the channel, such as the radio station, the billboard or the magazine, and there was the advert. In the new world, the distinction between channel and ad is no longer valid, as it is equally possible for an advertiser to host a radio service as for a radio site to host ads. On television, a click on the remote control will take the interested party through to the sponsor’s Web site.
For all of these reasons, it is logical to assume that advertising, as a pure form of communication, is on the way out. Traditionally the ad has had a life of its own but it will increasingly be tied into other forms of marketing communication that eventually lead to a sale. The advert is an entry point into a process: if it is seen as such, it can be better targeted and more closely followed. As punters we should be glad, as companies reduce the splattering of irrelevant banner ads and turn their attention to focused direct marketing. The downside is, of course, that such organisations will be more in tune with our needs and weaknesses, and as such are far more likely to succeed in selling to us.
(First published 7 December 1999)
12-22 – 3Com – first the strategy, the rest will follow
3Com – first the strategy, the rest will follow
When 3Com bought US Robotics in 1997, it was unsure what to do with the PalmPilot product. Rumour has it that the decision was made for the company as 3Com executives found they could not do without the device, which became an essential element of meeting room apparel within a short space of time. Realising it was on to a winner, the company hung on to the Palm product line, a decision which has paid huge dividends for the company. Last week 3Com announced it had filed to sell shares in Palm Computing, in an IPO which is generating great interest across the board. The decision has been seen as good for Palm, but where does it leave 3Com?
Let’s look at 3Com’s current position. The company announced its second quarter results two days ago, seeing a fall in both earnings and revenues of just over 4%. According to the financial analysts, this was largely down to a 20% drop in sales of its networking equipment products and modems. Also, despite beating analysts’ expectations on earnings per share, 3Com warned that third quarter earnings would fall to 24 cents per share, 8 cents below predictions. Observers are using terms like “stumbling,” “struggling” and “challenged.” The sell-off of Palm is seen as a positive move as it will enable 3Com to focus on its core business, but some say that it is a lack of focus in this area which has dogged 3Com from the start.
Despite all this, 3Com do seem to be getting it together. The company has launched its e-networks strategy, in which it positions itself as a provider of core building blocks for the converged voice, video and data markets. This move is not just about networking equipment – applications software and services will also play a big part in the overall picture. The recent $100M stake that the company took in wireless applications company USWeb/CKS, as well as the positioning and feature sets of products like its NBX family, are illustrative of how 3Com is moving forward with this strategy.
3Com is moving away from battles it knows it cannot fight, such as sales of “pure” networking equipment. Instead it is setting its sights on the new frontiers of convergent technologies, applications and services. While this new landscape shows all the signs of offering lucrative opportunities, which the company is perfectly capable of exploiting, it is still very much in the future. It will remain so until bandwidth issues have been resolved, and standards and tools for application communications, security, directory services and management are not only set but also adopted across the technology industries. This presents quite a barrier to be overcome: for 3Com’s sake it had better happen sooner, rather than later.
(First published 22 December 1999)
12-22 – Novell bets its future on ASPs
Novell bets its future on ASPs
Directories are dull, but not to Novell as the company is betting its future on them. The future of the company hangs in the balance – if it gets it wrong, it could be consigned to IT history. If right, it could sweep the landscape of IT like a forest fire. The question is, which is it? And the answer may be found in Application Service Providers, or ASPs.
In our opinion, the arrival of broadband communications will signal new models for using IT. ASPs are an indication of the way things will go, with the rental of applications which are accessed over the Web. However this model of ASPs is just the beginning – many different types of service, including communications services, information feeds, application services and business services, will be integrated and provided to businesses and consumers alike. This is the vision, but it is currently hampered not only by bandwidth constraints, but also by a lack of a set of facilities without which the ASP model cannot function. These are billing, management, security and, of course, directory services.
Directories will play a central role in the service provision model, as they will hold information about all the who, what and where of the service infrastructure. With products like Novell’s NDS the mechanisms for directory already exist; however they have not yet been adopted on a sufficiently widespread basis. Novell is betting that they will, and when companies start to pick a product, it wants NDS to be the de facto choice. This is why Novell have started giving away sections of NDS to the open source community, hoping that this will speed its adoption.
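For the uninitiated, the sketch below illustrates the sort of “who, what and where” lookup a directory answers for services and administrators. The entries and attribute names are invented for illustration; this is not NDS’s (or any LDAP server’s) real schema or API.

directory = {
    "cn=billing,ou=services,o=asp":   {"host": "10.0.0.12", "port": 8443, "owner": "ops"},
    "cn=mailstore,ou=services,o=asp": {"host": "10.0.0.20", "port": 993,  "owner": "ops"},
    "cn=jbloggs,ou=users,o=asp":      {"mail": "j.bloggs@example.com", "quota": "50MB"},
}

def lookup(name):
    """A service (or administrator) asks the directory where something lives and who owns it."""
    return directory.get(name, {})

# An application server finding the billing service it should call:
print(lookup("cn=billing,ou=services,o=asp")["host"])    # 10.0.0.12

Replace the dictionary with a replicated, access-controlled store spanning thousands of servers and users, and the value of a product like NDS to a service provider becomes clear.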
What could go wrong for Novell? In a word, Microsoft. Active Directory is currently waiting in the wings for its moment of glory; however, this will come in the next few months. In the meantime Novell are attempting to get as big a head start as possible. With NDS release 8, the company has already established a reputation for the product as stable, performant and well-supported by the growing ranks of qualified administrators. This week’s announcement by Information Week that NDS 8 would be its product of the year can only have been icing on the cake for Novell, not to mention the recently brokered deal with CNN Interactive. Novell’s biggest threat may also prove to have too many other battles to fight, particularly as portability across many platforms is a clear differentiator for NDS over Active Directory.
Given the way IT is going, it looks like Novell have read the runes correctly. The company is positioning its flagship product at the heart of the Internet, and it will take a substantial knock to remove it once it gets established. ASPs may still have some way to go, but when they arrive, Novell will already be there.
(First published 22 December 1999)
12-22 – Y2K – believe us, we’d rather be wrong
Y2K – believe us, we’d rather be wrong
So that we can keep our last missive before Christmas full of seasonal cheer and good tidings, we thought we’d better round up the doom and gloom stuff on the penultimate news day for IT-Director.com before the Millennium. First off, is Y2K fact or fiction? Let’s look at some recent news stories, which may help you make up your minds.
- The US State Department announced on Monday that it would suspend visa processing at embassies worldwide for the first two weeks of January. The reason cited was “Y2K issues.”
- On December 13, Wells Fargo & Co. sent 13,000 renewal notices to its customers with the year 1900 printed on them. Apparently the error was down to a supplier forgetting to change the date on its printing machines.
- In June, a Y2K test caused four million gallons of raw sewage to be spewed onto the streets of Van Nuys, Southern California.
- A food distributor has discovered that its computer system was tossing out items that expired in 2000, thinking they had sat on the shelf for the past 99 years.
These are a selection of real events. A more speculative report from BSC Consulting in the UK has said that the world is categorically not ready for the challenges of Y2K. The reason it gives is that, despite the huge efforts that have been made, “many governments and business programs have been a mixture of incompetence and complacency.” The report also points out that significant problems will be caused due to the interconnectedness of compliant countries with others which have not resolved the problems. All talk? Well, maybe not. The Y2K readiness survey recently published by the United Nations’ International Y2K Cooperation Center stated that “because some businesses, schools, and governments will not be sufficiently ready, they, and some of the people who depend on them, will suffer economic harm from Y2K-caused errors. These local impacts will range from minor inconveniences to the loss of jobs.” The IYCC’s overall assessment may be summarised as “many Y2K errors, moderate impact.” Unfortunately we are getting mixed messages from the IYCC, which reported more recently that “health care services worldwide are at the greatest risk for Y2K disruptions … hospitals and the health care sector in general have been the slowest worldwide to ready themselves for the new year, and … some nations may be overwhelmed by the sheer size of the problem.” This perspective sounds rather more alarming than the “moderate” point of view espoused by its Y2K readiness survey.
If these reports, produced by bodies with hands-on experience, are to be believed, then all will not be well in the Millennium. Even if the problem does not manifest itself in “complacent” Western countries, the IYCC warns in its survey of humanitarian crises which could have at least an indirect impact on just about everybody. Don’t get us wrong, we would love the harbingers of Y2K doom to be proved foolishly false; however, we also think that the current levels of complacency are as misplaced as the visions of Armageddon.
What will happen? Nobody knows… yet. But New Zealand, the first industrialised nation to see the dawn of the new Millennium, has volunteered to keep us informed. Updates on Y2K issues will appear on http://www.y2k.govt.nz/, as they occur. Also, the IY2KCC’s own Web site will host a by-country breakdown of Y2K status. The European Union web site may be found at http://www.ispo.cec.be/y2keuro/year2000.htm. Finally, for the sceptics out there who believe that either (a) there is not a problem, or (b) we can trust our institutions to resolve it, it might be worth looking at the paper by Peter De Jager on the origins of the Y2K problem, at http://www.sciam.com/1999/0199issue/0199dejager.html. Clearly, according to the paper, Y2K is a complex technical issue which is difficult to resolve even with the best will in the world.
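For anyone who has not seen the problem up close, the food distributor story above is the classic two-digit-year error in action. Here is a minimal sketch, assuming a system that stores years as just two digits; the function names are purely illustrative.

def expired(expiry_yy, current_yy):
    """Naive check using two-digit years."""
    return expiry_yy < current_yy

def expired_windowed(expiry_yy, current_yy, pivot=50):
    """One common fix: window two-digit years around a pivot before comparing."""
    expand = lambda yy: 1900 + yy if yy >= pivot else 2000 + yy
    return expand(expiry_yy) < expand(current_yy)

print(expired(0, 99))            # True  – stock expiring in "00" looks 99 years past its date in "99"
print(expired_windowed(0, 99))   # False – the windowed comparison gets it right

Trivial to fix in isolation; the difficulty, as De Jager points out, is finding every place such a comparison is buried in decades of code and data.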
Many years as a software tester are reason enough for me to err on the side of caution when it comes to computer-related issues. Y2K is probably the biggest “computer-related issue” that the world will see for some time, and it is certainly the most important to date. Doom or dawn, it would not do to move into the new Millennium with anything other than a note of caution. I won’t be bothering with bunkers, but I shall be watching things very carefully as the New Year starts.
(First published 22 December 1999)
2000
Posts from 2000.
January 2000
01-07 – AOL-TV brings forth embedded Linux – by stealth
AOL-TV brings forth embedded Linux – by stealth
Web TV has been long-predicted but, so far, has failed to stoke up the interest of the mass market. With the launch of AOL into the market, this looks set to change. And it looks like, where AOL goes, Linux is destined to follow.
At the Consumer Electronics Show in Las Vegas, AOL showed off the interactive TV services which it is to launch with satellite company DirecTV. The AOL solution is to use rebadged set-top boxes from Philips Electronics and from Hughes Network Systems. AOL intends to leverage the 20 million subscribers it already has, which (if it plays its cards right) should secure it a substantial proportion of the 30 million US households predicted to be using interactive TV by 2004.
All very well and good so far, but – where’s the Linux link? There is currently one element missing from AOL’s portfolio. The current devices do not yet have the capability of caching live TV broadcasts such that they can be paused, rewound or just watched at a later date. Both Philips and Hughes intend to include such facilities in the future, and are licensing technology from TiVo to this end. TiVo products are based on a PowerPC architecture running (you guessed it) an embedded Linux kernel.
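At its heart, this pause-and-rewind capability is a circular buffer: the box keeps writing the incoming broadcast to storage while playback trails behind. The sketch below is a conceptual illustration only, and not a description of TiVo’s actual design.

class LiveBuffer:
    def __init__(self, capacity):
        self.frames = [None] * capacity
        self.capacity = capacity
        self.write_pos = 0          # where the incoming broadcast is stored
        self.read_pos = 0           # where the viewer currently is

    def record(self, frame):
        self.frames[self.write_pos % self.capacity] = frame
        self.write_pos += 1

    def play(self):
        if self.read_pos < self.write_pos:
            frame = self.frames[self.read_pos % self.capacity]
            self.read_pos += 1
            return frame
        return None                 # caught up with the live broadcast

    def rewind(self, n):
        oldest = max(0, self.write_pos - self.capacity)
        self.read_pos = max(oldest, self.read_pos - n)

buf = LiveBuffer(capacity=5)
for f in ["f1", "f2", "f3"]:
    buf.record(f)                   # the broadcast keeps arriving
print(buf.play(), buf.play())       # f1 f2 – watching just behind live
buf.rewind(2)                       # the viewer skips back
print(buf.play(), buf.play())       # f1 f2 again

The interesting engineering is in doing this to disk at broadcast bit-rates, which is where an efficient embedded operating system earns its keep.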
Should AOL’s strategy go to plan, the positive knock-on effects for Linux could be staggering, with millions of consumers using devices with “Linux inside”. Consumers who turn to AOL are unlikely to care what operating system is running in the set-top box. Manufacturers and software vendors will care, indeed they will be watching with bated breath. Despite Linux’s strengths as an embedded operating system, its success in the consumer market will be more dependent on manufacturers’ preparedness to use the platform. Given a successful launch by AOL, other large-scale manufacturers will likely jump on the bandwagon: once established, there will be little that other companies can do to prevent the success of embedded Linux as a platform. Coupled with Intel’s recent announcements, it would appear that the future dominance of Linux on embedded devices is unassailable.
(First published 7 January 2000)
01-07 – Tokyo Joe suit starts the legal ball rolling
Tokyo Joe suit starts the legal ball rolling
“Moments of glory, desires, wealth, in the end all an illusion.” So proclaims the home page of Tokyo Joe, the darling of the Internet stock market. With four separate charges of fraud currently being filed against him, it would appear that the online stock guru is being hauled back to reality.
Amongst other things, the Securities and Exchange Commission (SEC) has accused Tokyo Joe of “scalping”. According to the SEC, TJ took advantage of his influential position to talk up the price of stocks, and he encouraged others to buy whilst selling his own share holdings. In addition he is said to have kept his own trading activities secret from his own clients and lied about his track record. Damaging charges indeed.
Whilst it is unclear whether Tokyo Joe is in fact guilty as charged, or indeed whether the charges can be made to stick, this case serves as a landmark for the currently under-regulated area of Internet stock trading. As this case indicates, the legal issues surrounding dealing in eStocks are still very much on the drawing board. This is acknowledged by TJ’s lawyer, who claims that his client had “legitimately taken advantage of legislative loopholes,” according to The Independent. Hence it is a good thing that the SEC has launched this, its first suit concerning Internet stock trading. Much of legal activity relies on precedent: while it is unclear what the outcome of this particular case will be, its results will provide at least a small stepping stone to future cases.
The Internet has left the law at a standing start. Its positively anarchistic nature can be said to have driven innovation in Web time, not to mention a potential new world of global free speech. Despite this, an appropriate legal framework is necessary to counter against the inevitable abuses of the system and its users. All in all, by raising the suit against Tokyo Joe, the SEC have taken an important step.
(First published 7 January 2000)
01-25 – At Linus’ Transmeta, Crusoe finds its Friday
At Linus’ Transmeta, Crusoe finds its Friday
If the recent surge of interest in handheld computing devices is anything to go by, it looks likely that Transmeta have found the goose that will lay their golden eggs. According to a survey by analysts NPD Intelect, December retail sales of palmtops grew by 169% compared to the same period in 1998. What is more, this success was not confined to the cheaper end of the scale. Sima Vasa, vice president of technology products for NPD Intelect, was quoted on News.com as saying that “Consumers are willing to pay a high price for the total mobility of personal digital assistants,” with the result that high-end devices were also seeing expanding sales. Overall, Palm was the winner, gaining an increasing share of the market.
All of this is pretty good news for Linus Torvalds’ new venture, Transmeta, which has barely been out of the news since it announced its Crusoe chip last week. Crusoe uses a technology known as Very Long Instruction Word (VLIW), which dramatically reduces the size of programs but requires software to prepare the instructions for the chip. This software-hardware combination is what is garnering the most interest: because it works in software, the code preparation or “code-morphing” also enables the Intel instruction set to be used without fear of infringing Intel’s hardware-based patents. Despite Transmeta’s clear targeting of Intel’s market share, analysts concur that the giant has little to fear just yet.
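The idea is easier to see in a heavily simplified sketch: software translates a stream of conventional instructions into fixed-width bundles which the hardware executes in one go. The instruction names and bundle size below are invented for illustration and bear no resemblance to Transmeta’s actual translator.

BUNDLE_SIZE = 4        # slots per very long instruction word
NOP = ("nop",)

def morph(instructions):
    """Pack translated instructions into fixed-width bundles, padding with no-ops."""
    bundles = []
    for i in range(0, len(instructions), BUNDLE_SIZE):
        bundle = list(instructions[i:i + BUNDLE_SIZE])
        bundle += [NOP] * (BUNDLE_SIZE - len(bundle))   # pad the final word
        bundles.append(tuple(bundle))
    return bundles

x86_like = [("mov", "eax", 1), ("add", "eax", 2), ("mov", "ebx", 3),
            ("add", "ebx", 4), ("jmp", "done")]

for word in morph(x86_like):
    print(word)
# Two full-width words, the second padded with no-ops. A real translator
# also reorders instructions and caches its translations.

Because the clever part lives in software, the underlying hardware can stay simple and frugal with power – which is precisely the pitch for handheld devices.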
There have been many projects researching how instructions would be better structured and fed to processors, but so far none have reached mass-market appeal. Sometimes things work because the time is right – consider the take-off of the Palm Pilot following many failed attempts by the competition. In Torvalds’ case, it is his personality that is creating the wave that he rides – when he first announced his intentions, few could resist seeing what the demi-penguin could come up with next.
On the subject of Linux, it would appear that the little man has not forgotten his offspring. Several extensions to the Linux kernel have been made to support the Crusoe chip, and Transmeta are fully behind efforts to bring embedded Linux-based devices to the market. This could raise interesting issues of conflict of interest – should Linus remain responsible for what goes into the kernel? We shall leave this debate for another day.
Crusoe chips, to be manufactured by IBM, are currently running at around the 500MHz mark but there is a 700MHz version on the horizon. The first customer for the chip is Diamond Multimedia, already noted as trendsetters through their launch of the Rio MP3 player a year ago. Diamond will be using the first chips off the production lines in their forthcoming WebPad device. These chips are reputed to support Linux only, so it looks like this could be the first example of a Linux handheld.
Overall Crusoe will give technology a boost by showing that other architectures are viable and, indeed, preferable when used inside devices. So – it’s good news for handhelds, good news for Palm, good news for Transmeta, and good news for Linux. So who is losing out? Well, all of these developments put the squeeze on any platform which is trying to get into the space. If press coverage is anything to go by, it looks like “Powered by Windows” may well become one of the forgotten taglines of early 2000.
(First published 25 January 2000)
01-25 – British Telecom – looped around a rock and a hard place
British Telecom – looped around a rock and a hard place
At last – the local loop looks like it is to be opened up to competition. A set of draft guidelines, published at http://www.oftel.gov.uk/competition/llu30300.htm, was released by Oftel on Friday, setting out the requirement for BT to open up competition on its “last mile” by June next year. 2001 could be a good year for telecommunications users in the UK, but maybe not such a good year for BT.
Essentially, this boils down to one of the few remaining areas in which BT can be said to hold a real monopoly. This is the wire between the local exchange and the end-user socket on the wall: the socket, the wire and the exchange are all currently owned, managed and charged for by BT. Plans to change this have been in train for some time, but a timetable has not been forthcoming. Not, that is, until now.
The effects of this change will be far-reaching. According to the Register, which very kindly summarised the main points of the guidelines (which, let’s face it, wouldn’t win top prize in a clear English competition):
“Among the conditions announced … is BT’s requirement to provide unbundled loops to other network operators, to permit the co-location of equipment at its local exchanges, and to provide any necessary services to open up the network to competition. It will also give Oftel the power to set the price for these services.”
Given the current challenges that BT is facing, from quarters such as free ISPs and government announcements about reducing telephone charges still further, this is one extra problem that BT could really do without. Inevitable it may be, but pleasant it is not. What is worse, the companies lining up to threaten BT’s monopoly position (from telcos like AT&T, cable providers such as NTL and ISPs like Alta Vista) are, in general, global players with few restrictions on what they can and can’t do. BT still faces a number of restrictions on its own practices: it is still unable to deploy cable services and, according to Oftel, is facing increasing pressure from the international calls market.
BT expressed fears about being bought following its recent share price falls – this may be the best bet for a company that is seeing its monopoly replaced by a regulatory framework which may leave it unable to compete. BT has been slow to move in the past, and its privatisation needed such a framework, but the world has moved on. The June 2001 date marks the end of an era for BT; it should also mark the beginning, with the company free to compete against some of the world’s largest communications companies. Even if its bonds are broken, these are battles which the company is not guaranteed to win.
(First published 25 January 2000)
01-25 – Linux consortium brings new weight to embedded debate
Linux consortium brings new weight to embedded debate
Fears about the fragmentation of Linux may well prove unfounded, if the goals of the newly formed Embedded Linux Consortium are realised. The alliance, which includes such big names as IBM, Motorola and Red Hat, has set its sights on the rapidly growing market for embedded devices such as Internet appliances, set top boxes and PDAs. The advantage of Linux – price – may well prove to be the clinching factor to ensure the success of the platform.
Why price? The fact is that devices are a box-shifting market. Device vendors are used to the extremely tight margins associated with such products, largely caused by the fact that they are aimed at the mass market where competition is vicious. Anything that can be done to shave a few cents off the price of a device which costs £100 or less is seen as a good thing, so what could be better than a platform which is entirely free of licensing costs?
Also, Linux is already proving that it is capable in this area. It can offer pseudo-real-time capabilities in a small footprint. There may be fears about its memory management capabilities, but if these are true then they are likely to be addressed. Such is the nature of open source.
We are already seeing products which are running Linux under the bonnet (hood, to you transatlantic cousins). In our Linux technology spotlight (http://www.it-director.com/ts/linux/index.html) we mentioned the TiVo digital video recorders, which are to be licensed by some of the major players in the satellite TV space. More recently, at CeBit Samsung announced a Palm-a-like PDA which runs Linux.
Fears about the future of Linux have centred around its fragmentation – its open nature means that developers are able to develop it in a variety of directions, which may or may not be compatible. This is a real fear, particularly as the variety of platforms for Linux continues to diversify. However announcements such as this one go some way towards dispelling the fears as the evolution of the operating system can be dealt with by the organisations as a group, acting in co-opetition.
Gartner Group recently announced that Windows CE would overtake PalmOS as the PDA operating system of choice by 2002. In the light of this announcement, they may wish to change their minds.
(First published 25 January 2000)
01-25 – Microsoft delays aspirations to be gatekeeper of the virtual enterprise
Microsoft delays aspirations to be gatekeeper of the virtual enterprise
The IT industry is rife with stories of companies trying to emulate previous world-shaking innovations, with varying levels of success. Lotus tried to follow 1-2-3 with Notes; mainframe IBM brought the personal computer to the masses. The continued need to innovate is driven by a desire to stay in the game and to appease shareholders, whose skill at abandoning faltering companies would put ship-rats to shame. Microsoft has made repeated forays outside its traditional PC market space, with mixed results. In media streaming, it is beginning to look like the company may dominate, through a recently brokered arrangement with Liquid Audio. Elsewhere the future is not so certain - examples like MSN and CE serve to illustrate how Microsoft may possess the goose that lays the golden eggs, but not the underlying technology.
A test case is coming round with the now-delayed launch of BizTalk Server, Microsoft’s long-promised engine for XML-based eBusiness communications. The product is over six months late in reaching beta testers, with the probable launch date being autumn 2000. The reason, say observers, is that the competition is already ahead of the game. According to a recent report on Cnet news, the missing link is the connection with business processes. This is the ability to link the communications required between organisations with the higher-level business logic, for example ordering a product or handling a customer request. Products from HP and Vitria, for example, already support such a linkage. It could be argued that Microsoft’s competition is looking through the right end of the telescope by building business logic before trying to automate it. The bottom-up approach used by BizTalk Server is not a good starting point from which to handle the myriad complexities of modern business. Microsoft insiders claim that the BizTalk architecture is changing to take this into account – the question now is whether the competition can capitalise on the ensuing delays. One point in Microsoft’s favour is its ownership of Visio, the de facto tool for business process modelling among many business analysts – it remains to be seen how the company will exploit this advantage, for example by providing a Visio interface to BizTalk.
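To make the “missing link” a little more concrete, here is a minimal, hypothetical sketch in Python. The document and element names are invented and this is emphatically not BizTalk’s actual schema or API; it simply shows an XML business document being routed to the piece of business logic that should act on it:

import xml.etree.ElementTree as ET

# Invented example document; real trading-partner schemas would differ.
ORDER_XML = """<PurchaseOrder>
  <Customer>Acme Widgets Ltd</Customer>
  <Item sku="W-1001" quantity="250"/>
</PurchaseOrder>"""

def handle_purchase_order(doc):
    # Business logic step: in a real system this might check stock,
    # raise an invoice or kick off a fulfilment workflow.
    customer = doc.findtext("Customer")
    item = doc.find("Item")
    print("Ordering %s of %s for %s" % (item.get("quantity"), item.get("sku"), customer))

# Routing table tying document types (the communication) to processes (the logic).
HANDLERS = {"PurchaseOrder": handle_purchase_order}

def dispatch(xml_text):
    doc = ET.fromstring(xml_text)
    handler = HANDLERS.get(doc.tag)
    if handler is None:
        raise ValueError("No business process registered for " + doc.tag)
    handler(doc)

dispatch(ORDER_XML)

The interesting part is the routing table: as the vendors ahead of Microsoft have found, the hard problem is not moving the XML around but agreeing which business process each document should trigger.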
For Microsoft, its aims for BizTalk are far more than just sales figures for a business communications tool. This represents the battle for the gates [sic] of CyberSpace – in the virtual enterprise, business success will be built upon communications and it is likely that a single company will end up the de-facto standard. The playing field is currently empty, but soon corporate IT shopping lists will include the item “XML-based business communications facility”. Microsoft would very much like to replace this phrase with “BizTalk Server” in much the same way that Hoover and Coke have done in the past.
Microsoft has ridden the PC revolution with enormous skill and dexterity, becoming one of the world’s most powerful companies in the process. Even now, it seems unlikely that the dream will fade - the PC is moving into the machine room and Windows 2000 looks set to buoy up the share price for some time yet. However, outside the kingdom where the PC holds sway Microsoft’s less successful ventures should serve to keep the company firmly on the ground. Ballmer and “innovator” Gates are not gods, but mortal men who are no better at predicting the future than anyone else. There are no guarantees, particularly for a company which may see itself divided into three or more parts before the end of the year.
(First published 25 January 2000)
February 2000
02-03 – Free-PC is dead! Long live the free PC
Free-PC is dead! Long live the free PC
Not every good idea stands a chance of working in these topsy-turvy technology days, as US startup Free-PC discovered. The company was bought last November by eMachines, which put the final nails in the Free-PC coffin by relinquishing all rights and responsibilities to the computers that had been handed out as part of the “free” deal.
Why the Free-PC model failed is a matter of economics and timing. Economics, because the company found it could not generate the advertising revenue it required to cover the costs of giving out hardware. Timing because, well… have you seen what is just around the corner?
Samsung have just announced a disposable PC, which is likely to be based around Intel’s “system-on-a-chip” codenamed Timna. The device will be completely sealed and will retail at around $200. That’s pretty cheap for a PC.
Historians may well determine that Apple was to blame. When the iMac was released onto an unsuspecting market, the punters rushed to purchase a computer which was self-contained and, above all, simple to use. The trend was followed quickly by PC suppliers such as Dell, Compaq and AST, all releasing non-user-serviceable computers in rapid succession. Once the trend was started, products such as the one announced by Samsung became inevitable.
In the here and now, it is worth taking the wider view into account. Samsung’s move signals the removal of the final barrier from public perceptions about the nature of the PC. It is a device, just like all the other devices which are coming to litter our lives – the mobile phones, PDAs, set-top boxes, games machines and the like. As such it will take on the same characteristics, and this change of status signals the death of the PC as we know it. Out with the expensive computer which requires frequent, complex administration; in with the affordable, low-maintenance device. The PC thus looks set to be treated as any other device: sold at low cost and bundled with other deals such as subscription services and software packages. In the UK the hardware is often given away with such bundles: for example, the latest mobile phones may command a premium of £100 or so, but last year’s models are given away as a sweetener.
Free-PC may have had the right idea, but as so often with technology, they came to the market too early. Come Christmas next year, when Samsung’s disposable PCs hit the shelves, the time will be right for companies to take advantage of the free PC model. By then, the chances are we will have stopped thinking about the PC as “the mainframe in the dining room” and it will take its rightful place alongside the rest of the gadgetry.
(First published 3 February 2000)
02-09 – EU W2K case puts cart before horse
EU W2K case puts cart before horse
There is always something to be said for planning ahead, but if the early reports from ZDNet UK are anything to go by, it could be that the European Union is taking this a little too far. According to the reports, EU competition chief Mario Monti said that Windows 2000, the latest release of Microsoft’s flagship operating system, may be breaking EU antitrust law. A probe into the issue is planned, following “allegations that Microsoft could extend its dominance to server operating systems and electronic commerce”. Interesting allegation, but for two points. The first is that W2K is not a new product. Secondly, it is extremely difficult to work out how such a hypothesis could be tested in advance.
Windows 2000 is due to be launched in just over a week, undoubtedly with as much aplomb as the marketing muscle of the mighty Microsoft can muster. The OS, touted as “the business operating system for the next generation of computing,” is being presented as something new. However this perception is out of step with the facts. W2K is an upgrade of Windows NT. It may be much-enhanced for scalability and availability, transaction management and interoperability, but it remains an upgrade. Windows 2000 it may be to you and me, but to the developers and internal staff at Redmond the product is referred to as NT5.0. It is not, therefore, true that Microsoft are moving into areas which the company previously had nothing to do with. Indeed, had the operating system been able to support “enterprise-scale” applications five years ago it might well have won the Unix wars and the rest would be history.
The antitrust case that is underway in the US does not question the fact that Windows grew to be the dominant operating system. What it deals with is how Microsoft used their dominant position to stifle competition in other areas, such as Web browsers, by including them with the operating system itself. It is here that the EU may have a point: by including Microsoft Transaction Server as part of the operating system, it could be said that the company is exhibiting anti-competitive behaviour relative to other application server vendors (such as BEA, IBM and SUN). However, there are many application server vendors that support a number of platforms other than NT, and have already created a sizeable market. This is not the gorilla company stifling the activities of a bright young start-up; rather it is one company presenting its offering to an already-crowded market, of which both the evolution and the outcome are far from certain.
It is always worth keeping a watching brief on major companies to protect the markets against anti-competitive behaviour. However, in the IT sphere, in which fashion plays as big a part as technology, it is impossible to determine in advance who the winners and losers will be. Windows 2000 is unlikely to sweep the world in the same way as its desktop sibling, for many reasons of which the most important is that Microsoft is no longer perceived as the only platform in the eyes of the board. Mainframes have survived the onslaught of client-server, Unix (with a little help from a softly spoken Finn) is as popular as ever and new types of device are coming onstream every day. The context which helped Microsoft to world dominance no longer exists, and as illustrated by Palm, Linux and AOL, the company does not have a guarantee of success in every market it targets. The EU’s crystal ball is as opaque as everyone else’s: in IT, no company has a monopoly on predicting the future.
(First published 9 February 2000)
02-09 – Java overtakes C++, takes number one development slot
Java overtakes C++, takes number one development slot
It’s official – according to a survey by Bloor Research of over 40,000 job adverts in January, Java has taken the top slot in development language skills, with 36.8% of all programming job adverts in our sample now specifying Java. This figure is just ahead of demand for C++ skills. Meanwhile demand for VB and C is holding up, with perhaps a tendency to decline, both hovering at around the 25% mark. These latter skills are still in widespread use but the slight decline suggests that we will start to see a drop in their use in 2001.
To use a well-worn phrase, of course there are lies, damn lies and statistics. A previous survey (reported on IT-Director) showed that the requirement for Java-based projects was holding static relative to Microsoft technologies. It has to be said, however, that a survey of the job market does present a pretty clear picture of exactly what is wanted right now, as opposed to projections and anticipated requirements. According to the ads, Java is what the world is using today.
It looks like Java’s position is becoming unassailable. Object orientation and component-based development are flavour of the month, both in the Microsoft camp (with COM) and the Java community (with JavaBeans). Non-OO languages such as C and Visual Basic are losing out to the brave new world of objects. C++ is still very popular, but it seems inevitable that it will go the same way as the other languages and Java will steal the field. Why? Because Java has already become the most popular teaching language at universities. Because it is available on the largest number of platforms. Because it is of strategic importance to some of the largest IT companies, including Sun, IBM and Oracle. And because, once it moves ahead of other languages, it will build on its own success.
Fifteen years ago, few would have predicted that developers would still program in third generation languages such as the ones discussed here. The brave new world was one of self-generating code and dynamically customisable applications. Such visions remain pushed ever-further into the future; in the meantime, it looks like Java is set to rule the roost.
(First published 9 February 2000)
02-09 – Look at me, Mum! I killed eBay!
Look at me, Mum! I killed eBay!
Denial of service attacks have always had an uneasy relationship with other types of security requirement. The main reason for this is that they do not directly impact on that all-important corporate data: nothing is lost, corrupted or revealed to prying eyes. Hence, in security policy definition and implementation, such attacks have often been given a lower priority than they deserve.
This point has been starkly illustrated over the past few days, as a number of major commercial Web sites have succumbed to denial of service attacks. On the 7th February, Yahoo was the first to be bitten. The next day Amazon, Buy.com, eBay and CNN were all brought to their knees for anything between one and three hours. While it is not clear whether the attacks were all caused by the same group (as nobody has yet claimed responsibility), it is clear that copycat attacks are inevitable over the coming days and weeks.
Why would somebody want to bring a Web site to its knees? There are as many reasons as there are Web sites. Everything from cyberterrorism to (ironically) disgruntlement with the service, from anticompetitive behaviour to sheer high jinks, can bring a person or group to assault a site. The simplicity of such attacks has now been revealed: “innocent” computers are made to host a Trojan Horse program which, at a predetermined time or on command, sends a stream of requests to the targeted site. As a result, the attacks look set to move into the mainstream of security problems. Given the arrival of broadband communications technologies which enable home computers to keep “always-on” connections to the Web, the pool of relatively insecure devices which can be used as proxies looks set to increase. Denial of service attacks are harder to prevent than they are to cause: the best measures tend to involve tools which can spot this kind of behaviour and either alert the appropriate person or instigate a suitable response.
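As a minimal sketch of the detection side – with thresholds that are pure assumptions rather than recommendations – the following Python fragment flags any source address that sends an abnormal number of requests within a short window. Real products are considerably more sophisticated:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # assumption: look at the last ten seconds only
MAX_REQUESTS = 100     # assumption: more than this per window is suspicious

recent = defaultdict(deque)   # source address -> timestamps of recent requests

def record_request(source_addr, now=None):
    # Record a request and report whether the source now looks like a flood.
    now = time.time() if now is None else now
    timestamps = recent[source_addr]
    timestamps.append(now)
    # Discard timestamps that have fallen outside the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_REQUESTS

# Example: a single address firing 500 requests in quick succession trips the alarm.
for _ in range(500):
    flooding = record_request("198.51.100.7")
print("alert" if flooding else "ok")

The obvious weakness, and the reason such attacks are so hard to stop, is that a distributed attack spreads its requests across thousands of compromised source addresses, each of which may stay below any per-source threshold.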
Above all, denial of service attacks serve to illustrate the fragility of the electronic infrastructures that we are building, if they are not properly constructed to take into account all possible security and privacy measures. The attacks are not only damaging to the bottom line of the businesses they hit, but are bad press for eCommerce as a whole. So far it looks like investors are prepared to ride the storm and stick with the dot-coms, but given the looming issues of the long term financial future of such companies, the tar-brush of denial of service is one which organisations such as Amazon could do without.
(First published 9 February 2000)
02-14 – IBM strengthens ties with Rational but keeps the agnostic faith
IBM strengthens ties with Rational but keeps the agnostic faith
There’s something about IBM at the moment. According to repeated announcements, the company is embracing new technologies, forging alliances and striding headlong into the future with all the gay abandon it can muster. At the same time, reading between the lines, IBM is doing exactly the opposite – holding on to the old, keeping the competition sweet and, above all, ensuring that it does not close off any options it may have. In these turbulent times such a risk-averse position may be wise, despite falling short of the visionary attitude that we could hope for from the largest computer company in the world. A recent example is IBM’s stance on operating systems: Linux is the future, but so is NT, AIX, OS/390 and any other you care to mention. More recently the focus has been on development tools or, more specifically, Rational’s suite of application development products.
Rational first got into bed with IBM in July last year, forging a strategic alliance that would “help customers accelerate the development and deployment of e-business applications,” according to the press release. In practice this meant the provision of an XMI (XML Metadata Interchange) bridge between Rational Rose and IBM’s VisualAge, with both tools continuing to be developed and promoted. It has taken the two companies a full six months to deepen the interoperability between their toolsets - last week’s announcements, “strengthening the alliance,” amounted to the further integration of VisualAge with Rational’s ClearCase and ClearQuest configuration and change management tools, via another open standard – WebDAV.
IBM’s slow agnosticism can only be frustrating for Rational, despite the clear potential revenue stream that the alliance can generate for the company. The bridges between the products are all open standards that are being adopted by the majority of tools providers, including Microsoft, Sterling and Princeton Softech. Hence Rational may be being given a head start on the other vendors, but ultimately the aim is that all platforms will interoperate and exchange information. Rational’s frustration, founded in its desire to leverage the relationship before the other vendors catch up, was hinted at by Eric Schurr, senior vice president of Marketing and Suite Products at Rational, when he said that “this [recent announcement] is evidence that the IBM-Rational relationship is more than a paper relationship.” This raises the question – was it just a paper relationship in the past?
In the relationship between IBM and Rational, it is the latter that stands to gain the most. Unlike IBM, Rational cannot afford the luxury of covering all its bases. Its product set is limited to the relatively closed world of application development, whereas IBM can offer it safe passage to the Shangri La of eBusiness. Tools providers have always faced the squeeze – this is a situation which will not change (if anything, it will become more difficult, as Microsoft exploits the most valuable resource that is Visio – you heard it here first). Frustrating it may be, but without such relationships it is difficult to see how Rational will succeed.
One final note about WebDAV, or “Web-based Distributed Authoring and Versioning”. This standard has been ratified by the IETF and has the support of Software Configuration Management vendors and Document Management vendors alike. I have long been of the opinion that the two disciplines represent two routes up the same mountain. It is very good news that an interchange format has been agreed between the two camps: hopefully we shall see some merging of their related disciplines before too long.
(First published 14 February 2000)
02-14 – Lastminute.com provides an IPO acid test
Lastminute.com provides an IPO acid test
Almost a year after the company initially revealed its intentions to float on the stock market, Lastminute.com announced at the end of last week that it would be floated on the Stock Exchange and the NASDAQ some time next month.
The company is valued at around £400 million. The question is, is it worth it? As with all Internet IPOs, nobody really knows the answer. However, the mutterings that IPOs are overvalued have grown into a chatter. A couple of weeks ago, the Financial Times reported that speculators were unsure about whether the company would achieve this valuation when the time came. Unfortunately for Lastminute.com, the only real test of IPO success is to just do it and see. The company has one chance: it may succeed, and then again it may succeed less well.
There are three kinds of Internet IPO.
• The first is for companies that provide the fabric of eCommerce – such as CyberTrust and CommerceOne.
• Second, we have Web businesses – such as Amazon, BlackStar, TicketMaster and Lastminute, who use the Internet as a direct revenue stream.
• Third we have the accumulators of potential revenue – Portal sites such as Freeserve and Yahoo!, who grow their eyeball business prior to exploiting their subscription base for profit.
Of the three, the fabric providers are most likely to succeed as they are the most tangible, making a profit whether their customers – the other providers – succeed or fail. Least promising is the third category, who may be unclear about how to turn potential into revenue (hence the difficulties suffered by Freeserve last year). Lastminute.com sits in the middle category: the chances are that the company will turn a profit; the questions are how much, when, and who the competition are.
Unfortunately for Lastminute.com, competition is the one thing of which there is no shortage. Travel is a land of huge opportunity for me-too companies, which differentiate themselves on the scale and the level of the service offered. A recent contender, for example, is LateRooms.com, which claims to have 1500 hotels signed up to its site. Travel sites in general are subject to the vagaries of the online consumer – the competition really is only a click away. Customers will vote with their feet – my own experience a few months ago was that, whilst the offers on Lastminute.com were good, the breadth of choice was not. This may have changed, but it remains an indication of the rocky road to be travelled by an online bookings company attempting an IPO.
(First published 14 February 2000)
02-14 – Microsoft gets back to its UNIX roots with Interix
Microsoft gets back to its UNIX roots with Interix
Given the current polarisation of the UNIX/Linux and the Microsoft/Windows camps, you could be forgiven for disbelieving Microsoft’s UNIX past. Today, IT history has chosen to quietly sweep under the carpet the fact that Microsoft played an instrumental part in bringing UNIX to the PC, a heritage which subsequently bequeathed Minix, then Linux. Quietly forgotten, but true. Today, with Microsoft’s announcement that it is to integrate Interix with Windows 2000, it looks like the software giant has decided to take a leaf out of its own history book.
It was back in 1979 that Microsoft and The Santa Cruz Operation (SCO) collaborated to develop Xenix2, which was the first UNIX implementation for the Intel 8086 chip. Development was later handed over to SCO so that Microsoft could concentrate on its single-user operating systems in the form of DOS and Windows 2. Xenix2 gave rise to SCO UNIX, which became OpenServer, merged after a fashion with UnixWare and is now being integrated with AIX to spawn Monterey.
More recently, in September 1999 Microsoft acquired Softway Inc., developers of Interix. This product started life as OpenNT, a replacement for the inadequate POSIX subsystem that was bundled with Windows NT. From the outset the intention was to give companies the ability to port applications more easily from UNIX to NT, by providing a UNIX layer that runs on top of the NT kernel. The trouble is, Softway did such a good job of developing Interix that the OS layer has passed the compatibility tests of The Open Group, keepers of the keys to UNIX.
As it is, Microsoft are keen to distance themselves from their heritage. On its “Linux Myths” web page, for example, the company states disparagingly that “Linux fundamentally relies on 30-year-old operating system technology and architecture.” The line is (to quote Garfield) that Windows 2000 is new and improved, UNIX is old and inferior. So – why, oh why, in the shape of Interix, is Microsoft investing so heavily in UNIX?
The answer, according to Microsoft, is that by providing mechanisms which bridge the gap between the UNIX and Windows NT platforms, companies will migrate from the former to the latter. This is a tactic which has worked in the past, for example with the provision of facilities in Microsoft Office to read and write competitors’ files. Companies such as Lotus and WordPerfect provided read-only mechanisms and found to their cost that users were turning to Microsoft as a result. Interix, too, is a two-way bridge, and Microsoft’s tactic, though proven, is not without risk. For a start, by supporting a UNIX layer the company is, in some way, validating the existence of UNIX. Thirty-year-old technology or no, says Microsoft, it runs perfectly well on top of our kernel. So it should, for that matter, and so will it run perfectly well in other guises such as Linux, Solaris, HP-UX or AIX. Secondly, Windows 2000 is already under pressure from a number of fronts. Linux is one example. Also, commentators are recommending that the new OS go through a purgatory period prior to its more widespread adoption. Finally, there remains the looming shadow of the antitrust case, which is having knock-on effects in Europe and elsewhere.
This is not the first time that Microsoft has played the UNIX-for-Windows card. Last year, for example, the company teamed with Mortice Kern Systems to provide a set of Unix tools for NT. Like it or no, and despite Microsoft’s best efforts to shake it off, the reality is that UNIX will remain part of the landscape for the foreseeable future. A product like Interix may prove to be an asset, not to lead the company to world domination but to keep it in the game.
(First published 14 February 2000)
02-16 – Transport and environmental groups stress real impacts of eCommerce
Transport and environmental groups stress real impacts of eCommerce
A very unusual event was reported on BBC radio this morning: a transport organisation representing goods delivery services in the UK was in agreement with an environmental organisation, namely Friends of the Earth. Traditionally, these organisations have fought tooth and nail, pitching the economic argument for goods transportation against its environmental impact. So, what caused this seemingly unprecedented event? The answer is eCommerce, or rather its effects.
We techno-ostriches tend to keep our heads firmly stuck in the electronic sand, forgetting that with a large proportion of eCommerce transactions there is an associated fulfilment process. Even something as neat as a CD or book still needs to be transported to the sticky hands of its purchaser. As we turn to the Web more and more for our purchases, the amount of goods to be transported is inevitably going to increase. For example, companies like Tesco in the UK and Peapod in the US are starting to see success in online supermarket shopping. The models are still to be thrashed out, but the trend looks inevitable.
For the consumer, the electronic shopping Nirvana may be very attractive. Just click on a few buttons and, within a couple of hours, bags full of heavy groceries arrive at the door, accompanied no doubt by a smile and a wave from the delivery person. The realities being stressed this morning were delivery trucks clogging up the back lanes, duplicating the journeys made by competing companies in the rush to win each other’s business. In addition, the point was made: what would the now-liberated customers do with all that extra time? Probably spend it in the car, off to some leisure activity or other. The hyper-efficiencies of the virtual hypermarkets might well result in a hyper-gridlock.
Now environmental groups have a reputation for doom-mongering, but transport groups have not traditionally held this role. The fact that both groups agree is sobering indeed. As Robin Bloor said in eRoad, “eCommerce is coming, ready or not.” The tidal waves of change are already having many impacts, not all of them pleasant. However, it should be noted that both organisations are a little guilty of mapping tomorrow’s plans onto today’s realities. IT will not only change the way we buy, but also the way we distribute goods and spend our leisure time. Supermarkets may well become warehouses, but might the corner stores become depots? Has anyone considered how letterboxes may need to change, or the possibility of overnight deliveries such that groceries are on the doormat, fresh for the day ahead? What is probably most important here is that all sides enter the debate – transport and environment, IT, business and consumer – so that we can make the best of what is coming whilst protecting ourselves from the pitfalls it may present.
(First published 16 February 2000)
02-18 – Microsoft Windows – the Gates left open
Microsoft Windows – the Gates left open
Microsoft has been quick to backtrack from claims by a certain W. Gates that the company is prepared to open up the Windows source, if it would prevent the break-up of the company. Is this the kind of innovative idea that Uncle Bill wanted to focus on, when he resigned as head of the software giant?
Oh, the underhand nature of journalists. When is off the record off the record? Clearly not just after the videotapes have stopped rolling, as illustrated in last week’s televised interview between Bill Gates and Bloomberg. According to the transcript on Bloomberg’s web site, “After the on-camera portion of the interview was completed, Gates was asked whether the company would be willing to open the Windows source code in order to settle the case, and Gates said ‘yes.’ He then added, smiling, ‘if that’s all it took.’” Unfair tactic or no, “yes” is a pretty fair indication of what the great innovator was thinking.
We have discussed the potential perils of Microsoft opening up the Windows source, if nothing else for the scrutiny it would cause. Taking a pop at M$ is a common pastime in the IT industry, so just imagine the howls of delight from hackers and hopeful hecklers, as they find potential security flaws, weaknesses or just plain bad code. Like Greta Garbo without makeup, there are maybe some things that just shouldn’t be made public.
The bigger question is whether there is really room for two open source operating systems. I’m not including Solaris in this debate, because whether it is truly open is questionable. The issue lies between Windows and Linux: people want Linux because it is free, stable and perfectly adequate for a large number of uses; they want Windows because it runs all the right applications and because it is what everyone else is using. If Windows is opened, there is nothing to stop (indeed, just TRY to stop them) the open source community from linking the two OSes. WINE, the Windows emulator, would be dropped like a stone – after all, why emulate Windows calls when you can have the real thing?
Open source Windows is a logical development as it equates to the rising tide of commoditisation in software. Mobile phone users would not expect to pay extra for the software that runs on the phone; rather, there are a certain set of expected facilities that are delivered with any device, and the OS is one of them. Traditionally, we have paid for PC operating systems and (grudgingly) upgrades, but as products like WebPads from Samsung and Diamond illustrate, the line between PCs and other devices is blurring, and pricing models must follow suit. It is a fact that Linux is being chosen as an embedded operating system for a wide variety of devices, from video recorders to the aforementioned WebPads, because it does not incur software licensing costs. Microsoft knows that it can only really establish itself in the device-driven market if it cuts its licensing fee structure to the bone, or if it drops it altogether.
Microsoft is in a quandary as it grows out from its desktop PC home ground. As it moves upward into the server space, where premiums are currently higher, it risks incurring the wrath of its user base due to the higher licensing costs it is demanding. Downward, the company’s success in the device space is dependent on it making the product much cheaper, or even free. This “Open-Windows” thing may solve these problems, by enabling the company to concentrate more on applications and services, but may also cause some problems of its own. Either way, it is an issue that is unlikely to go away.
(First published 18 February 2000)
02-18 – PointCast – Did it fall, or was it…
PointCast – Did it fall, or was it…
This is the way a once-darling of the wired world ends – not with a bang, but a whimper. PointCast is finally giving up the ghost, calling it quits, coughing its last. Nobody notices, nobody cares. The most tragic thing is, the technology it espoused is alive and well. Where, oh where, did things go wrong for PointCast?
Founded in 1992, PointCast was credited with beginning a revolution in Internet usage and was quickly copied by the likes of Microsoft with its Active Desktop channels. However, the dream quickly tarnished as the bandwidth requirements for pushing Internet content to the desktop overwhelmed a number of companies’ network capacity, preventing email and “normal” Web browsing activities from taking place. Back in the mid-nineties many organisations were discouraging Internet access altogether, as it was seen as distracting individuals from their work. Network managers banned access to PointCast, the share price plummeted and the company was eventually sold for a song. Consider this: at the height of its popularity the founder of PointCast, Chris Hassett, is reputed to have turned down an offer of $450 million from James Murdoch of News Corporation. In December 1999 the beleaguered company, reputedly losing $2 million a month, was bought by Idealab, who wanted to use the technology to deliver advertising to customers of its eWallet software. Today the company, now merged with Launchpad to form EntryPoint, is on the brink of launching a new product, but there is no guarantee of success now that the hype has gone.
So, where did things go wrong? At the highest level it is possible to say that the technology did not deliver on the hype. The IT industry is one of the best at delivering solutions without feeling that it needs to worry too much about the problems it has to solve. Push technology, like other technological also-rans (such as Document Image Processing and Satellite phones), was a solution without a cause, sold as a new way of working without due thought being given to how it could fit in with established practices. In itself, Push is an enabler, which comes at a cost of bandwidth and administrative overhead. Where the cost-benefit balance has been struck, Push is now establishing itself as a perfectly acceptable solution to a whole variety of problems, such as:
• Automatic distribution of software updates to end-user desktops, as illustrated by Marimba
• Online video-on-demand, for example online training which can be delivered on demand or following a predefined schedule specified by the customers - Multimedia Solutions, Inc. are having some success here
• Download of information to PDAs in idle time, which can then be accessed offline, for example through the service offered by AvantGo.com.
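For readers who never met it, the mechanism behind all of these examples is simple enough to sketch. The following fragment is a minimal, hypothetical illustration in Python (the class and channel names are invented, not any vendor’s API): subscribers register their interest once, and the publisher then initiates delivery of each update, rather than every client polling for changes.

class PushChannel:
    def __init__(self, name):
        self.name = name
        self.subscribers = []      # callables that accept an update

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, update):
        # "Push": the server initiates delivery to every subscriber.
        for deliver in self.subscribers:
            deliver(update)

# Example: a desktop ticker and a PDA sync agent both receive the same update.
news = PushChannel("headlines")
news.subscribe(lambda item: print("desktop ticker: " + item))
news.subscribe(lambda item: print("queued for offline PDA sync: " + item))
news.publish("Markets close higher")

The cost sits entirely on the sender’s side of the equation: every update is delivered whether or not anyone wanted it at that moment, which is where the bandwidth and administrative overhead come from.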
PointCast had the technology right, but aimed its sights at the wrong market. Corporate users, ultimately, had better things to do all day than configure and trawl through the wealth of information they were receiving. They could also do without watching their machines slow to a crawl once every ten minutes, as updates were received. Sure, all these things were configurable, but even that took time. If anything, this is an example of the fickleness of the Web community – in the end PointCast failed because its users couldn’t be bothered with it.
Whilst all this is a shame for the now-defunct company (and particularly for its founder), it is not the end of the technology. Push will continue to find niches where it can render itself indispensable and hence lucrative, for example in corporate information portals (keep an eye on companies like SageMaker). Once Internet bandwidth constraints are removed over the next year or so, Push technology will gain ground in providing real-time video news and sports feeds, consumer video rentals, audio channels and so on. It is no coincidence that this list overlaps with services currently provided by broadcast media. This *is* broadcast, with a difference - it is user selectable in real time. After all, what is Push and Pull, if not supply and demand?
(First published 18 February 2000)
02-18 – Web travel companies on the back foot
Web travel companies on the back foot
The US travel industry looks set for the heart of the sun, in a looming battle that will generate as much heat as light. Three opposing forces are joining the fray – in one corner, we have the Web travel companies; in the second, the traditional travel agents that the Websters are so successfully undercutting. Last but by no means least are the airline companies themselves, which have risen to the challenge of the Internet and have set out to be a case study in disintermediation. When these three warring parties come together, what a battle there will be!
According to the bricks-and-mortar travel agents, the group threatened most by the Web, the last straw came when a number of airlines, not content with running ticket sales from their own Web sites, agreed to join forces and set up a travel portal. Cries of “permanent and irrevocable damage [being] done to the competitive process” were enough to see ASTA, the American Society of Travel Agents, rushing off to the Justice Department to file an antitrust complaint. Ironically enough, it is not the independent travel agents that the airlines have in their sights, but the established Web travel portals such as the newly floated Expedia and its main competitor, Travelocity. The question is, can the airlines compete with online companies without being accused of anti-competitive behaviour by independent travel agents? The answer is no, which leads to the inevitable collision course ahead.
The airlines may be facing a barrage of criticism from the independents. Maybe it is true that a consortium of airline companies will form, resulting in price fixing and “less value to the consumer,” as the mantra goes. However, airline agreements are nothing new – OneWorld is one example, and there are undoubtedly plenty of others. Let us remember one thing above all: the airlines did not cause this. They are faced with the same set of opportunities and threats as any other business, and they are reacting accordingly. Consider this: five years ago, nobody spent money on travel over the Web. Last year about 1% of travel – $2.2 billion – was purchased online. This is expected to grow to $17 billion by 2003. The Web is anti-competitive, it breaks all the rules and, worst of all in the US, there is nobody to sue about it.
Slowly but surely, individuals and organisations are realising that buying travel online is cheaper than buying it through travel agents. Traditions are difficult things to drop: for example, when an analyst colleague was flying over to a recent briefing in Colorado, he was discussing ticketing systems with an analyst from another firm. When our man compared his agent-provided ticket price with the online cost that the other analyst had obtained, he realised he could have saved £200. Even analysts can be slow to react to the changes, but what this suggests is that travel agents are living on borrowed time. Speaking both as observers of the IT industry and as consumers, we watch with interest the developments in the travel industry. The Web is creating winners and losers and, so far, nobody has been able to come up with a hard and fast set of rules as to what is going on. Not changing is not an option, which is a fact that traditional travel agents would do well to remember.
(First published 18 February 2000)
02-23 – Alcatel joins the new generation networking crowd
Alcatel joins the new generation networking crowd
Was anybody wondering about a slow-down in the consolidation of networking equipment vendors? If so, such thoughts may be premature, as demonstrated by Alcatel’s $7.1 billion acquisition of Newbridge Networks yesterday. The move adds a much-needed component to the Alcatel portfolio, namely a presence in the Internetworking market. Not that Alcatel lacked products in this space; rather, it has lacked – how should I say – IP mindshare.
All this looks set to change with the acquisition, which will enable Alcatel to go up against the giants of “new generation networks”, namely Lucent, Nortel Networks and Cisco. Alcatel is a company that has grown dramatically through acquisition. When it was first privatised, it concentrated on growing its multinational presence through the purchase of, and partnership with, European and US vendors such as SEL in Germany, Telettra in Italy and Sprint in the US. More recently it has set its sights more closely on mobile and data networking companies, such as Xylan and, of course, Newbridge. Newbridge has been hoping to be bought for some time. Despite successes a few years ago, its more recent past has not been so healthy. Speculation started in November last year, with interested parties said to include Ericsson and Nortel. Apparently these companies walked away from the table over the past few weeks, leaving only Alcatel, who have accepted the mantle with confidence. Unfortunately the shareholders are not so sure – Alcatel’s share price has dropped four percent to 227.9 euros following the announcement. This is unlikely to perturb the company, which is no stranger to market volatility – following a profit warning in October 1998, the share price dropped 40% to a low of 68 euros.
Alcatel should not be underestimated as a communications equipment provider. Its greatest strength lies in the breadth of the products and services it offers: for example, in 1999 it carried 35% of the worldwide ADSL market – a share that increases to 52% in North America. Alcatel has major stakes in wireless and satellite communications, not to mention terrestrial and submarine cabling. All of this puts the company in an ideal position to appreciate the nature of convergence – data availability any time, anywhere and across any medium. The company has been maligned as an old-fashioned, “traditional European” company which is slow to change and slower to innovate. If this was ever true then it most certainly is not now – over the past few years the company has refocused, restructured and risen to the challenges that are faced across the networking industry. Europe has reinvented itself over the past three years as a hub of communications innovation and development, with companies like Nokia and Vodafone leading the way. Other technology companies based in Europe, such as Alcatel, are now able to bask in the glow that has been created.
Alcatel is well placed to provide end-to-end solutions in ways that few other companies can approach. The company provides mobile phones, ADSL cards and set-top box chipsets at the client side, plus access systems for all of these devices and more, plus the underlying global connectivity. Perhaps Lucent comes closest to this soup-to-nuts vision, but even if that is the case, Alcatel remains one to watch.
(First published 23 February 2000)
02-28 – Do you Yahoo!? Go on Murdoch, you know you want to
Do you Yahoo!? Go on Murdoch, you know you want to
It is now six weeks since AOL and Time Warner announced their intentions to wed, throwing the industry into turmoil. But which industry? This is the question that is making things so hard for other companies to follow suit. AOL is an ISP, a Portal, a software company and a service provider. Time Warner is a news organisation, a film studio and a publisher. Not to mention the wide variety of other interests that the two companies now share. Orrin Hatch, the US Judiciary Committee chairman, was quoted on News.com as saying that the deal could have “profound public policy implications”. Too right – it creates a company the like of which the world has never seen.
Driven by fear and loathing as much as by the smell of opportunity, many others are keen to follow the lead that AOL Time Warner have set. One such example is Rupert Murdoch’s News Corporation, which has revealed that it is holding preliminary talks with Yahoo!, the portal company. News Corp has – or had – many similarities with Time Warner, owning film studios, television channels and publishing companies. But what of Yahoo!? Is the company right for News Corp? We would argue no, not in its current form, but it shows potential.
The problem with Yahoo! is easy to spot when it is positioned next to AOL. The two companies were never seen as rivals because, simply, they were not: Yahoo! was a portal, and AOL was a global ISP. Despite having teamed with AT&T and Mannesmann to provide Yahoo!Online, the company is not renowned as an ISP – indeed, as these partnerships demonstrate, it is not. AOL, however, most certainly is an ISP with global reach. Yahoo!’s lack of its own ISP presence is a weakness for the company, as it means that it has less control of its potential customer base. An individual may have several ISPs, but is likely to retain one as its primary access to the Internet. However, to most, Yahoo! is one online resource amongst many.
The AOL Time Warner merger demonstrates the coming together of three components, all of which are essential. The first is the communications channel, akin to the Web portal or the TV station. The second is the information to be transmitted or published. The third is the underlying technology – the communications network. It is this network which both Yahoo! and News Corp lack – the latter has an enviable satellite service in the shape of Sky, but this is not set to be the dominant communications channel.
An alliance between Yahoo! and News Corp can be successful, but only if it is followed by investment in a communications infrastructure company. This could either be a wireless provider or a land-based provider but is most likely to be a combination of both. It is unclear at this stage whether the combined News Corp/Yahoo! would be the suitor or the target of such affections. What is clear is that only the resulting corporation would be sufficiently powerful to take on the giant that is AOL Time Warner. Suggested tie-ups between News Corporation and Yahoo! can only be steps on the way.
(First published 28 February 2000)
02-28 – Ford, GM, Daimler put business at the helm of technology
Ford, GM, Daimler put business at the helm of technology
A week is a long time in this topsy-turvy technology world. Just last Monday (in “Oh, no, not another marketplace”) we were bemoaning the fact that Ford and General Motors were going it alone by setting up independent eProcurement marketplaces, a model that we do not see as sustainable. By the end of the week the two companies announced their intentions to collaborate with a third, Daimler-Chrysler, to develop a common exchange for automotive transactions. The collaboration will take the form of a new company with its own identity, which will offer services to all automotive manufacturers and their associated companies.
Now it would be wonderful to believe that the world’s largest automakers had changed their minds based on the advice of IT-Director.com, but sadly we cannot make any such claim. Last week’s announcement will have been based on a great deal of discussion and negotiation between the companies and their representatives. It will have been going on in parallel with the deals already struck up, for example between Ford and Oracle, or GM and CommerceOne. It is currently unclear which technology companies will be the winners (or losers) in the still-infant venture; however, winners there will be, big time. A single market will require single sources for primary technologies, such as hardware, transaction management and application software.
What is perhaps most fascinating here is that the businesses involved are grasping the nettle of defining an electronic environment in which they are all prepared to work. Traditionally this role has fallen to technology companies or standards bodies, with businesses being involved to agree the form of any dialogues but not the environment itself. This move by the automotive companies is unprecedented, not only because of the level of collaboration it requires but also because it puts them in front of the technology companies. “We’ll sort out the bigger picture,” the car makers are saying, “and we’ll be in touch.”
This change should not be underestimated: it is a major sea change in how technology is defined and used. It suggests a growing understanding, on the part of businesses, of the capabilities of technology. It also suggests a maturing of the technologies themselves. “What” – i.e. the exchange – is less the issue than “how” and “when” it should be implemented. Finally it indicates a transferral of power, from the companies providing the enabling facilities, to the companies wanting to use them.
The decision by Ford, GM and Daimler-Chrysler may mark a watershed but it is not over yet. As we indicated in the last article, there will be a consolidation and standardisation of marketplaces not just within industries but across them: specific exchanges, such as aircraft spares, car parts or electronics raw materials, may be kept separate for practical purposes but will be based on the same underlying standards and technologies. The power of business to drive technology should not be underestimated, nor should the power of the consumer: the Internet has already given ample demonstration of how customers vote with their feet. These changes in the power base between technology and business, business and consumer are happening now. Traditionally, the IT sector has kept its position through hype and gadgetry, exploiting the FUD factor to the full, but the monopoly that such companies have held on technological understanding is coming to an end. It may be difficult at this stage to predict what the impacts will be: all we can say is that they will be deeply felt.
(First published 28 February 2000)
March 2000
03-03 – Cable, wireless and now the servers: C&W goes ASP
Cable, wireless and now the servers: C&W goes ASP
There seem to be two options on the cards for telcos today: consolidate or specialise. These options are not mutually exclusive, particularly for companies with an interest in keeping abreast of the latest developments.
One such company is Cable and Wireless. Despite a fair reputation as a telecommunications carrier, the company has thus far failed to make much of an impression in the Internet space. This all looks set to change with the announcement that C&W are to set up “the largest Web hosting centre in Europe.” The centre, which is to be located in Swindon, “will have the capacity to host every Web site currently running in the UK.” It is doubtful whether this will still be true by the time the site is complete, and more doubtful still that C&W will host every site in the UK. It is clearly the company’s intention, however, to grab a sizeable share of the hosting market, both nationally and internationally. Similar sites are being set up elsewhere in Europe, in the US and in Japan.
So – what’s it all about? Hosting is such a passive word, hardly doing justice to the potential of such sites. The current model is that companies can run their Web sites on C&W servers, paying for guarantees of service such as availability and performance. In the future, this model will become much more complex, with infrastructure service providers partnering with providers of commerce services, application services and business services in order to provide a one-stop service shop for the eBusinesses of the future. C&W know their strengths – with a telecommunications background, they are ideally placed to provide infrastructure services with the necessary service guarantees. Once established as a hosting company, partnerships will prove the key to C&W’s continued success. To quote one film, “build it and they will come.”
Cable and Wireless are now the seventh largest carrier of international traffic in the world. This is not to be sniffed at, but neither does it give the company a place at the top table. By moving into Web hosting, C&W are lining themselves up to catch the next wave – that of service provision. This specialisation is a logical step for a company which does not want to become an also-ran of the communications revolution. By doing so, C&W are making themselves an attractive proposition for the global content-and-delivery giants such as MCI WorldCom, Vodafone Mannesmann and AOL Time Warner. To specialise can also be to consolidate.
(First published 3 March 2000)
03-03 – For Palm, it is just the beginning. For 3Com…
For Palm, it is just the beginning. For 3Com…
Yes, yes, the Palm IPO has finally happened, and it was all we hoped it would be. At $53 billion, Palm, Inc. is now valued at nearly double that of its proud parent, 3Com. The road ahead may be rocky for the fledgling company: nobody, not even Palm, can claim a monopoly on the future of mobile devices. As for 3Com, its success at nurturing the company which now holds a 70% share of the handheld market may be overshadowed by the difficulties it faces following the Palm IPO.
First, to Palm. Against all odds, some would say, the company has succeeded where the likes of Netscape and Lotus failed. It has taken on the company with a reputation for standing on the shoulders of giants, the “innovator” that is Microsoft, and it has won – despite a less functional product, a less glamorous interface and a far less effective marketing machine. IT observers have used their best hindsight and determined the reasons why Palm has been so successful, but the truth is that nobody really knows. Whether it is down to battery life, weight or Zen, the fact is that the punters prefer Palm.
The future is likely to be less predictable than the past – we can say this with some certainty. Just because Palm owns the lion’s share of the PDA market, this does not guarantee its future success. It is not so much that the competition in the PDA space is fierce. Rather, the PDA space is a transient thing, involving collision rather than convergence with other technologies such as wireless communications and portable computing. Already the company is feeling the heat from its mobile rivals and does not always appear to be jumping in the right direction. As we covered \link{http://www.it-analysis.com/00-01-13-2.html?its,here}, Jean Baptiste Piacengino, Product Line Manager for GSM products at Palm Computing, saw a differentiation between PDAs and mobile phones, a line which we do not feel it is valid to draw. Symbian’s Quartz platform, which combines standard palmtop functionality with Web access, mobile telephony and Bluetooth facilities, was launched at CeBit as an indication of the kind of pressure that Palm will soon be facing. Other factors are at play, too: the mobile phone market has a far greater reputation for product replacement than the IT-based PDA market. The two products may end up the same, but purchasers may opt for devices from mobile phone manufacturers rather than from PDA makers, through force of habit.
Palm is unlikely to be resting on its laurels, and this fight is unlikely ever to be over. We only have to look at the turnaround of fortunes at UK-based Psion to see that, even if Palm loses a round, there is still plenty to play for. A huge strength for Palm is the massive range of applications it supports – existing users may be loath to give up the apps they know, not to mention to transfer the data that the device may store. There may be difficulties ahead for Palm, Inc., but the company is in as good a position as any to overcome them.
Meanwhile, what of 3Com? In some ways, the sell-off of Palm is a good thing, as it will enable 3Com to focus on its “core business” – whatever that means, given the current, chaotic IT landscape. However, Palm has played a large part in the recent success of 3Com – the PDA division made up 13% of 3Com’s 1999 revenues. This is not to mention the lift it is reputed to have given to the overall share price, even prior to the announcement of the Palm sell-off.
3Com has a reputation for solid strategies, but has not always been so successful at making them happen. The company’s bread-and-butter market of network cards and modems is now crowded with low-cost manufacturers, leading 3Com to refocus its efforts on the converging worlds of telephony and networking. Here, however, we are back to crystal ball gazing: the company is entering new territories that do not guarantee success to all comers. It is unlikely that 3Com will fail: the company has a reputation for innovative products and is striking partnerships with companies such as Microsoft, Hewlett Packard and Samsung to help it along. The separation of Palm may well be what 3Com needs, as it can now get down to a business it understands. However, it has to be said that 3Com’s base is far shakier than that of Palm, as the latter already has substantial momentum and presence in its own markets.
3Com’s launch of Palm sees the birth of two organisations, both of which will be subject to the whims of technology’s Lady Luck. Palm has come out of the gate with new products and an enviable market presence, leaving behind a 3Com that must knuckle down and make something of its renewed focus. Neither company is guaranteed success, but both stand to gain from their new status.
(First published 3 March 2000)
03-03 – Thomson Travel: there’s life in the old dog yet
Thomson Travel: there’s life in the old dog yet
At the end of last week, UK travel operator Thomson immediately followed its report of sliding profits (down 38 per cent on last year) with the announcement of a £100 million Internet strategy. In Friday’s Independent newspaper it was reported that analysts saw this as “too little, too late,” when positioned against the rising Web stars that are Lastminute.com and Travelocity. In this game, though, it ain’t over until the well-proportioned lady sings, and we would like to highlight a few points in Thomson’s favour.
The whole disintermediation concept is proving very difficult to follow. It seemed that intermediaries would be the first casualties of the Web revolution. Then it became apparent that new types of intermediary were starting to turn a sizeable profit – these are the transaction infrastructure providers such as Tibco and CyberSource, not to mention security, directory, billing and other service providers. Now with the clear success of the travel sites, it is obvious that intermediaries are here to stay – at least on the Internet. Companies that still lack a Web presence are quite right to be losing sleep.
Thomson’s main strength lies in the fact that it is not really a travel agent at all. High street outlets such as Lunn Poly are little more than a front for the wide variety of Thomson holidays, sold under different company brands. In other words, it is its own intermediary and it has real products to sell. To see why this is a strength, we only have to look at the proposed alliance between US and international airlines, to set up a portal for air fares and effectively cut out the middle man. A similar alliance between package holiday operators would certainly put the squeeze on the Lastminutes and other pretenders.
Second, we must emphasise the importance of the over-used phrase “the competition is only a click away.” In my own experience, when hunting for a last-minute weekend stay in London, I found Lastminute.com lacked the choice of other, similar sites. It was also – heaven forbid – slow, the navigation was non-intuitive and the end result cluttered. Even the best players in the game, new breed or old hands, can fail – to offer the necessary services, or to offer them in a way that turns off the prospective customers. Thomson have as much chance of success as anyone else.
Finally, nobody has a monopoly on the future. Financial analysts are making their best guess at how dot-coms should be valued and, while it is generally agreed that many are priced way above what is reasonable, nobody is prepared to say which ones. The dot-com crash may come, and the pure-play dot-coms are not guaranteed to beat the companies they are trying to oust. The Web is a channel for products and services: it is true that it is indispensable, but ultimately it is the products and services which will determine the success of a company, not the channel. Thomson cannot afford to miss out on the Web: to succeed, it must establish itself as an eCommerce player, and the quality of its web presence must be second to none. Provided it achieves this, its future will once again be down to the attractiveness of its holidays, and its ability to deliver them.
(First published 3 March 2000)
03-28 – Big Cis hits the top
Big Cis hits the top
Well, Cisco Systems have done it and suddenly hardware is ruling the roost again. The company’s market valuation rose yesterday to $555 billion, overtaking Microsoft which fell to $541 billion. For a company that has not been known for grabbing the spotlight, this really is quite a coup.
Cisco has, in a way, been the poor relation in what is a very rich family. Certain companies caught the technology wave of the eighties and nineties and left their contemporaries behind. Here we are not talking about the Oracles and Suns, although such companies have done exceedingly well in their own areas. There are a handful of firms which have become the de facto standard, and they have done it globally. These are Microsoft, Intel and, well, Cisco.
While it is easy to spot why Microsoft and Intel did so well, riding on the back of an IBM PC clone which undercut the young and arrogant Apple, the picture for Cisco is hazier. Once there were many vendors of networking equipment, but despite the best efforts of the competition one rose up above the rest. It is difficult to say why this was – probabilities point towards the company’s early realisation that it was not selling equipment, but infrastructures. However it happened, Cisco has continually grown its market share by developing its products and moving with the flow. Unlike most other networking companies, it has not grown through mergers; rather, it has favoured a string of acquisitions of smaller companies, buying key technologies and expertise rather than direct market share. This strategy has clearly worked.
Cisco has seen its biggest rivals merge and merge again, to no avail. Back in 1994, Synoptics and Wellfleet merged to form Bay Networks and “give Cisco a rival,” according to a San Jose Mercury News article of the time. In 1998 Bay and Nortel then got together, ostensibly to challenge Cisco. They’re still at it - recently Nortel Networks made an \link{http://www.it-analysis.com/99-11-10-2.html?its,announcement} which, the company threatened, would bring Cisco to its knees. Of course, it didn’t. Cisco overtook the second most valuable company in the world, General Electric, a few weeks ago and now it has overtaken Bill Gates’ empire.
What is perhaps most remarkable is that Cisco has grown into a leader not only in networking but in telecommunications, a market which is not known for sharing business. The company’s technology-agnostic stance is a key driver of this: the problem to be solved may be the unrelenting growth of the internet, but the solutions are legion – be it LAN or WAN, the best technology wins and no option is ruled out, be it ATM, frame relay, xDSL, or Ethernet, to name a few. This approach has kept the company flexible, enabling it to grow into new markets through its reputation in networking. As the world goes IP, the world is choosing Cisco.
Where now for the world’s most valuable company? Onward and upward is the answer. The desire for infrastructure is speeding up rather than slowing and at least for the next couple of years, new developments are bringing new challenges which Cisco will be priming itself to exploit. The rise of mobile may open some chinks in the company’s architecture, which will be interesting to watch but there is plenty of growth in big Cis yet.
(First published 28 March 2000)
03-28 – Directory sky hook too late for Novell?
Directory sky hook too late for Novell?
Novell was due for a shake-up and now it has had one. It was only a matter of time before the company moved its focus once and for all, away from its operating system heritage and into a directory-based future.
In Salt Lake City yesterday, Novell unveiled a new strategy that can be summed up with the d-word. The Internet has been the reason for Novell’s current instability – the market for a commercial network operating system kind of goes away when the whole world has standardised on a NOS which is available for free. Faced with a shaky set of results, the company has had little choice but to put its business firmly where it has been seeing the most success. No surprises that this should be back on the Net.
We have written about this before. We have had high hopes for Novell. The NDS technology is good, the vision is right and, up until recently, the market was wide open for a directory product. The only competitor – Microsoft – was so late with its Active Directory product, first announced in 1997 and finally launched with Windows 2000 a few weeks back, that all Novell had to do was occupy the space. How very, very easy life can be sometimes.
Novell seems to suffer from a most unfortunate sense of timing. For all its delays, Microsoft made no secret that W2K, Active Directory included, was on its way, so Novell cannot claim that it did not know.
There are a couple of points in Novell’s favour. The first is that the product is tried and tested. Most pundits are recommending that W2K be allowed to bed in for a while before it is adopted more generally – this may give Novell the breathing space it needs. The second point is that the company, having dropped its baggage, is free to work with whatever leads to its success. The company’s recent Linux announcements for its directory services are an indication of this – it is unlikely that Microsoft’s Active Directory will run on anything other than a single operating system.
Novell still stands a chance of success. There is no question about the importance that directory services will start to play as the Internet evolves to become a virtual services infrastructure. Given that Novell has lost its lead, whether it is NDS that is chosen may well prove to be mainly a question of slick marketing. Whatever it does, Novell had better do it fast.
(First published 28 March 2000)
03-28 – Online photo clubs – the killer app for SSPs
Online photo clubs – the killer app for SSPs
Yahoo! has joined the growing band of portal companies to offer online photo album services. The Yahoo! Photos site offers 15 megabytes of free Web space in which users can upload, share and even have photos printed and mailed back. The growing interest in this sort of service is doing more than threatening the high street photo services. It is also ushering in Storage Service Provision by stealth. And if the portal companies want to play this role, they had better start pulling their socks up.
There are now a good handful of companies offering online photo management, with big names including AOL, Kodak and Hewlett Packard. The services do not depend on ownership of a digital camera – with Kodak’s service, for example, one can send in a 35mm film and the company will notify you by email when the photos have been posted to the site. Digital camera owners can benefit from online albums to which photos can be uploaded and shared with friends and family. It all makes some sort of sense.
So far, we are informed, the service has yet to make too much of a stir. This may not matter, as factors such as bandwidth and the availability of digital cameras will affect it in the short term. The potential is there – companies are currently setting out their stalls and grabbing mindshare, so as to take advantage when the time comes. It seems that, once people start building online albums, it is relatively easy to get hooked and once the expectation is there the market will grow considerably.
Where things get interesting is that expectations are starting to be placed on the portal companies as service providers. For example, it is assumed and expected that there is some form of disaster recovery policy. Yahoo’s terms of service disagree – quote: “YOU EXPRESSLY UNDERSTAND AND AGREE THAT: a. YOUR USE OF THE SERVICE IS AT YOUR SOLE RISK.” Not too good if you lose the family photo collection, really. This is an indication of the shape of things to come: if companies want to be leaned on for services, then they had better start understanding the nature of the services they provide.
(First published 28 March 2000)
April 2000
04-28 – Sony goes Symbian: whence Palm?
Sony goes Symbian: whence Palm?
Sony shocked observers at the end of last week with the announcement of a collaboration with Texas Instruments and Symbian in order to produce a next generation mobile device. The announcement moved Sony squarely into the mobile mainstream, at the same time as raising the question – what happened to Palm?
In November of last year Sony and Palm announced that they would be collaborating on future versions of “the Palm Computing platform,” a.k.a. PalmOS. Palm would integrate Sony’s Memory Stick technology, whilst Sony committed to use the Palm architecture as a basis of “handheld consumer electronics products,” according to the press release. Last week’s announcement by Sony, TI and Symbian raises the obvious question about where next-generation wireless phones sit alongside handheld consumer electronics products, and what happens to Palm as a result.
It may well be argued that handheld computers and mobile phones are different. This has certainly been Palm’s stance in the past, as we have already discussed in a previous article. However, as we argued, this position does not align with mainstream opinion: WAP phones and devices such as the Nokia Communicator demonstrate that the merger of the two technologies is already with us. If Sony thinks the former, that never the twain shall meet, then Palm may already have a fight on its hands to keep its relationship. If PDA and phone are two sides of the same technology, then the battle may already be lost for Palm.
Palm may not be too bothered, with a reported 75% of the world’s handheld computer market, but this figure is misleading. Again, if phone and PDA are kept separate then Palm is on relatively solid ground but if not, then the statistic should be put relative to the number of mobile phone users there are in the world – this would put Palm well into the minority. Ultimately, if sources inside Palm are to be believed, the company is more interested in growing its applications portfolio and becoming a platform company than it is in hanging onto its hardware base. This agrees with the alliance formed between Palm and Symbian, in which Symbian provide the underlying kernel and Palm provide the “user layer”.
Overall, then, all may not be lost for Palm. This is just one alliance amongst many and on the ground there is little to suggest that Palm’s success is waning. Sony’s collaboration with Symbian may be a disappointment to Palm, whatever they say, but rest assured there is life in the old dog yet.
(First published 28 April 2000)
04-28 – Will USB become the system bus of the future?
Will USB become the system bus of the future?
It is a surprise that nobody thought of it before. A flash memory device in the form of a Universal Serial Bus (USB) connector – plug it into the hub and it becomes near-instantly accessible as a drive on a Windows 98 system. This is exactly what a Japanese company - Shin-Nichi Electronics (SNE) – has done with the launch of the Thumb Drive. Though touted as a competitor to Sony’s memory stick rather than the beginning of a more general trend, it doesn’t take much of a leap of faith to see the potential for such a form factor.
In fact, somebody did recognise the potential of the USB port as more than a cable connection. Over a year ago Aladdin Systems were giving out free samples of its eToken, a USB-based security dongle. This device, which can be carried on a keyring, can be used as a smartkey holding authentication and encryption information about a specific user. At the time that the eToken was launched, Aladdin was worried about the potential uptake due to the relative newness of USB. Today, there isn’t a motherboard or a laptop which does not have at least one USB port.
There are several advantages to these tiny USB devices – they are extremely light, portable and robust, and due to their plug-and-play nature they can be used by the least computer literate. Not to mention the fact that USB is rapidly becoming the interface technology of choice for PCs and peripheral devices such as PDAs and digital cameras. Now, with the advent of flash memory supporting half-decent data volumes, the USB-based form is becoming appropriate for peripheral storage. This move towards USB could have significant impacts on other interface standards, both outside and inside the box.
Efforts to build the internal bus of the future are lumbering forward, driven by the Infiniband consortium which was formed by the merger of two industry associations, Future I/O and System I/O. Infiniband offers gigabit transmission speeds managed by a switching fabric, all well and good. USB 2.0 products will only be able to support 480Mbps when they become available in the latter part of this year. While Infiniband may be compelling for PC-based servers, it is highly probable that USB will become the interconnect standard of choice for client and consumer devices. There is a drive towards integrated client PCs such as those from Hewlett Packard and Compaq (based, it should be said, on the iMac architecture) which consist of sealed PC boxes which can be extended using USB-based peripherals. Wyse has taken this one stage further with its Thinovation initiative. All of these devices have a common factor – they all rely on the expansion capabilities provided by USB.
USB is more than a connection standard, it is becoming a means by which cheap, extensible computing appliances can be constructed and exploited. Devices such as the Thumb Drive and the eToken, coupled with USB-based modems, cameras, wireless networking and other peripherals are the shape of things to come, driving down the cost of computing whilst increasing its flexibility both in the workplace and in the home. To bring things full circle, Sony’s PlayStation 2 may support Memory Stick but it also includes two USB ports. It is difficult to believe that, of the two interface types, Sony’s proprietary solution will take over from a standard that is rapidly gaining global acceptance.
(First published 28 April 2000)
May 2000
05-01 – Microsoft – down, not out, but not in neither
Microsoft – down, not out, but not in neither
Following the Microsoft trial has become an obligatory part of the job description of anyone in IT. There are two reasons for this. First, there are few individuals that have not had their jobs changed in some way by Windows, Office, Exchange, SQL Server or any of the other products in the Microsoft portfolio. Second, there are even fewer people that would not accept even a fraction of a percent of Bill Gates’ fortune. Even so it is difficult to see what impact the ultimate ruling will have.
For those who missed it, on Friday Judge Jackson revealed a “proposed Final Judgement” which, amongst other things, advocated the breakup of the company into two businesses. This was supported by a share ownership restriction meaning that Bill Gates and Steve Ballmer could only own shares in one or the other of the two companies. “Ha, that’ll show ‘em,” will have been the thought that ran through the minds of Gates-and-Ballmer wannabees across the globe. After all, this is a personal issue however well disguised in corporate legalese it may be. Other provisions included protection for companies which had given evidence against Microsoft in the trial, a levelling of the playing field for PC manufacturers that OEM Microsoft Windows and restrictions on Microsoft’s ability to use Windows to promote its own products over anybody else’s.
So what does it add up to? Let’s start with a couple of stories. Once upon a time there was a king who believed he was buying cloth so cleverly woven that only the most intelligent could truly see it. Once upon a time there was a company which so cleverly marketed its products, everyone in the world thought they would answer all their problems. Six or seven years ago Microsoft was such a company, the undisputed pretender to the IT throne. IT decision makers across the globe were implementing one-word IT strategies – “let there be Microsoft” – as they truly believed that they had found a company offering the silver bullet. In the former tale a little boy shouted “that man’s naked!” and all hell broke loose. In the latter, the many cries of “Microsoft is not the only answer!” have, for the most part, fallen on deaf ears. Even those that believe in Microsoft’s diabolic nature grudgingly accept that it is better the devil you know.
All good stories have a moral. The king’s new clothes may be summarised as “never trust a salesperson.” The Microsoft story has an additional moral: “know what you want.” Fair to say that, in most parts of the world, Microsoft is no longer seen as the keeper of the silver bullet. We have learned our lessons, and for this reason we may rest assured that whatever Microsoft has achieved in the past is no indication of its continued success in the future, court case or no court case.
(First published 1 May 2000)
05-01 – Phew! ISPs found not liable for email content
Phew! ISPs found not liable for email content
In these days of information overload, it seems a bit of an anachronism to expect an Internet Service Provider (ISP) to be responsible for the content of messages passing through its site. Not to mention the fact that the ISP may not even be the origin of a message, merely a step along the way. Fortunately, a court in the United States ruled yesterday that one ISP – Prodigy Communications Corp. – was not liable for its “failure” to spot a threatening email passing through its servers. It has to be said that this must come as a bit of a relief for ISPs across the globe where, though internet laws may not yet be internationally binding, legal precedent is everything.
The email in question was sent by an (as-yet untraced) person pretending to be Alexander Lunney who was fifteen at the time. Young Lunney’s father sued for damages as soon as it was revealed that his boy’s details were being taken in vain – it is this damage claim which has now been quashed. This is the upside – the downside is that the impostor will probably never be traced.
Fair to say that cyber law has quite a way to go. Still grappling with the vagaries of eCommerce, it now has to deal with the problems of the wireless Internet. For example, in whose jurisdiction is an off-world bank account? The recent MP3 copyright case may show how existing laws can be adapted to the cyberworld (as does this case) but it is fair to say that unheard-of situations are being invented every day and, no doubt, exploited by those who see no problem in doing whatever is not yet illegal, however unethical. Consider, for example, the merger of contract-free, pay-as-you-go telephone services (available in the UK) with the wireless Internet – such facilities may offer true anonymity, with all of its strengths and weaknesses.
Every development marks a whole series of opportunities, coupled with ever more complex threats - the technological sword cuts both ways. We are already a long way off having a stable, relevant, global legal framework for the Web: before this can come about it will be necessary for things to slow down a while. This seems unlikely: the way things are going the technology revolution is still accelerating and has a good few decades in it yet. In the meantime rulings such as this one show that common sense is still a valid legal currency. Long may it continue to be so.
(First published 1 May 2000)
05-05 – Tele2 – a niche wireless solution for the corporate masses
Tele2 – a niche wireless solution for the corporate masses
It was unfortunate that Patricia Hewitt, eCommerce minister and Member of Parliament for Leicester West, had to go into hospital at the end of last week for a knee operation. Otherwise she would have been available at the launch of Tele2’s innovative wireless Internet service, which she has been closely involved in for some time. So - what’s it all about?
Tele2 have developed a hybrid radio- and microwave-based solution which offers high-speed, always-on bandwidth. End-user customers install a “squarial” which connects to an Ethernet-based LAN device. The squarial communicates via radio to a local access node, which in turn is linked via microwave to a base station. A standard configuration for a small town is one base station and seven access nodes. Tele2 are currently offering a service of between 128Kbps and 1Mbps, with users being charged for the amount of bandwidth they require and the amount of data they transfer in both directions.
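As described, the tariff has two parts: a charge for the bandwidth reserved and a charge for the data actually moved. A toy illustration of that billing shape follows, written in Python with wholly invented prices – Tele2’s actual tariffs are not quoted here, so treat the numbers as placeholders.

# Toy model of a two-part tariff: pay for reserved bandwidth plus data transferred.
# The per-unit prices are invented for illustration; they are not Tele2's rates.
def monthly_charge(bandwidth_kbps, gb_transferred,
                   price_per_128kbps=40.0, price_per_gb=5.0):
    """Monthly bill for a given bandwidth tier and volume of data (both directions)."""
    bandwidth_units = bandwidth_kbps / 128          # tiers run from 128Kbps up to 1Mbps
    return bandwidth_units * price_per_128kbps + gb_transferred * price_per_gb

if __name__ == "__main__":
    # A small office on a 512Kbps tier moving 3GB in and out during the month.
    print(f"{monthly_charge(512, 3.0):.2f}")        # 160.00 + 15.00 = 175.00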
What is interesting about Tele2’s offering is that it is relatively easy to deploy and offers a high level of communications quality. Tele2 are actively advertising on \link{http://www.tele2.co.uk,their Web site} for landlords and small institutions to permit an access node to be situated on their (preferably flat) roof. The node requires only a mains connection and is no more than a foot high. Given this, it sounds ideal for business parks and campuses, not to mention smaller, less accessible towns. The high bandwidth, relatively low cost and the always-on capability has resulted in a number of trials of the technology for security cameras in car parks and service stations. It is also of potential benefit for organisations that want to run Web sites in-house, rather than outsourcing them to external ISPs.
Because the service is packet switched and is purely aimed at data users, it does not need to be backwards compatible with voice communications facilities. The downside is that end users still need their existing voice or ISDN lines but this may change in the future when Voice over IP reaches maturity.
Prior to the launch, Tele2 has been running a pilot service in the Reading area of the UK, which is currently supporting 500 connections. The launch involves Leicester, Leeds, Bradford and Nottingham, with a wider launch planned for September. Tele2 is focusing on the corporate space, but its target market is clearly the smaller organisations which cannot afford the equivalent bandwidth from land-based services.
It is in keeping with the present speed of technological change that new, innovative ways of using technology should appear. Tele2’s technology and business model will broaden the reach of high-speed Internet and, if done right, will give the lumbering roll-out of land-based broadband technologies a run for their money.
(First published 5 May 2000)
05-05 – We are all to blame for the I Love You virus
We are all to blame for the I Love You virus
If you didn’t know by now that there was a new virus about, you must have been on holiday in a place where the technology has yet to reach. Once again, some bright sparks have used their skills of innovation and technical expertise to ensure that they will have something to tell their grandchildren about. Once they get out of prison, that is. The poor, misguided youths have the FBI on their tail and it seems like only a matter of time before they are hauled before the international courts.
Don’t get me wrong. Clearly any human being that intentionally wreaks havoc on the scale that this recent attack has seen, is a weeping sore on the face of society and should be removed. There remains a nagging doubt, however – what bunch of idiots would design anything so fragile that a few students could put it out of action for days, particularly something on the global scale of the World Wide Web? Let’s face it, park benches are these days made out of concrete and fire-treated wood to protect against the expected acts of mindless vandalism that they will experience. Why do we not treat our fragile, Internet-based computing infrastructure with similar caution?
As with so much to do with technology, the answer is one of convergence. The hugely stable, resilient Internet was designed separately from the application suites and email systems, which were originally designed to run on closed corporate networks. It was a small step to design gateways which enabled email systems to exchange information over the Internet, but a giant leap would have been preferable. What we have ended up with is a mentality which says “Oh dear, I am getting email from a whole variety of unpleasant sources, and I can do nothing about it.” Nobody is taking responsibility – not the Internet Service Providers, nor the application vendors, nor the end users. All, however, are culpable. Mechanisms have existed for years to permit computer systems to be secured and the sources of emails to be checked. The majority of the world’s computer users accept an operating system on which the end user is essentially the super user, and applications with only a minimum of security checking set up by default. There is an argument that the user, whose involvement is necessary for the I Love You virus to spread, is the one to blame. But what chance does the hapless user stand, when every feature and function of the system is left wide open and accessible?
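By way of illustration, the kind of check that could have blunted this outbreak is hardly exotic. The sketch below is a minimal illustration in Python, included purely to make the point; the blocklist and sample file names are assumptions of mine, not any mail product’s actual configuration. It simply flags attachments whose extension marks them as executable content before they ever reach a double-click.

# Minimal sketch of an attachment screen: flag extensions that denote
# executable content before the message reaches the user's inbox.
# The blocklist and sample file names are illustrative assumptions only.
EXECUTABLE_EXTENSIONS = {".exe", ".vbs", ".js", ".scr", ".bat", ".com", ".pif"}

def risky_attachments(filenames):
    """Return the attachments whose final extension marks them as executable."""
    flagged = []
    for name in filenames:
        lowered = name.lower()
        # "LOVE-LETTER-FOR-YOU.TXT.vbs" hides behind a fake .TXT suffix,
        # so only the final extension is checked.
        if any(lowered.endswith(ext) for ext in EXECUTABLE_EXTENSIONS):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    sample = ["holiday.jpg", "LOVE-LETTER-FOR-YOU.TXT.vbs", "report.doc"]
    print(risky_attachments(sample))  # ['LOVE-LETTER-FOR-YOU.TXT.vbs']

None of this is new science; it is simply a default that today’s mainstream mail clients have chosen not to apply.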
Some will be saying that Microsoft should be held in some way responsible for the spread of this latest virus, as it is the provider of the operating system (Windows), the application (Outlook) and the programming language (Visual Basic) amongst other things. But this is missing the point. Microsoft has provided what customer demand has sought. Admittedly the company can guide requirements and expectations but it cannot decide them – if users want poorly secured systems, they will get them. In the past we could perhaps claim technological ignorance but this line is starting to pale. No more excuses – as a global IT community, we must either act to implement systems that really meet our needs, or forever live in the shadow of being held to ransom by some bunch of mavericks on a faraway isle.
(First published 5 May 2000)
05-05 – Web users may be going up, but service quality is not
Web users may be going up, but service quality is not
The UK Net population may be on the up, but unfortunately the same cannot be said for the quality of the services that said users are getting from Web sites, according to the results of two separate studies. The first, reported on \link{http://www.zdnet.co.uk,ZDNet UK}, was from Internet measurement firm MMXI Europe. MMXI found that, in just six months, the number of Internet users in the UK has risen from 7.8 million to 9.2 million – that’s a rise of nearly a million and a half. Meanwhile, on \link{http://www.theregister.co.uk,The Register}, a study commissioned by Sunrise Software found that, though 55% of companies noted that Web-generated enquiries were on the up, only 40% offered facilities for customers to email or otherwise communicate via the Internet.
It was not so very long ago that the Financial Times ran a mischievous little test in which it sent information request emails to the address provided on the Web sites of major corporates. Only a minority of companies responded.
Clearly companies still do not get it. Most (if not all) major businesses now have Web sites, but this seems to be more through luck than judgement, as companies are still not entirely clear what it is all for. The well-documented evolution of a Web site, from brochureware, through online catalogue and basic transactional, up to eCommerce site, should not need to be repeated (though we would be happy to provide a full explanation, should anyone request it). The Web is more than “just another sales channel” – it is an integral part of an organisation’s business processes, branding and reputation. Companies which leave vital information such as email details for information requests off their Web sites are doing more than just missing out on prospective customers (a number of whom would rather look for another Web site than make a call or write a letter). They are sending the message that they are difficult to deal with, slow to react, unfriendly and backward. Not too good really.
Now for the good news. The Internet really does offer huge rewards for those that exploit it successfully. Provision of email details is one part of the story – the Web site of an organisation should be designed with the same care as the marketing collateral, the sales training and the management of operational services. The Web is the company’s shop front, and it opens onto the world. As the number of Internet users continues to increase, organisations still have the choice of risking lost business through inadequate Internet facilities. However there will come a point at which the risk becomes a reality and companies with shoddy Web sites will not survive. This isn’t a “get online or die” message, rather it is noting that the Web site is a fundamental, integral part of each and every business. Put simply, companies with the best Web sites, coupled with the best products and services, will do better than those without. Your choice.
(First published 5 May 2000)
05-11 – G8 jaw, jaw, jaw about Cyberspace law
G8 jaw, jaw, jaw about Cyberspace law
It’s good to talk, agreed delegates at last week’s computer crime conference in Paris. Attendees from the G8 nations - the United Kingdom, France, Germany, Italy, Canada, Russia, Japan and the United States - spent three days discussing the issues surrounding computer and internet-related crime. Nothing concrete came out of the talks, apart from a general commitment to international co-operation to combat the growing threat. But that is already a good start, yes?
No. The agreement reached by the delegates was to co-operate more fully, a process that is long overdue. What is missing is any concrete action from the G8 concerning exactly how cybercrime is to be combatted. At the same time, chinks are already appearing in the international armour. As reported on Silicon.com, different countries favour different approaches: for example, the European nations have already signed up to a Council of Europe treaty that favours tighter laws and regulations, whereas the US prefers self-regulation to form the basis of the fight against cyber crime.
Of course, the problems faced are not easy ones to solve. Cybercrime, like everything else technology-related, is rearing its ugly head in most unexpected ways. The “traditional” view of computer-related crime, immortalised in “The Cuckoo’s Egg” by Clifford Stoll, involves seasoned hackers breaking into government computers and selling the uncovered secrets for drug money. The real world has moved on, however. Consider, for example, the denial of service attacks on high-profile eCommerce sites (eBay, CDNow and the like), or the students breaking copyright laws by exchanging MP3 versions of their favourite CD tracks. Consider the “I Love You” virus, purportedly the most expensive criminal act that the Internet has seen so far. It appears that the fragility of the Web itself poses a far greater concern than unauthorised access to company or personal data. The arrival of mobile and always-on, high bandwidth technologies will no doubt bring with them whole new weaknesses to be exploited. For example, there is a strong possibility that the convergence of PDAs and mobile phones will result in a new breeding ground for denial of service worms. What is more, the arrival of always-on Internet connections will open the threat of attack to PC owners across the globe. Personal firewall manufacturers will have a field day.
The main problem is one of catch-up for the institutions. Legal practices and international conventions are way behind the technology curve. With the current political set-ups, this looks unlikely to change. Let’s face it, if it takes over a year for Strasbourg to draft new guidelines for the expenses claims of Members of the European Parliament, what chance do we have of getting the G8 nations to agree on a global cybercrime framework before technology has changed the landscape beyond recognition? In corporate law, too, the world is moving too fast - consider the Microsoft trial, where the anti-competitive acts committed a few years ago are now hopelessly old hat. We need an internationally agreed, slick, flexible process of communication and mitigation of the threats posed, as they start to become possible and not after the events which will undoubtedly occur. One thing is for sure: the time for talking is well and truly over.
(First published 11 May 2000)
05-11 – Storage Networks - the storage space goes ASP
Storage Networks - the storage space goes ASP
The fog surrounding the Application Service Provider (ASP) market is starting to clear, as companies move away from the traditionalist view and start to demonstrate solutions that reveal the real potential of ASPs. One such company is Storage Networks, a company that is coming from “over there” to spread the word about Storage Service Provision “over here”.
So - what is an ASP? At the moment there seem to be two prevailing views. ASPs are generally accepted to be concerned with delivering “applications over the wire,” where an application can be any software package. A more specific model is that being touted by enterprise package vendors such as Oracle, Siebel and SAP. For these companies, the ASP model is a means of marketing their products to the small to medium sized enterprise customers that have been unable to afford such products as an outright purchase. With a rental/lease-based ASP model, smaller organisations can benefit from such applications and, of course, vendors can benefit from reaching a previously untapped customer segment. Taking the two views above, the ASP model is both an architecture and a business model.
With a bit of imagination however, ASPs can be much, much more. Rather than considering ASPs as applications that can be served, the model should cover the provision of the widest possible variety of services. Services can be arranged as a stack, with lower level communications, storage and processing services serving higher-level information, application and business services. This view has already been discussed from a theoretical standpoint in the IT-Director.com ASP feature. Now, real examples of its use are coming onstream, not as valiant attempts by niche technology players but through companies who are already reaping huge rewards from the model.
One such company is Storage Networks, which offers a managed raw storage service to other ASPs. Storage Networks has implemented fibre-based Metropolitan Area Networks (MANs) in 36 US cities. Each MAN is used to connect a variety of types and configurations of storage equipment (for the technologists, Storage Networks is running both SAN and NAS equipment over the same fibre infrastructure). These configurations are then used to provide raw storage, backup/restore and data replication services to Storage Networks’ customers who pay a fixed monthly fee per Gigabyte available. The company is launching a service next week in the UK, and intends to be in ten European cities by the end of the year.
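To make the utility comparison concrete, here is a back-of-the-envelope sketch in Python of how a per-gigabyte, per-month service charge stacks up against buying and administering the same capacity in-house. All of the prices are invented for illustration, since no rate card is quoted here.

# Back-of-the-envelope comparison of storage bought as a utility versus owned outright.
# All prices are illustrative assumptions, not Storage Networks' actual tariffs.

def managed_storage_cost(gb_provisioned, months, fee_per_gb_month):
    """Total spend on a managed service billed per gigabyte per month."""
    return gb_provisioned * months * fee_per_gb_month

def owned_storage_cost(gb_provisioned, months, capex_per_gb, admin_per_month):
    """Total spend on buying the disks and paying someone to look after them."""
    return gb_provisioned * capex_per_gb + months * admin_per_month

if __name__ == "__main__":
    gb, months = 500, 24
    service = managed_storage_cost(gb, months, fee_per_gb_month=30.0)
    in_house = owned_storage_cost(gb, months, capex_per_gb=150.0, admin_per_month=4000.0)
    print(f"Managed service over {months} months: {service:,.0f}")
    print(f"In-house equivalent over {months} months: {in_house:,.0f}")

The point is not the particular numbers but the shape of the decision: the service turns a lumpy capital purchase plus an ongoing administration burden into a single, predictable monthly line item.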
All is not roses in the storage camp, not least due to the fact that the different storage vendors have yet to standardise on a single mechanism for managing storage. Storage Networks’ customers can monitor their own virtual store through a read-only interface and can request repartitioning or increased bandwidth. All such requests are funnelled back to a management centre where updates are made using the proprietary software of each vendor. Against this backdrop, the other issue faced by companies such as Storage Networks is the accelerating requirement for storage. The company is finding out that despite the continued improvements in disk sizes and data rates, storage supply is only just keeping up with demand.
There are clearly still some problems to be overcome in the storage market, but let us not be distracted from the fact that the underlying trend is towards the provision of storage as a service. Such facilities go way beyond the 10Mb of disk space currently allocated by ISPs, often with little or no guarantee of service levels. What we are talking about here is the evolution of IT towards enterprise-scale, performant, available facilities that are delivered as a utility rather than in a crate. Faced with the increasing cost and complexity of technology, the service provision model is one that few organisations will be able to avoid forever.
(First published 11 May 2000)
05-19 – Love and viruses: the upside of crying wolf
Love and viruses: the upside of crying wolf
Security software companies have been accused of hyping up the potential risk of new viruses. It stands to reason: the more dangerous it is out there in cyberspace, the more we shall turn to trusty virus detectors, firewalls and other protection mechanisms. It’s all good for business. With the ongoing saga of love bug viruses hitting the news over the weekend, even the protection software vendors are starting to get uncomfortable about the dangers of over-egging the viral pudding.
According to Dan Schrader, chief security analyst at Trend Micro (reported on News.com), there is a fear that all the noise and stink created around the love bug risks making the whole issue fade into the background noise. “If we cry wolf often enough, they’ll tune us out entirely,” said Schrader. This may be true, but at the same time there are positive signs that the PC proletariat are starting to understand the nature of such things as malicious attachments. After all, these new worms are not like viruses of old, which would infect executables without any outward indication of their presence. It is understandable how, say, a utility or game could be passed from one computer to another, carrying the virus with it. This did not have to be malicious – there have been several instances of viruses being passed on the cover disks of magazines. However, malicious email attachments require user intervention, namely a double click on an attachment that is a virus in all but name. It isn’t buried in the middle of a harmless program – it is a program in its own right, relying on a lack of caution on the part of the user. If having a lack of caution is a crime, I daresay we are all guilty (who did not double click on an email from a friend last Christmas, containing the message “Have a look at this!” and the attachment “Elf bowling.exe” or somesuch?)
This is where crying wolf is working in our favour. Far from being turned off by the constant media blare about the latest variant of worm X or Y, a large number of people have been on the receiving end of such malicious programs. I, for one, received two emails from acquaintances saying “do not open anything I have sent you! I was on the receiving end of the I Love You virus.” I have certainly been a little more cautious when opening unexpected executables or VBS files (Visual Basic Scripts). Not that I have yet received any of the latter, but you know what I mean. All the media attention in the world cannot interest us as much as some real experiences: there are few organisations that were unaffected by the recent attacks, and as a result both organisations and individuals are becoming a lot more savvy.
As is discussed elsewhere today [link to G8 article], we cannot fully predict the vulnerabilities that will be opened up by new technologies. However, the signs are encouraging that the water level of e-nous is rising. We are getting smarter – hopefully we shall learn fast enough to beat the next generation of viruses before they happen. After all, the solution is just a double click away.
(First published 19 May 2000)
05-19 – Monterey on borrowed time
Monterey on borrowed time
Every now and then, industry observers have to stick their necks out and this is one of those occasions. No ifs, no buts – Project Monterey may be nearing its release date, but it will find that it has only a short life span. Why, oh why, I hear you ask, is the flagship OS to be left by the wayside? The answer is simple – Windows 2000 and Linux will make Monterey an irrelevance.
Project Monterey was launched a couple of years ago as a joint project between SCO, IBM, Sequent, Intel and Compaq. The plan was (and still is) to produce a Unix operating system for Intel’s IA-64 platform. The intention was that the Unix-based PC server would become as much of a commodity item as the Windows-based PC client – a standard supported by the majority of hardware manufacturers and software vendors, with the resulting economies of scale for suppliers and customers alike.
As usual, in the duration of the project, the technology world has changed. There are two major changes which should be looked at. First, we should consider Microsoft. At the inception of Project Monterey, Microsoft was on the back foot when it came to server operating systems. Attempts by the company to demonstrate the scalability of Windows NT were not impressing the new generation of infrastructure customers, and Unix was becoming the OS of the Web. Microsoft has taken its time and now, with Windows 2000, it is in a position to fight back at least at the lower end of the server market.
Second, Linux has moved from academia to the mainstream, winning mindshare as the commodity operating system that Monterey had designs on becoming. The strength of Linux’s position is in the fact that it has been ported not only to IA-64 but also to just about every platform under the sun. Last summer it was already being reported that Monterey consortium members, whilst remaining bullish about Monterey on IA-64, were tellingly quiet about porting the new OS to other platforms. The attitude to Linux could not be more different: last week, for example, IBM announced that services and software were now available for Linux on the S/390 mainframe. According to CRN News, Greg Burke, VP of Linux for S/390 saw this step as a revitalisation of the S/390 platform and of mainframes in general. This line just doesn’t tie up with IBM’s current stance about Linux being an interim step for customers wanting to move to higher end systems running AIX or Monterey – after all you can’t get much higher than a mainframe. Try asking Mr Burke when Monterey will be available for S/390.
Ultimately it will be the customer that decides, and it is here that we are already seeing the last nails in Monterey’s coffin. According to the Register on Friday last, Fujitsu Siemens already has around 60 customers who are trialling 4-way Itanium servers based on the IA-64 architecture. And what about the operating systems the prospective customers are choosing? “Most users want Windows 2000, others ask for Linux but hardly anyone is interested in Monterey,” said a source from Fujitsu Siemens. Telling stuff.
(First published 19 May 2000)
05-25 – From blue screens to screen blues for handheld manufacturers
From blue screens to screen blues for handheld manufacturers
Not so long ago, the bane of computers was the blue screen of death. Today, the world has moved on – handheld manufacturers are more concerned about having any screens at all, or at least ones worth looking at. Hewlett Packard are red-faced over the fact that their new PocketPC-based device can only display 4,000 colours, whilst Palm are facing supply problems of their own. These are exactly the sorts of events that cause shake-ups in the industry.
First of all, let’s look at Hewlett-Packard. The manufacturer with a reputation for the highest possible quality was embarrassed to admit that its new handheld had been beset by design problems. According to News.com, a 16-bit component was accidentally substituted with a 12-bit component, meaning that the number of colours that can be displayed has been constrained to 4,000. This is an order of magnitude less than that expressed in the marketing literature. It is quite a surprise that Hewlett Packard should let this one through. It is also an indication of the reality faced by all technology companies developing complex products, where even a small error can have enormous consequences. The message is: buyer beware, don’t believe the marketing. Despite the fact that this error has not involved the king of hype, namely Microsoft, this may be just too much reality to bear, coming as it does just at the moment when the company is pushing hard its new PocketPC platform. This release of the now-rebranded Windows CE is “new and improved,” covering the tracks of the poor synchronisation, bugs and usability problems that beset previous versions of the software. The last thing the Seattle software giant needed was further evidence of technological weakness, even if this time it was not the cause.
Palm has had its own share of display problems – this time, due to supply being unable to keep up with demand. This is not just an issue for Palm, but for the whole LCD market, particularly with the need for enhanced screens for WAP phones and other electronic devices. Let’s face it, today if it hasn’t got a natty screen it’s probably not worth having. As if screen shortages weren’t enough, Palm has also had problems sourcing sufficient Flash memory. At the moment it is still possible to find Palm handhelds on the market, but some companies are already out of stock and are taking back-orders. Interestingly, no problems have yet hit Palm clone manufacturer Handspring. Maybe the latter had a stock surplus due to its initial difficulties in managing the glut of requests for the devices when they came on the market a couple of months ago. Also, the Handspring device does not use Flash memory. It could just be that the fault lies with Palm, being unable to manage its own supply chains. Whatever the case, the danger signs should be flashing for Palm, which cannot afford to trip over and lose the lead to an increasingly slick competitor from Microsoft and its partners. If Palm is having screen shortages now, it needs to get its act together before the summer, when a lowering of LCD prices will fuel a new boom in their use.
Palm still holds the lion’s share of the market, but the next few months will be make or break for the handheld manufacturer. Ultimately the company could be living on borrowed time as smarter, more functional mobile phones start to replace the need for handhelds. So far, Nokia, Ericsson and the like have shown only partial interest in borrowing technology from the handheld market, with Symbian as the main contender. The next six months will be very interesting indeed, and neither Palm nor Microsoft can afford to make any mistakes.
(First published 25 May 2000)
05-25 – Is it time for a socially responsible Internet?
Is it time for a socially responsible Internet?
This week, a Paris court judged that Yahoo.com – that’s the US-based firm, not the French subsidiary of Yahoo – was in breach of French laws concerning the sale of materials with racist overtones. What is more, the company has two months to prevent French surfers from accessing any auctions of Nazi memorabilia and other items deemed racist. Ouch.
Of course, this is not the first time that auction sites have been condemned for hosting the sale of unseemly stuff. In November last year, eBay was taken to task by the Simon Wiesenthal Centre over its auctions of memorabilia which “glorify the horrors of Nazi Germany”. Indeed, according to this week’s New York Post, eBay was back in the spotlight, as it removed from sale the charred remains of what was reputedly a gun salvaged from the former Branch Davidian compound in Waco, Texas.
At first sight it appears that the auction sites are attempting to have the butter and the butter money, as they say in France. eBay does not permit Nazi memorabilia to be sold on its German eBay site, as that contravenes German law. However, the Waco gun was removed from the site due to eBay’s policy of not allowing gun sales. The two cases reveal a significant difference in policy. In the Waco gun case, spokesman Kevin Pursglove was quoted as saying “eBay won’t host any gun sales.” However, in the case of the Nazi memorabilia, Pursglove was quoted (this time on \link{http://www.zdnet.com, ZDNet}) as saying “We expect eBay users to adhere to the policy and guidelines of the country in which they are living. It is not our role to police compliance.” So it is unclear whether eBay is prepared to take a moral stance on the items that are auctioned, or not.
What is also unclear is the matter of jurisdiction. The French law suit has effectively said that a US company is not permitted to make available materials to French citizens via the Internet. This ruling is virtually impossible to implement and, indeed, its validity is unclear. Just as it would be possible for a French citizen to telephone a bid to an auction in the US, so such a bid can be made over the Internet. It is difficult to see how the auctioneer can be held responsible for the actions of prospective bidders, particularly across international boundaries. Even if successful, it is even harder to see how such rulings can be enforced. As noted by President Clinton in the \link{http://www.mercurycenter.com, San Jose Mercury News}, trying to police the Internet is “like trying to nail Jell-O to the wall.”
Ultimately, auction companies that want an international standing will require certain, globally accepted standards to be upheld. The example of eBay suggests that it is applying national laws in its offerings to individual countries, but it is applying its own (apparently Californian) moral standards across all its sites. The definition of “globally acceptable” may be a tough nut to crack – for example, even Barbie Dolls might offend the sensibilities of certain nations – but nonetheless a line has to be drawn between what is acceptable and what is not. Companies such as Yahoo and eBay may claim immunity from such standards, but if so they are not walking their own talk – Yahoo, for example, has been accused by civil liberties groups of “outing” authors of offensive postings on its chat boards (sometimes without even a complaint being made). Such companies are either proactively responsible for what is happening on their sites or they are not – they cannot have it both ways.
(First published 25 May 2000)
05-25 – Microsoft SOAP on the ROPEs
Microsoft SOAP on the ROPEs
Hmm… very interesting. Microsoft seems to be keeping its SOAPy hands firmly to its chest. SOAP, or Simple Object Access Protocol, is a technology standard announced by the company as part of its DNA initiative. Essentially it involves using XML as a communications language to enable object and component services to be accessed remotely, for example over the Internet – a kind of long-distance remote procedure call. Great idea, but now Microsoft seems to be balking at the principle of opening up a dialogue (as it were) about this new “standard”.
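To make the idea concrete, the fragment below sketches roughly what such an XML-over-HTTP remote call looks like. It is a minimal Python illustration of mine: the host, namespace and GetStockQuote operation are invented for the example, and since SOAP is still a moving target the envelope shape should be read as indicative rather than definitive.

# Minimal sketch of an XML-based remote procedure call in the SOAP style.
# The host, path, namespace and GetStockQuote operation are invented for
# illustration; they do not correspond to any published Microsoft service.
import http.client

ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetStockQuote xmlns="urn:example-quotes">
      <Symbol>MSFT</Symbol>
    </GetStockQuote>
  </soap:Body>
</soap:Envelope>"""

def call_remote_service(host, path, body):
    """POST the XML envelope over plain HTTP and return the raw XML response."""
    conn = http.client.HTTPConnection(host)
    conn.request("POST", path, body=body,
                 headers={"Content-Type": "text/xml; charset=utf-8"})
    return conn.getresponse().read().decode("utf-8")

# Example call (requires a real listener at this invented address):
# print(call_remote_service("quotes.example.com", "/soap", ENVELOPE))

The appeal is plain enough: the whole exchange is ordinary text over ordinary HTTP, which is precisely why it threatens the heavier object brokers discussed below.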
There are several reasons why Microsoft does not want to reveal its hand at this stage. For a start, SOAP is not ready for use; indeed, it is little more than a twinkle in the company’s eye at this stage. If Microsoft were to reveal all, it would be opening itself up to both ridicule and idea stealing. Let’s face it, there are plenty of people who would not be able to resist having a pop at the software giant – it’s a way of getting a bit of payback for the extortionate costs of the software over the years. As for the stealing of ideas, imagine what would have happened if Sun had released details of Java to the IT population before it was fully fleshed out. There would have been no shortage of companies prepared to copy and even patent the fledgling ideas. Okay, so those are the charitable reasons. Of Java, more later.
On The Register yesterday a second suggestion was raised. SOAP is to be an instrumental part of Microsoft’s Next Generation Windows Services (NGWS), also known as MegaServices. Detail about these is sparse but the oft-quoted example is of Microsoft Passport, which can be used as an authentication mechanism for any internet-enabled application. Other MegaServices could include transaction management, payment/billing, inventory and so on. Clearly some mechanism is required to access these services. SOAP is the ideal candidate, coupled with a communications handling mechanism such as the Remote Object Proxy Engine or ROPE. Now, suggested The Register, SOAP is being kept quiet not for its own sake but for the sake of NGWS, upon which Microsoft’s whole future may depend. Having seen Microsoft’s fears that the ongoing court case may kill NGWS, this may well be true.
There is one more reason why Microsoft is being reticent about the detail of SOAP. As we’ve already mentioned, SOAP is following the same path as Java, and with good reason – like Sun, Microsoft does not want to lose control of the “standard” once it appears. Why? Because, to Microsoft, SOAP is a Java killer and more – the company has set its sights on the whole EJB/CORBA caboodle. With an XML-based standard for application intercommunication, why bother with the layers of complex interfaces that have evolved around the Java spec? That’s the marketing theory anyway – the reality is that, with ROPE, Microsoft is re-inventing the request broker in its own image and hoping that its adopters will squeeze those nasty competitors out of the picture.
We’ll just have to see how it gets on.
(First published 25 May 2000)
June 2000
06-01 – Cybercafes reach the farmers of India
Cybercafes reach the farmers of India
If there is one thing that India has got, it is railways. With railways come both stations and many, many thousands of miles of straight track, the latter including channels for telecommunications cables. Just north of Madras in the Bay of Bengal, a pilot project has started which will link five stations to the Internet by the end of June. BBC News tells us that, at each station, there will be a cybercafe, with Internet terminals available for rent on an hourly basis. Perhaps more importantly, one of the stations will act as a wireless base station providing Internet services to up to fifteen households.
From one perspective, what we are seeing here is how existing technologies can be combined innovatively to spread the reach of the Internet. Like Tele2 in the UK, which uses a combination of radio, microwave and land-based services to provide high-speed Internet access to business parks, this is about combining technologies in the most cost-effective way. In a way, each technology is competing against the others, encouraging progress and exerting a downward pressure on costs.
In addition there is a more important process at play here. Technology does not exist for its own sake, but as an enabler. To coin a cliché, the Internet is the great leveller, both for businesses and for individuals, and that is what will give countries still at the trailing edge of technology some freedom to catch up.
Not that India has anything to be ashamed of, of course. We have already written about how the second most valuable software entrepreneur in the world, Azim Premji, has built his fortune on India’s ability to provide high quality software at a lower cost. Similarly, Indian companies such as Hexaware have shown how innovation does not need to be at the expense of software quality. All the same, Indian cities have many advantages over the country’s rural communities. The Great Leveller is now turning its attention to this divide.
What can the Internet offer the rural population of India? The answer is simple – education. The wealth of information resources offered by the Internet, and the form in which these resources are offered, go hand in hand: the Internet never sleeps, and computer-based software never loses patience. Furthermore, once the connection is in place, the majority of resources are available for free.
There is still plenty to be done – this project is a pilot, after all. Many villages only get electricity once or twice a day (again, a problem to which technology may well have the answers). Internet resources may not be entirely suitable in their current form (but could be made suitable, once the need exists). Educational organisations used to working in one way may well find that it is worth their while supporting the efforts of this project, rather than supporting educational processes directly as they have in the past. One thing is for sure - the tendrils of the Internet may reach around the globe, but it will not truly be a global phenomenon until the world’s population has access to it. The Great Leveller will not rest until old and young, rich and poor, city and country, West and East, are afforded equal access to this fundamental resource.
(First published 1 June 2000)
06-01 – NetSanity brings one-stop publishing to the Web
NetSanity brings one-stop publishing to the Web
As we claw our way out of the primeval swamp of technology, we are hitting a wave of diversification. The move towards appliances and devices may have its attractions but it also brings a series of problems with it. One of these is how to handle the range of forms that information must take to reach this variety of devices. As always, where there is a problem there will be a start-up which claims to have found the solution, and in this case it is NetSanity.
NetSanity’s primary goal is to bridge the gap between information display on the Web browser and on WAP-enabled smart-phones. There are several possible solutions to this, depending on where one thinks the problem should be solved. NetSanity has gone back to what it sees as the origin of the problem – the stage of preparing the information for publishing. The company offers a one-stop publishing solution in which organisations can prepare their information in an agreed structure based on XML. NetSanity then provides an ASP-type service which takes the information and delivers it to the supported range of formats and devices. As well as HTML for browsers and the Wireless Markup Language (WML) used for WAP phones, NetSanity supports other formats including SMS, which can be delivered to existing mobile phones and pagers.
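To make the single-source idea concrete, here is a minimal sketch (ours, not NetSanity’s schema or code) of one story held in XML and rendered three ways – full HTML for a browser, a headline-only WML card for a WAP phone, and a trimmed plain-text SMS.

    import xml.etree.ElementTree as ET

    # One story, stored once in XML; element names are our own invention.
    story_xml = """<story>
      <headline>Cybercafes reach rural India</headline>
      <body>Five railway stations near Madras are being linked to the Internet.</body>
    </story>"""

    story = ET.fromstring(story_xml)
    headline = story.findtext("headline")
    body = story.findtext("body")

    def to_html():
        # Full story for a desktop browser.
        return f"<html><body><h1>{headline}</h1><p>{body}</p></body></html>"

    def to_wml():
        # Headline-only card for a WAP phone.
        return f'<wml><card id="news"><p>{headline}</p></card></wml>'

    def to_sms():
        # Plain text, trimmed to a single 160-character message.
        return (headline + ": " + body)[:160]

    print(to_html())
    print(to_wml())
    print(to_sms())

The value of the approach is that the content is written once; adding a new device means adding one more renderer, not republishing everything.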
A rather clever part of NetSanity’s offering is the preference facility it gives to browser and phone users. Rather like the sadly departed PointCast, NetSanity provides a console (termed the SmartBar) that offers a selection of content – news, sports results and the like. However, the user’s preferences are not stored on the device but on the server, so users can keep up to date whether they are using a WAP phone or a Web browser. Preferences can be managed on a per-device basis – for example, a user may wish to see only the headlines on a WAP phone but the full story on the desktop.
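The per-device preference idea is worth a sketch of its own. Something as simple as the structure below (field names ours, purely illustrative) is enough to let the server hand the same user different cuts of the same content depending on which device comes calling.

    # Server-side preferences keyed by user and then by device, in the spirit
    # of the SmartBar description above; names and fields are our own.
    preferences = {
        "jon": {
            "wap":     {"channels": ["news", "sport"], "detail": "headlines"},
            "browser": {"channels": ["news", "sport"], "detail": "full story"},
        }
    }

    def content_for(user, device):
        prefs = preferences[user][device]
        return f"{user} on {device}: {', '.join(prefs['channels'])} ({prefs['detail']})"

    print(content_for("jon", "wap"))       # headlines only
    print(content_for("jon", "browser"))   # the full story

Because the preferences live on the server rather than the handset, the user sees a consistent service however they connect.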
NetSanity is keen to stress that it is more a conduit than a service provider. Its prospective customers are the content and service providers themselves, who can turn to NetSanity to get their message out to a wider audience. The model appears to be working: the company has already signed up Nokia to trial a NetSanity-based service.
The future target of NetSanity is to support the drive towards mobile commerce and location-based service provision. Both of these goals will pose new challenges for the company, as it will not only be the information which needs to change but also the processes and transactions involved. Even if NetSanity succeeds in the short term, it will face an even bigger hurdle later on. Whatever the case, it is good to know that innovative companies such as this will continue to exist for as long as there are such hurdles to overcome. The giants will not be able to do it by themselves.
(First published 1 June 2000)
06-01 – SurfTime: BT drops the Beach Ball
SurfTime: BT drops the Beach Ball
After much speculation and some ribbing from friend and foe alike, it looks like the end is nigh for BT’s SurfTime Internet service. Despite a relaunch effort last week, even spokespeople inside the company are seeing the writing on the wall. So – where is it all going wrong for BT?
According to BT, “SurfTime is the new way to use the Internet without paying call charges.” Unfortunately, this is wrong on two counts. SurfTime may have been new when it was first touted last year, but today a number of ISPs and other communications companies, such as Amazon, Freeserve and NTL, offer similar services. BT’s offering may now be competitive with these other providers, but it is not free – it costs £19.99 for a 24-hour service and £5.99 for evenings and weekends, plus (read the small print) users may pay an additional charge for the ISP service they connect to. Okay, there are no charges for individual calls, but a flat fee for the month does not really qualify as “without paying” in anyone’s language.
SurfTime’s launch on Thursday last week was followed by a ruling from Oftel which requires BT to allow other telcos to offer unmetered access to the Internet. Essentially, this will force the company to release its iron grip on the “last mile” between the local exchange and the home, at least for Internet access. This will open up whole new levels of competition which, over the next few months, look set to force BT to replace SurfTime with something more competitive. BT has faced competition before, but from rival technologies: this time, it faces competition on the local loop, which has proved unassailable in the past.
What is more, a little research on The Register discovered that, though BT announced 20 ISPs supporting the SurfTime service, 17 of these belonged to the same service provider - Affinity Internet. Hardly a ringing endorsement. A BT spokesman was quoted on ZDNet UK as saying “If neither ISPs nor the public want it, it will die a death and so be it.” If that’s painting a gloss, we’d love to hear what BT spokespeople are saying behind closed doors.
What we are seeing here is the dying throes of a very old company, which has already announced a strategy to release it from the shackles of the past. BT has reached a point at which it can no longer rely on momentum to bring in profits, as proven by its less-than-successful results announced a month ago. The company’s new ventures should stand it in good stead for the future, as long as it is prepared to act as one of the new generation of communications service providers and not as an old-style, monopolistic telco. It is right that Oftel has acted to level the playing field, at least for unmetered Internet access: we wait with interest to see how BT will react. Any decisions it makes over the next couple of months could well seal the company’s future.
(First published 1 June 2000)
06-07 – DOJ separates Windows from its reason
DOJ separates Windows from its reason
Sometimes the simplest of statements can hide the most complex of matters. Take this one, for example: “The separation of the Operating Systems Business from the Applications Business.” As the whole court case was started on the basis of the line between the two being less than simple to define, it is worth delving a little into the detail: what, in technical terms, is the planned breakup said to entail? We need look no further than clause 7 of the final judgement. To quote:
“Operating System” means the software that controls the allocation and usage of hardware resources of a computer.
As far as the judgement is concerned, the operating systems business need concern itself only with the OS as defined above. The term “computer” is given a pretty broad definition, including PDAs, mobile phones and set top boxes (but, noticeably, not games consoles). Even broader is the definition applied to the term “Application”, comprising anything else you can think of. Which leads us to a very interesting situation, very interesting indeed.
First of all, Microsoft is currently shipping versions of its OS family with a whole raft of facilities that go way outside the scope of the definition of “operating system”. These include such high-level features as transaction servers and email facilities, but also lower level software such as calculators, media players and even text editors. The judgement considers all of these as “applications”, to be assigned to the applications business. The Microsoft OS business has been allowed a license in perpetuity to all “applications” which it currently sells as part of an OS package (apart from the browser, that is). In the future, it appears that it will be free to develop new applications, but each will be under the scrutiny of the compliance committee. This makes things messy for the company, to say the least – after all, who decides whether something is part of the operating system or not? For example, is internetworking software part of the OS? If so, what about secure internetworking software, such as encryption/decryption? Is a music player part of the OS? After all, it drives a hardware element of the computer.
The second issue is one of bundling. Microsoft is currently a customer-facing organisation, delivering operating systems and applications that work in harmony. However, it is the applications that are of more interest than the OS. Computers are evolving into devices and appliances – essentially sealed packages that perform specialised functions. Think of Network Attached Storage or email appliances, or the EasyPC sealed PC initiative that has spawned the iPaq and NetVista, or even set top boxes and internet phones. All of these are manufactured from both software and hardware components, and the user of each is encouraged to ignore what is happening under the bonnet. In other words, the only way the OS can gain attention is by self-promotion, à la Intel Inside. Microsoft’s success has been helped by its appeal to the business end user and consumer alike. However, the proposed division could hide the OS for good behind an application fascia, forcing the company to become a pure OEM business – a far harder market to keep sweet than its traditional stamping ground.
Microsoft’s OS business may be permitted to develop new applications of its own, but it will have a standing start. It could buy its traditional competition (how ironic, if the new company acquired Corel and Borland), or start from scratch. One thing is for sure, it cannot survive on the strength of selling nothing but operating systems – the market is too slow and the potential for innovation is limited. The breakup of Microsoft will see two companies leave the Redmond fold – one is an applications business with good potential for growth, one is an operating system company that needs all the luck it can get to stay in the game. Windows owes a large part of its popularity to the availability of applications: without the latter, the former cannot survive.
(First published 7 June 2000)
06-07 – NEON’s iWave bridges the business process divide
NEON’s iWave bridges the business process divide
World religions are defined by some as “different ways up the same mountain” - a phrase that could also be applied to the ever-converging domains of IT software. The border between enterprise packages and bespoke applications is becoming ever more fuzzy, and the question is moving from whether to build or buy, towards how to achieve business solutions with both. At the same time, the business (by which we mean all non-technical elements of an organisation) is playing an increasing part in defining technical solutions. In fact, in a number of areas there is little division between the two: take, for example, Customer Relationship Management (CRM) in which business capabilities are largely dictated by the underlying technology services.
One of the keys to resolving the increasingly disparate portfolio of applications has been the provision of integration software, a family which includes middleware, messaging and, more recently, Enterprise Application Integration (EAI) packages. This latter group does what it says on the box - it provides a means of communication between enterprise applications such as CRM and ERP packages. It is widely admitted that EAI companies have their work cut out, largely because the scale of the problem is far greater than can be resolved by simple means. Analyst firms report that EAI is in its infancy - it may be able to deal with standard integration paths between common applications but can flounder when faced with more difficult environments.
Rather than attempt to be all things to all applications, one vendor is adopting a different tack. NEON Systems, responsible for the software which has enabled Thomson Holidays to power its Just web travel service, is to extend its iWave range of EAI software to support business processes. This merits a little explanation.
iWave Integrator is a many-to-many EAI product. In other words, it can be used to enable communications between multiple enterprise applications. The product currently supports the gamut of CRM and help desk products, so if, for example, a customer service call was being dealt with using Peregrine, a request could be passed to Siebel to provide the customer’s sales history. iWave does not use a hub approach; rather, a management console permits the registration of different packages and platforms.
iWave Integrator works by allowing the registration of a number of objects that are supported by a packaged application (such as ‘customer’, ‘order’ and so on) and the definition of a number of verbs which can be associated with each object. For example, ‘customer’ may support ‘create’ and ‘delete’ as well as more complex verbs such as ‘make purchase’ or ‘change address’. Thus far, however, each application is acting as a separate entity - either of the packages, or none, can act as “master”. All that is provided is a means of communication between the two.
Now, get ready for the exciting bit. What if, rather than using these objects and verbs as a common language for packages to communicate, they were used to build business processes in their own right? This is exactly what NEON Systems is doing. A new facility, to be known as iWave Business Flow, is on the launch pad: this enables the graphical construction of workflows that can then be executed using the objects and verbs of the packages available. In one fell swoop, the EAI platform has become a tool for combining package functionality to define and automate business processes. This is powerful stuff.
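Since the object/verb model can sound abstract, here is a toy sketch (entirely ours – the package names, objects and verbs are invented) of how a registry of that kind lets a small business flow span two packages without either one acting as master.

    # A toy registry in the spirit of the object/verb model described above.
    class PackageAdapter:
        def __init__(self, name):
            self.name = name
            self.handlers = {}        # (object, verb) -> callable

        def register(self, obj, verb, handler):
            self.handlers[(obj, verb)] = handler

        def invoke(self, obj, verb, **kwargs):
            return self.handlers[(obj, verb)](**kwargs)

    helpdesk = PackageAdapter("help desk package")
    crm = PackageAdapter("CRM package")

    helpdesk.register("ticket", "create",
                      lambda customer: f"ticket opened for {customer}")
    crm.register("customer", "sales history",
                 lambda customer: f"sales history for {customer}: 3 orders")

    # A miniature business flow: open a ticket, then fetch the sales history,
    # using nothing but the shared vocabulary of objects and verbs.
    def handle_service_call(customer):
        return [
            helpdesk.invoke("ticket", "create", customer=customer),
            crm.invoke("customer", "sales history", customer=customer),
        ]

    print(handle_service_call("Acme Ltd"))

The workflow itself knows nothing about Peregrine or Siebel APIs – only about objects and verbs – which is exactly the separation that makes a graphical flow designer feasible.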
There may still be some weaknesses in NEON Systems’ approach and products. iWave Business Flow has not yet been released, and its first version is likely to have its limitations, as first versions do. Also, iWave Integrator currently lacks connectors for the wide range of packages a fully-fledged EAI environment should be able to support (such as SAP, PeopleSoft, Broadvision and the like - although we are assured that these will be available within the next few months). Nonetheless, iWave is an innovative facility that bridges the gap between package integration and application development. It uses ‘objects’ and ‘verbs’ as its terminology, but it could so easily have used ‘components’ and ‘services’. Furthermore, the iWave portfolio includes an Interface Development Kit which permits the creation of bespoke interfaces to legacy or hand-crafted applications.
The different approaches taken by IT software may well be different ways up the same mountain, but what they boil down to is delivery of software functionality that automates business processes. Coming from an EAI position, NEON Systems seems to have grasped this fundamental point and looks prepared to run with it. We can only hope that the iWave products, when they come fully integrated with a workflow engine, live up to their aspirations.
(First published 7 June 2000)
06-08 – Encryption: Should the RIP bill rest in peace?
Encryption: Should the RIP bill rest in peace?
The debate over the Regulation of Investigatory Powers (RIP) bill rages on, according to Silicon.com, but at the end of last week a lone voice appeared on the scene to welcome its measures. At the heart of the bill is the issue over encryption keys, or access to them: as it mentions on Silicon.com, individuals could be jailed if they cannot produce an encryption key for data sent over the Net. This measure does appear a bit harsh, considering that flushing non-electronic evidence down the toilet as the Police hammer down the door is not yet treated as a jailable offence.
The lone voice in question was Frank Coyle, IT director at John Menzies. To quote: “We cannot underestimate the threat to businesses from organised crime using the Internet. I think we have to try it. At the moment we have nothing, and that puts the initiative in the hands of organised crime. If the government did nothing it would be accused of being inept.”
So – where does the truth lie? So far an impressive array of organisations have lined up against the bill. Specifically, the British Chamber of Commerce, the Data Protection Commission and the Institute of Directors, not to mention numerous civil rights groups and seventy percent of respondents to a recent Silicon.com poll. Facing these massed ranks are the UK government (specifically, the Home Office) and Mr Coyle (not to mention the remaining 30% of Silicon.com pollsters).
Organisations against the bill are pretty clear in expressing their worries. Business organisations fear the damage that the encryption key measure might do to UK business, particularly in the light of Britain’s attempts to become an eCommerce hub for Europe, if not the world. Other organisations are expressing concerns about the disregard for the basic human right of innocence until guilt can be proved. Possession of an encryption key is not a criminal act, it is argued. The third argument is that the measures, draconian as they appear, will have little or no effect on cybercrime.
Meanwhile, the government’s fears about patrolling an encrypted Internet also appear to be well-founded. What with the European Union relaxing export restrictions on encryption technology, there is a real danger (in the eyes of the government) that existing surveillance methods will become inadequate or worse. In Japan, for example, 1024-bit keys are now de rigueur: even the current generation of supercomputers would take months to crack the simplest of messages encoded in this way.
The issue of encryption is fraught with hazards. It is undoubtedly true that strong encryption would hamper attempts to track down criminals that use the Internet to communicate. What is also true, however, is that even today, technology is providing workarounds for such criminals, such as the (legal) practice of steganography which involves hiding encrypted communications in other files such as video clips.
Whilst it may be true that new laws are required to deal with new types (and means) of crime, we must not throw the legal baby out with the bathwater. The organisations that are crying out for this bill to be withdrawn (or at least amended) are not just local pressure groups, but national organisations which represent our industries and our rights as individuals. It may well be true that strong encryption prevents surveillance, even to the extent that a tried-and-tested law enforcement technique becomes relegated to history. True or not, this is no time for the government to panic and implement a law that fails to achieve its objectives and causes a great deal of damage in the process.
(First published 8 June 2000)
06-14 – Cheer up, Lou – $2.1 Trillion should ease your pain
Cheer up, Lou – $2.1 Trillion should ease your pain
Lou Gerstner is a worried man. According to a leaked internal memo posted on The Register, he is fretting about many companies’ apparent seduction “by the lure of the magic market-cap wand.” He is worried that organisations are seeing the Web as key to the electronic city, as a license to print money. His fear is that organisations are not looking beyond eBusiness to other, equally fundamental changes in the global market economy. Oh dear, oh dear. Well, he may choose to seek some solace in a report just published by WITSA (That’s the World Information Technology and Services Alliance), an international consortium of 40 IT trade organisations. Among the findings of the report was the fact that global IT spending topped $2.1 trillion in 1999. Now that should give even the dour Mr Gerstner something to grin about.
It may be that Lou has a point. There is plenty of evidence that companies are e-Nabling e-Verything with the hope of huge returns, or at least the fear of losing their existing places in the market. Organisations like our own are quick to point out that businesses without an eBusiness strategy might as well pack up and go home. Are we wrong to suggest that the Web is changing business? No, not according to Gerstner. “Don’t get me wrong,” says Lou. “I’m convinced that eBusiness really is changing the entire basis of the global economy.” In other words, don’t ignore the threat and the promise of eBusiness. But don’t let it cloud the other, ongoing changes in stock markets around the world. And (perhaps most importantly) don’t think that any old eBusiness strategy will yield huge returns. Call us old fashioned, but we’re hoping that the Boo.com debacle burst that particular bubble.
The New Economy might be dangerous ground for businesses in general, but according to WITSA it is being very kind indeed to the technology sector which, let’s face it, stands to gain whether the businesses it supplies succeed or fail. In the nicest possible way, it’s a bit like arms dealers who sell to both sides and clear off quick before the bombs drop. This is certainly the picture painted by the WITSA report, a summary of which is available \link{http://www.witsa.org,here}. Furthermore, the report paints a “bullish” picture of the future, with a prediction of $3 trillion annual IT spend by 2004. The drivers it indicates are:
- continued expansion of the Internet, fuelled by wireless, broadband and the device explosion
- privatisation of government-owned businesses
- adoption of eBusiness facilities such as vertical electronic marketplaces
- harmonisation of international law concerning the electronic economy
- emerging markets, such as China, India and Brazil.
All in all, technology companies have a great deal to smile about, particularly companies such as IBM which are ideally placed to take a slice of the pie just by continuing on their present course. We are sure your feelings were heartfelt, Lou, but we don’t imagine you will be losing sleep for too long.
(First published 14 June 2000)
06-15 – Spam ban, thank you Ma’am
Spam ban, thank you Ma’am
It all sounds so simple. Late last week the US House of Representatives’ Commerce Committee agreed to support a bill that could make the problem of unsolicited email, or spam, a thing of the past. According to \link{http://www.theregister.co.uk,The Register}, central to the bill is the requirement that spammers include a valid return address in their emails – this alone could be sufficient to deter the majority of their number.
Spam is as big a problem as ever. It was reported on News.com last week that a recent study found ISPs and free email services were capable of blocking up to 73% of unsolicited email. The downside is that the 27% figure refers to an ever-increasing pile of junk email, with spammers becoming increasingly determined to get through the Net. On a positive note, the anti-spam measures were not found to be preventing a single kosher email from being delivered.
So – why should a simple measure like the return address make such a difference? There are several reasons. First of all, as mentioned, the measure is likely to put potential spammers off. It is one thing to spam anonymously, but to move out into the open is an entirely different matter. Second, with the existence of a source of the email, it will be easier for the recipient to take steps to prevent further emails from the same place. The address can be blocked, or reported to the ISP or to an anti-spam organisation, or even flamed with a few hundred responses. Third, by making anonymous unsolicited email illegal, the case can be brought against individuals more easily than under present laws.
Of course, there will be nothing to prevent spammers from continuing to use insecure servers as through-routes for the thousands of messages that they send. The method is simple: dial up to an ISP, set mail.acme.com (where Acme have an incomplete or incorrect security configuration) as the mail gateway, then fire a thousand or so emails at the Acme server which then forwards them to the destination addresses. After a while, disconnect from the ISP, reconnect to a different ISP (with a different IP address) and do the same thing. Tracing the initiator of such emails is virtually impossible at present. It can be hoped that the number of servers that leave themselves open to this kind of attack is dwindling as sites become more security savvy, either before or after finding themselves victims.
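For administrators who would rather not find themselves on the wrong end of that trick, a rough self-check is straightforward: a properly configured mail server should refuse to relay a message between two outside addresses. The sketch below is a minimal illustration of that test – the host name and addresses are placeholders, and a real audit would check rather more than one recipient. Only test servers you are responsible for, of course.

    import smtplib

    # Ask the server to relay mail between two outside parties; a well-behaved
    # server will refuse. Host and addresses are placeholders for your own kit.
    def looks_like_open_relay(host):
        try:
            with smtplib.SMTP(host, 25, timeout=10) as server:
                server.sendmail("outsider@example.org",
                                ["someone-else@example.net"],
                                "Subject: relay test\r\n\r\nPlease ignore.")
            return True                    # relay accepted: bad news
        except smtplib.SMTPException:
            return False                   # relaying refused: behaving itself
        except OSError:
            return False                   # could not connect at all

    print(looks_like_open_relay("mail.example.com"))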
Unsolicited email is a product of an insecure global network, weakly legislated and poorly policed. Things will probably get worse before they get better, for example as the mass rollout of ADSL provides a new pool of insecure computers which can be used as unsuspecting hosts for spam forwarding. It will take a combination of better legislation and tighter computer configurations to make spam a thing of the past.
(First published 15 June 2000)
06-15 – Sun Storage: the future is bright, the future’s Purple
Sun Storage: the future is bright, the future’s Purple
One of the big surprises about Sun’s recent storage announcement was that it was now making more money out of sales of storage devices than of servers. Now that’s food for thought. Clearly, this is the factor that has spurred it on to go head to head with companies more traditionally associated with the pure storage market, notably Compaq, HDS and EMC. Sun has announced a new storage strategy and a new product line, the flagship of which is the StorEdge T3 – code name Purple – which fits 300 GB in a box the size of a desktop computer.
Sun’s StorEdge range is designed to plug into AIX and Windows NT networks as well as Sun’s own Solaris environments. Linux support is also planned - the goal is to provide open, scalable storage to work in heterogeneous environments. Having said this, fifteen years ago Sun was one of the first companies to apply the “Open Systems” label to its Unix-based workstation and server products. In this context, “open” meant, “can communicate with other Unix systems.” It is worth delving a little into what Sun means by open storage – look no further than Jiro.
Jiro (previously known as StoreX) has been on the cards for a long time – a good couple of years, if memory serves. Essentially Jiro provides a set of interfaces to enable heterogeneous storage systems to be managed, configured and allocated as a single, virtual hierarchy. The StorEdge range of products will be Jiro-enabled and, as Sun’s \link{http://www.sun.com/storage/index.html,Storage web site} proudly boasts, “many technology industry leaders support the Jiro effort.” This includes companies like Veritas and Legato, who together have the storage software market pretty much sewn up.
Trouble is, Jiro isn’t the only storage management standardisation effort in town. The Storage Networking Industry Association (which can be found \link{http://www.snia.org,here}) is looking to provide exactly what Jiro supports. Similarly, the Distributed Management Task Force is also working on storage management standards. The good news is that the two organisations are working together. So what of Jiro? Companies like Veritas freely admit that they are standards-agnostic and they are supporting the work of SNIA and Jiro, not to mention efforts of other organisations such as the Fibre Alliance. Companies such as EMC and Sterling (now part of CA) have also proudly presented their storage management facilities, all the time actively supporting the standards bodies.
All in all, storage management standardisation is not a pretty picture. No doubt, as with Java, Sun’s idea of an open standard is one that Sun has the casting vote over. Jiro may have its technical merits and it may have the buy in of other organisations, but it appears that the possibility of a truly open storage environment is a long way off yet.
(First published 15 June 2000)
06-15 – Telecomms unbundling – watching the watchers
Telecomms unbundling – watching the watchers
The UK’s telecomms watchdog, OFTEL, is between a rock and a hard place. Back in March, it confirmed that the 1st of July 2001 was to be the “absolute deadline” for the unbundling of the local loop. At the same time, however, PM Tony Blair was in Lisbon signing up to a pan-European agreement that fixed the deadline for the end of 2000. Now it is looking like the EC will hold OFTEL in breach of directives that are hastily being drawn up to carry forward the Lisbon agreement. A storm is brewing, and it looks unlikely to be the last for poor OFTEL.
This will not be the first time OFTEL has fallen foul of European directives, but the third. The watchdog has already been held to account for its less-than-agile approach to competition in the mobile market, not to mention an ongoing issue surrounding carrier preselection, that is the ability to use another telephone company without entering additional digits at the dial tone. As if it were not enough to be accused of sluggishness by the EC, when it acts, it is accused of pandering to the incumbent telco BT and penalising UK businesses in the process. At the end of last week, OFTEL’s attempts to kick-start the stalemate over the xDSL rollout were met with incredulity by some members of the Spectrum Management forum, set up by OFTEL itself to keep momentum in the unbundling process. Already, OFTEL is expressing fears about missing the unbundling deadline and this is without taking into account the fact that, according to the EC, it has already slipped the date by six months.
Not everything that OFTEL has done has been seen as such a bad thing, however. The decision made at the end of last month, that BT SurfTime was uncompetitive as BT did not offer a wholesale version of the product, was hailed as a “victory for consumers,” according to \link{http://www.zdnet.co.uk,ZDNet}. ISPs can now re-sell BT’s SurfTime in whichever way they choose: this will effectively open up the market to a whole new wave of unmetered access deals. This has both up- and downsides – on the upside, prices will come down still further than the deals currently on offer; however, the complexity of the market will no doubt increase, while the quality of service may drop, in the short term at least.
Whatever the final date for unbundling, be it Christmas or the middle of next year (and we rather suspect the former, after all Tony Blair is never wrong), the pressure gauge will rise and OFTEL will have no choice but to accept its share.
(First published 15 June 2000)
06-22 – IBM’s voice recognition grows up
IBM’s voice recognition grows up
We knew that Ozzy Osbourne’s talents were diverse, but we didn’t expect to see him popping up as general manager of IBM Voice Systems. Okay, it is a different chap but somebody who associates himself with the hard rock icon must be either barking mad or know exactly what he is doing. Judging by IBM’s latest announcements in voice recognition, we would suggest the latter.
IBM has announced a complete revamp to its voice strategy, which reflects and confirms the direction taken by the IT market as a whole. Essentially, the technology landscape is moving away from the fat client architecture using general purpose PCs, towards a structure that concentrates hard processing on the server and performs certain specific functions on embedded chips in client devices. These two areas – thick, general-purpose servers and thin, function-specific clients – provide the model upon which IBM is to base its strategy for voice recognition in the future.
It has to be said that something had to give. As an advocate of the potential for voice recognition for many years, I felt obliged to run through the learning mode of IBM’s latest release of ViaVoice. After over an hour of reading sections of (most appropriately, I must say) Alice in Wonderland to my computer, I then attempted to dictate an article. The results ranged from the hilarious to the baroque – progress was slow, not helped by my own, frankly puerile giggling at some of the phrases that were generated. Great dream, but the reality sadly falls short. More success has been seen by companies such as SpeechWorks, which concentrate on specific, server-side areas such as telephone share trading or flight checking, but even these have been subjected to the mockery of the general public. Even assuming that the ability of computer software to interpret the spoken word does become a reality, the fact is that we do not speak as we write – dictation is difficult enough to beg the question – why bother? Which is why IBM’s strategy makes a lot of sense. Let’s look at why.
IBM is focusing on two areas. The first is the server, in which (like SpeechWorks), the aim is to integrate speech recognition into enterprise applications. According to \link{http://www.news.com,News.com}, planned for autumn release is WebSphere Voice Server with ViaVoice Technology, a suite of tools for helping call centres better use the Web. Also to come is a product that will enable Siebel users to integrate voice calls and Web-based queries. In addition, IBM announced a partnership with Internet speech specialist General Magic that is to target voice for eCommerce applications. The second area of focus is the embedded device: IBM is to release embedded ViaVoice, a Java-based software development kit which is targeted at PDAs and mobile phones as well as in-car devices and the like. According to W. S. (Ozzy) Osbourne, IBM is positioning its technologies as a framework for others to use rather than trying to go direct to the end-user.
Why is IBM’s strategy sound? To believe this, one first has to accept that voice recognition does have a future, albeit more focused than the generalised “human-talks-to-computer” model. Given this, on the enterprise side IBM is concentrating on specific application areas such as call centres - no doubt with limited vocabularies, though enterprise servers will have the processing power required to support more general recognition. On the client side, IBM is enabling voice features to be built into devices, rather than implementing such devices itself – it is likely that product developers (such as Motorola, likely to bring out an in-car facility) are better placed to identify and develop workable applications for voice.
IBM is not dropping ViaVoice, but it is recognising that the shrink-wrapped market will never be a major target for voice recognition. Rather, it is focusing on areas that make good business sense and also give these still-young technologies a better chance to shine. Even then, it will be a while before voice recognition manages a reasonable interpretation of the lyrics of Mr Osbourne’s namesake. Not even IBM can do everything.
(First published 22 June 2000)
06-22 – Microsoft registers www.NewStrategy.NET
Microsoft registers www.NewStrategy.NET
The IT industry is a strange one sometimes. On Thursday last week, Microsoft pulled the covers off its gleaming new strategy, an announcement that it has been building towards for many months. Products incorporating the fruits of this strategy will become available next year, which begs the question – what was the previous series of big strategy announcements all about?
Let’s work back from the answer. Last week Bill Gates announced .NET – an Internet-centric vision that sees Microsoft’s current product lines as being the building blocks for a seamless, global computing environment. As reported on The Register, we get Office.NET, Windows.NET and MSN.NET for starters (and you can work out the rest for yourselves). Products will be .NET-enabled by bolting on XML-based interfaces. The whole thing begs a few questions.
First off, there are issues of (if I may) a techno-architectural nature. It is one, relatively straightforward thing to draw a blueprint diagram of the “ideal” architecture for Web-enabled applications. There aren’t that many ways to skin that particular cat, at least not in theory. In practice however, things get a little more complicated for two reasons. Yes there is the issue of legacy – “this is not a green field site,” say the consultants, no doubt earning plenty of money in the process. There is also an issue of global complexity. Microsoft has drawn up a reasonably comprehensive framework of its own but it seems to be dependent on a couple of factors – that the whole world adopts it, and starts from scratch to do so. Either factor sounds just a trifle infeasible. Let me stick my neck out here: the way of the future will be one that supports the heterogeneous mass of complexity that already exists (and there is more on the way, what with Mobile and all). Single-company solutions, however elegant and widely adopted they may be, cannot succeed.
The second issue is one of strategy versus product. At the beginning of last year, Microsoft launched BizTalk, an XML-based framework to support business communications. XML is becoming a bit of an overachiever, however – business-to-business traffic is not enough for the megalomaniac language, which (as SOAP, in partnership with Microsoft and now IBM and Sun – see the link?) is being touted as the format for communication between application components as well. Trouble is, Microsoft may well be quick to see the exponential potential of XML, and is changing its strategy on a monthly basis to fit. However, the products are forever trying to catch up with the vision. Not long ago, delays were announced to BizTalk Server to include support for business processes, another conquest of the XML strategy. BizTalk Server may be out after the summer but by then it may well be eclipsed as companies hold out for BizTalk.NET, unlikely to be available for at least six months.
There is one more question that must be asked – is the .NET strategy anti-competitive? Of course it jolly well is. What Microsoft has done is pull together its own product lines as building blocks to act as a foundation for the future of the Web. The company is between a rock and a hard place: clearly, the future lies in the integration of today’s applications and operating systems, however Microsoft does not wish to concede that customers should have a choice of different platforms. Keen that its own portfolio should be used in preference to others, the picture the company paints is exclusively Microsoft. This is a flawed perspective, which will ultimately cause the Seattle giant more problems than it solves. It looks like the future holds plenty more opportunities for Microsoft to change its strategy.
(First published 22 June 2000)
06-28 – European tech investment rides the storm
European tech investment rides the storm
There has been much in the press in recent months about the dot-com bubble. It either burst, or is looking decidedly leaky. In either case, it doesn’t seem to have affected the continuing rise of interest, and injections of capital, into the European technology sector.
According to a report in Tornado Insider, a pan-European survey from PricewaterhouseCoopers saw venture funding in technology companies up 70%, to 6.8 billion euros. Overall, about five thousand individual investments were made with an average value of 1.4 million euros. Unsurprisingly, the largest share of investments went to software companies, with business to business eCommerce and Wireless companies both proving popular. According to the PwC research, the trend is continuing upwards in 2000. So - what was all that about a shake-out in technology stocks?
According to Marco Rochat, the change may not show up in the trend charts, but it is still profound. “We don’t think the market crash has changed much, just where the money is going,” said Rochat. Reading between the lines, the technology bubble may well not have burst, but one bubble certainly has - that of throwing money at all possible ventures, in the hope of making some fast returns.
We have recently seen what we hope are the last vestiges of this practice, which is as doomed as it is foolish. We have heard reports of potential investors saying, “put money into the mobile space. Don’t ask questions - just invest!” Scary stuff, as it is based on the fundamentally flawed assumption that new companies in this arena are more likely to succeed than to fail. The message that caused the tech stock crash earlier this year was that there was no guarantee of success, not even for high-profile players. Having learned the lessons, venture capitalists are now bringing discernment and foresight to bear. Companies are springing up which enable the quality of ideas, the strength of products and even the capabilities and skills of the staff to be tested before any investment is made - it can only be considered surprising that VCs were not making these checks in the past.
The technology roller-coaster ride is thrilling, and there are great rewards at stake. However, this does not prevent investors from looking before they spend. It is telling that, even given the additional care that is being taken prior to funding decisions, the same number (or greater) of investments are being made. This can only mean one of two things - either a careful eye is steering money into the good companies rather than the less-good, or the start-ups themselves are coming to the table with more than a good idea, enthusiasm and flair. Either brings an increased maturity to the game, which will add to the stability of the market in general. Long may the trend continue.
(First published 28 June 2000)
06-30 – Ain’t the PC dead yet?
Ain’t the PC dead yet?
There are only a few drink cartons and strips of gaffer tape left to indicate the presence of PC Expo, which came to a close at the end of last week. One of the debates it spawned was whether there was any future in the Personal Computer: the answer was, of course, as diverse as the IT market itself. Essentially there are two camps, the gadget freaks and the PC diehards: who would ever have thought that PC companies would be the ones looking staid, next to the new generation of technology companies?
A major strand of activity at PC Expo was caused by companies such as Palm, Handspring and Sony, all touting their wares as the future of technology. Certainly there is a place for the device – there may be more PCs than there are handhelds, but the latter is about to merge with the mobile phone, which has already overtaken the PC in worldwide sales. Given the rate of churn in the mobile phone market, in two years’ time there will be few phone users who do not have PDA application functionality on their phone.
At the same time, PC manufacturers were quick to explain how the market for PCs is going full steam ahead. Based on current statistics the prediction is for 1 billion PCs to have been sold by 2005. According to a study released last month, one of the drivers for PC sales is the PDA – users require a central repository for the data that is spread across multiple devices. Enter the PC, in its role as “the mainframe in the living room.” Trouble is, there are a couple of factors that the report appears to have missed.
The first of these is broadband technologies. Ralph Martino, VP of strategy and marketing for IBM’s personal systems group, was quoted on \link{http://www.news.com,News.com} as saying that broadband communications, in both the wired (xDSL) and wireless (UMTS) forms, would provide a backbone making the PC even more indispensable. Trouble is, broadband is also the enabler of a new technology model – that of the Application Service Provider or ASP.
Individuals and businesses with broadband Internet access will be turning more and more to services available over the Web. The reason for this is simple cost-effectiveness: it will be cheaper to do so than to buy and install the applications. Broadband games users, for example, will not have to purchase a copy of a multi-user game before going head to head against their pals; rather, they will put in a credit card number and start to play. The end-node device need only be capable of receiving and displaying the graphics.
Maybe the debate is centring on the wrong topic. Many PCs will be sold in the future, but they will be very different from the ones we use today. The self-contained units based on the Easy PC initiative, produced by companies such as Dell and HP, are an indication of what is to come. Further indication came at PC Expo, with IBM’s impressive demonstration of a watch-sized PC. The PC architecture may never die, and why should it? It may not be optimal, but it is a perfectly reasonable basis for computing. What is already on the slab is the need for expensive, complex, noisy, error-prone combinations of hardware and software, either in the home or in the office. This model of computing has never worked, and the sooner we can get it in the ground, the better.
(First published 30 June 2000)
06-30 – Guess how Clinton signed the Digital Signature bill?
Guess how Clinton signed the Digital Signature bill?
A major piece in the security jigsaw dropped into place at the end of last week, as US President Bill Clinton travelled to Philadelphia to sign a bill giving electronic signatures the same legal status as handwritten ones. Quite why he had to travel all that way in this wireless world is quite beyond us, but all the same the Signature signature (sic) represents a very important step indeed.
Anyone who has attended security conferences over the past few years will have realised that there is very little left for the technologists to do. Most of the major problems – authentication, non-repudiation, encryption and the like – were solved years ago and the biggest problem facing security vendors is now how to ensure the adoption of such technologies. One of the main blockers to this adoption process has been the legal basis of the digital signature itself.
Digital signatures employ a public key mechanism. Two keys are used, one private which is used to encrypt the message, and one public which can decrypt it. The rather clever effect of this pairing is that, if a message can be decrypted using a person’s public key, then it must have come from the person as nobody else could have encrypted it. Hence we have the concept of non-repudiation – it becomes possible to guarantee the source of a message.
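In modern libraries the mechanics are implemented as a dedicated sign/verify operation rather than literal encryption of the message, but the effect is the one described above. Here is a minimal sketch using the third-party Python cryptography package (our illustration, not any particular product’s code):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    # The signer holds the private key; anyone may hold the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"I agree to the terms of this contract."
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verification fails if either the message or the signature is altered,
    # which is what gives the recipient non-repudiation.
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        print("signature checks out: only the key holder could have produced it")
    except InvalidSignature:
        print("signature does not match")

Hand the signed message and the public key to a third party and the commitment can be demonstrated after the fact, which is precisely the property the new law now recognises.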
Security facilities are built into email systems, transactional systems and vertical applications, but they have not been getting the use they deserve. For example, the only emails that we have seen using digital signatures have been those coming from security vendor companies. This lack of uptake has been due in part to the all-or-nothing principle – if nobody is doing it, then nobody does it. It does, however, open up a weak link in the chain: it is possible to hold up an email as evidence of a commitment, but it does not provide a legal basis of its own. Also, items such as contracts still need to be signed and posted, or faxed, before they are acceptable: both options are slower and more onerous (and costly) than a purely electronic means. That is, until now.
Once again of course, it will be necessary for other countries than the US to adopt the measure before it will really kick in. This will happen – it is only a matter of time. When it becomes possible for parties to agree contracts and other transactions electronically, the final nail will be hammered into the coffin of paper-based communications.
(First published 30 June 2000)
06-30 – Sun’s second thoughts about Open Source Solaris
Sun’s second thoughts about Open Source Solaris
Sun is developing a bit of a reputation for raising expectations to the highest levels then sheepishly admitting it cannot carry them through. It happened with the standardisation of Java, when Sun lodged the Java specification with standards body ECMA before withdrawing its application a few short months later. It looks likely to happen with Jini, the much-touted plug-and-play standard for devices which has thus far failed to deliver a product. And last week, it was the turn of Sun’s open source initiative, on the rocks only five months after it was announced.
What’s the problem? In this case, it seems to concern the scale of the task. There are nearly ten million lines of code in Solaris, and Sun did not feel it could just plonk it onto the mass market without “making it user friendly” first. It is understandable that massaging ten million lines of code would take a while, but a couple of questions remain.
The first is – what is so unfriendly about the code? Fair enough if this equates to adding a few copyright notices and generating an HTML code browser (à la Java). Less reasonable is if the code needs restructuring or additional comments to make it readable, or worse – are there parts of the code that are currently too convoluted to be seen in public? This brings us to the second question – surely a plan was put in place in January, so what has changed to delay the plan? It may be that, on investigation, Sun realised that the problems with the Solaris code were going to take longer to solve than it expected. Now, it may be that this is an attempt to overstate the issue – us commentators are always looking for a wound to rub salt in. But the fact remains that Sun is having to lower expectations about when and how it releases its code to the open community. Probably the biggest issue is this: Sun has realised that, given the profile of Solaris, there will be a horde of points-scorers ready to identify weaknesses, bugs, design faults and security holes in the code and Sun must do everything in its power to minimise any risks.
Sun’s code giveaway will come, but more slowly than was previously announced. Sun is likely to hit its Q3 target, but only for certain sections of the code. According to Anil Gadre, Sun’s vice president and general manager for Solaris software, “the other thing we are finding out is that maybe people actually wanted certain parts and not the whole thing.” Sun is therefore considering releasing the code in a piecemeal fashion. Gadre’s remarks sound a little like Sun is hedging its bets (use of the word “maybe” is the giveaway), and will be a good fig leaf if the time comes to stagger deliveries of the code. Sun will release the code sooner or later, but more with a whimper than a bang.
(First published 30 June 2000)
July 2000
07-06 – EC to Microsoft: We just don’t trust you
EC to Microsoft: We just don’t trust you
Just two days after denying the Wall Street Journal news report claiming that the European Commission would block Microsoft’s investment in a UK cable company, both parties confirmed on Friday that this was indeed the case. According to The Nando Times, Microsoft no longer plans to up its stake in Telewest Communications plc, a deal which would have seen the giant investing $3bn to gain control of the company.
Microsoft has never been cagey about its real intentions when investing in cable and other communications companies. The plan has always been to develop new markets for the sale of Microsoft software and products: the company knows full well that the cash cows of the PC market will dry up one day. However we are talking about one of the biggest corporations in the world here, and it has got a bit of a reputation for being over-bullish. Hence the anti-trust case which is still running its course, and hence also the stance of the EC, which is determined to do what it can to ensure Microsoft does not engage in anti-competitive practices.
It is easy to take a pop at Microsoft but at the same time it is worth considering the other side of the argument. In blocking the deal the EC has effectively prevented a US firm from making a $3bn investment in Europe. This is an indication of the kind of muscle that the EC now holds – without the details of the case we can only hope that the decision was made wisely, and quite clearly it is not something that can happen too often before it starts to damage the economies of the member states. Equally clearly, the EC is so fearful of the monopolistic might of Microsoft that it is prepared to step in and act.
The message that comes through loud and clear is that it is not over yet for the software giant. We have said in the past how the company’s every move would be subject to the most intense scrutiny – having been found guilty of abusing its position (albeit subject to appeal), most onlookers will watch each deal with no little cynicism. The failure of the Telewest deal is indicative that the EC has teeth, and that Microsoft’s corporate future can be compared to walking on eggshells. Bill Gates has already expressed his worry that the DOJ ruling will limit his company’s ability to innovate. Even before the antitrust case reaches a conclusion, it looks like the chickens are coming home to roost. The road ahead could be rocky.
(First published 6 July 2000)
07-06 – Oracle’s NC relaunch – It’s not about winning, it’s about taking part
Oracle’s NC relaunch – It’s not about winning, it’s about taking part
Guess what? An Oracle spin-off company is to launch a new type of computer, called the Network Computer. Hang on – haven’t we heard this before? Let’s just check the facts before we go on.
Oracle’s first attempt at the NC came in 1996, hot on the heels of the negative press the PC was receiving. Total Cost of Ownership was the name of the game – the PC was turning out more expensive than its list price had implied. It is academic whether Larry Ellison was driven by a brave new world vision or by a desire to kick Bill Gates while he was down: in either case he failed. Microsoft’s Zero Administration Initiative may have been a fig leaf, but it served to allay the overhyped fears of the time. In any case, the momentum behind the Windows-based PC has proved unstoppable.
We knew in November last year that Oracle would be relaunching the NC. Surely the company can’t still be chasing the rainbow of preventing its greatest rival from taking over the world? It is fair to say that the landscape has changed considerably over the past four years – the Internet has arrived and shaken up IT vendors, end-user businesses and consumers alike. You’ve read the stories before, involving great riches but – oh look – some failures too. You know that the future lies in devices and appliances, and that anything goes in the thin client world of the browser-based Internet. You know that services are moving more and more online, what with eCommerce hosting, electronic marketplaces and the stealthy arrival of the Application Service Provider or ASP. Against this background, Oracle’s NC has become an Internet access device, a Net appliance promising a low-cost point of entry to the Web. The question is – will it succeed?
There is no shortage of companies bringing consumer Net appliances to the market. Companies like Netpliance, which only last week increased the price of its I-opener device from $99 to $399, not to mention Intel and eMachines, are all sure that a market exists for “pure” Net devices, that is, computers which can do little but access the Web. One thing is for sure: this market can only exist if consumers have high-speed Internet access – this may be true for cable or ADSL connected users, but these are not yet in the majority even in the United States. In the UK, ADSL trials may have started but the general roll-out is not expected to complete before June next year. In the meantime, companies such as The Free Internet provide 0800 ISDN access for a one-off yearly payment. The second issue is whether the range of services currently offered on the Internet is an adequate replacement for software packages on the PC. Online games still require users to install software; the Microsoft standard word processor is not yet available to consumers over the Web.
For organisations with a budget, such as corporates and government/educational establishments, the NC model starts to make more sense either for companies running a server-based computing model or for those trialling ASP services. Even then, a Citrix-style model is currently needed to ensure that all required applications can be provided over the wire: this model will not suit all comers.
There can be no doubt about the validity of the thin client model. What is less certain is the maturity of the server side of the equation. Even once thin client becomes the dominant model, there is little reason why Oracle should win the lion’s share of the device market as one of the key factors is the interchangeability of devices. Oracle may well succeed in increasing the momentum behind Net appliances, but it is unlikely to claim the market for itself.
(First published 6 July 2000)
07-06 – Web service: still no cigar
Web service: still no cigar
Oh dear, oh dear. Some of the largest corporations in the world have had a year to make improvements, but cannot yet claim to deliver the level of service that is expected of them. At the end of last week, \link{http://www.theregister.co.uk,The Register} reported a Rainer UK study into web responsiveness of the UK FTSE 100 and the US Fortune 100. By “Web responsiveness” read “how long it takes for a company to respond to an email sent via its Web site.” The conclusion of the study was that “40 per cent of the leading UK and US public companies are failing to take the Web seriously as a communications channel.” The results included the following:
· Of the 200 companies, 148 provided a reasonable Web-based contact mechanism. Of these, 113 responded to the contact email within 30 days
· The most responsive companies sent a reply within minutes, with pole position taken by the UK’s National Power
· The three companies that took over 20 days to respond were all technology providers – Colt Telecom won the wooden spoon, followed closely by SBC Communications and Dell Computer.
Stephen Waddington, managing director of Rainer in London, was reported to be appalled by the findings of the study. “Two in every five of the Fortune 100 and FTSE 100 Web sites are little more than corporate wallpaper,” he claimed.
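As a quick sanity check of the “two in every five” claim, here is a minimal back-of-the-envelope sketch in Python, using only the counts quoted above:

# Rough reconstruction of the headline figure from the survey counts above:
# 200 companies surveyed, 113 replied to the contact email within 30 days.
companies_surveyed = 200
replied_within_30_days = 113

no_effective_channel = companies_surveyed - replied_within_30_days
print(f"No reply within 30 days: {no_effective_channel} of {companies_surveyed} "
      f"({no_effective_channel / companies_surveyed:.0%})")
# Roughly 44 per cent, i.e. about two in every five companies.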
Waddington was right to be appalled. The Internet has been taking some stick in recent months, with some major casualties of eCommerce proving that clicks-and-mortar ain’t necessarily going to win over bricks-and-mortar. It is becoming abundantly clear that the success stories of the future will be those companies that can run successful businesses in both the physical and virtual domains.
It is fair to say that email is not the be-all and end-all of eCommerce. The cliché “the competition is only a click away” refers to difficulties purchasing goods and services, rather than getting a response to an electronic enquiry. However, this survey is a good indicator of the integration between clicks and bricks. A year ago, eCommerce was seen as a mechanism for reaching out to the global marketplace. Six months ago, it was driving the need for back-end application integration. Today, eCommerce is recognising what many businesses have always known - that customer service is the major differentiator. Companies achieving sub-hour response times on queries have clearly had the wherewithal to acknowledge this and to do something about it. The other companies have a stark choice: to learn the easy way, from surveys such as this, or the hard way as customers vote with their mice.
(First published 6 July 2000)
07-14 – MCI WorldCom and Sprint run into the sand
MCI WorldCom and Sprint run into the sand
All is not well for either MCI WorldCom or Sprint, as the failure of the proposed merger between the two companies has left each in danger of being taken over. While this may not be such a bad thing for Sprint, whose smaller size and mobile technology capability make it an attractive addition to the portfolio of any major carrier, it can only be seen as damaging for Bernard Ebbers’ company. Coupled with the outages at its MAE West facility that sent shock waves across the Net, last week was not one of the best ever in MCI WorldCom’s history.
Not that the deal was very likely to go ahead. As we reported \link{http://www.it-director.com/99-09-29-1.html,here} in September last year, the deal hit rocky ground as it left the starting blocks. At the time MCI WorldCom was already in trouble with the EC over its takeover of Cable and Wireless, and the proposed takeover of Sprint caused an immediate hostile reaction from the Department of Justice. The final decision by the two companies came Thursday last week, two weeks after the DOJ started the legal process of blocking the merger.
MCI WorldCom must be devastated. The deal with Sprint would have added to its portfolio capabilities that the company needs to keep its position as one of the major US and international telecommunications companies. The key to the deal was mobile: Sprint may be the number three long-distance carrier in the US, which is the reason the DOJ wanted to block the deal, but that is not what attracted MCI WorldCom, which urgently needed a slice of the burgeoning mobile market. Now the merger is off, the company must think again while steeling itself against a raft of takeover bids of its own (not to mention a share price which has dropped by over a third since the merger was first announced - down to $47 from $75). BT has been rumoured to be considering a purchase, as have NTT and Deutsche Telekom.
Where next for MCI WorldCom? If it is not bought first, it might consider buying Orange should the latter’s deal with France Telecom run into the weeds. This would at least give the company some mobile capabilities, albeit not as extensive (or as geographically attractive to the US-based company) as would have been the case with Sprint. All is not lost, but it will be a good few months before MCI WorldCom can put these events into the past.
(First published 14 July 2000)
07-14 – Microsoft for rent
Microsoft for rent
If it’s Microsoft, it’s good, right? Well in this case it just might be. On Friday the company launched its software-for-rent strategy in which software licensing will be paid for on a subscription basis rather than as a one-off fee. This announcement is expected to be the first of a series that will align the company with the principles and practice of Application Service Provision (ASP), or the delivery of applications over the wire.
The main strengths of the subscription model for applications are based on addressing the current issues of software delivery and licensing. The current approach causes multiple versions, incompatible installations and (perhaps most infuriatingly) the need to pay for irrelevant parts of bloated software bundles. Let’s be honest here – how many times have you really used that copy of Microsoft Access? Furthermore, there is the constant bugbear of having to pay for upgrades to packages in order to resolve bugs in previous versions. Wouldn’t it be great if, for a single yearly fee, we only had to buy what we wanted and all upgrades were free? Yes – if the price is right.
The only flaw so far in Microsoft’s strategy is the issue of pricing. In pilot studies, the company took the list price of a package and divided it by 24, effectively meaning that if you are likely to use a package for more than two years without upgrading then you might as well buy it outright. That sounds a bit steep to say the least – in other words, one advantage Microsoft will not be promoting over shrink-wrapped software is that of cost. Of course this does not have to be the only costing model. It should be possible to buy a package for a month, for example, or even for a minute (for example to open and print an attachment created by an obscure package). Certain packages – the obvious ones being word processors, spreadsheets, email and Web browsers – should rightly command a premium as they are the most used (but conversely, maybe should also be subject to quantity discounts). The fact is that the issue of cost has yet to be fully fleshed out, by Microsoft and everybody else. It is clear that different application types will need different models – for example it is unlikely that any corporation will be running SAP on a pay-as-you-go basis. However, the definition of these models – and how they fit together – will take time.
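To see why the divide-by-24 pilot pricing looks steep, here is a minimal break-even sketch in Python; the list price used is purely illustrative, not a Microsoft figure:

# Illustrative break-even comparison between buying a package outright and
# renting it on a subscription priced at list price divided by 24.
list_price = 400.0                 # hypothetical one-off cost of the package
monthly_fee = list_price / 24      # the pilot-study pricing model described above

for months in (12, 24, 36, 48):
    rental_total = monthly_fee * months
    cheaper = "rental" if rental_total < list_price else "outright purchase"
    print(f"{months} months: rental costs {rental_total:.2f} vs {list_price:.2f} -> {cheaper} wins")
# Beyond 24 months the subscription always costs more, unless free upgrades are factored in.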
Ultimately we see one of the greatest strengths of the ASP model to be one of granularity. Everything costs, but it should be (as in, we need it to be) possible to pay for specific software functionality on an as-needed basis rather than through purchasing applications just in case. This is true for office applications, enterprise packages or anything else. Over the next few years, hopeful vendors will attempt different ways of enticing corporate customers to invest in their own approaches. The customer, or the business case, shall decide.
(First published 14 July 2000)
07-14 – Red Sun is rising on Linux
Red Sun is rising on Linux
And the conclusion is – Linux will not conquer the desktop or the laptop, but will win on embedded devices. This isn’t idle speculation: let’s face it – when a significant number of electronics manufacturers from the world centre of such products line up behind Linux, then the rest of us should take note.
Linux isn’t necessarily doomed on the desktop. Indeed, it might well have faced a rosy future if it wasn’t for the question: “what is the point?” The world has already chosen an operating system and hardware architecture which, whatever its faults, is proving adequate for most uses. Linux will not succeed on the desktop any more than, say, Windows 2000 – neither gives a user sufficient additional value to merit the swap. There will always be advocates for desktop Linux but the mainstream has already flowed one way downhill and would take some pushing to get it down a different route.
On embedded devices, however, we can see a different story. The advantages (and disadvantages) of embedded Linux have already been covered, but advantages do not a product make. How different the world appears when companies like Sony, Fujitsu, Toshiba, Mitsubishi and so on – 23 of them in all – line up behind the operating system. This isn’t one company setting a strategy to give it a USP against its competitors, or a hopeful start-up looking for a niche. The message is clear: Linux is a perfectly adequate operating system for us to use in our devices. So much so, that we want to work together to make it even better.
Sony has been one of the loudest advocates of Linux. Already it has announced its support for the Linux-based TiVo video appliance, which can store up to 30 hours of TV programming. On ZDNet in March, it was noted that a Sony representative revealed the intention to use Linux in future generations of the Playstation. “We needed a stable operating system,” explained a tongue-in-cheek Phil Harrison of Sony Entertainment America Inc.
So – what is enticing Japanese companies down the open source route? The obvious reason is cost – apart from the research and development investment, there are no charges for licensing Linux, which brings down product costs significantly (compared to licensing, say, Microsoft or Palm products). The second reason is the chicken-and-egg argument of choosing what everyone else is using. This is what makes it clear that embedded Linux is being taken seriously – everyone is doing it. There may also be an element of Japan seizing the opportunity to leap-frog the US software industry, an area that has only had limited success for Japan in the past.
There is one other consequence of the Linux move. Leaving the PC industry aside, electronics companies have traditionally taken proprietary approaches to platforms. Take, for example, the games system market with each of Sega, Nintendo and Sony guarding their own platforms for their own games. With Linux at the core we may well see an opening up of such platforms, with differentiation being on brand and functionality rather than on available software. Whatever happens, there can be no doubt that Linux has won over a large and powerful proportion of the electronics industry. It may not win on the desktop PC but given its huge potential elsewhere – plus the potential for embedded devices to put the squeeze on desktop computers – it is unlikely to be too upset.
(First published 14 July 2000)
07-21 – eCommerce comes in from the cold
eCommerce comes in from the cold
The bubble that is eCommerce may have burst, but that does not downgrade its potential for companies new and old. This is the message that may be garnered from recent events, notably the closure of News International’s Network News service, the acquisition of CDNow by Bertelsmann and the go-ahead given to LetsBuyIt.com by financial analysts.
Perhaps it is the closure of the Network News office in London (with the result that 30 staff are being made redundant) that is the most telling. The intention is to roll the production of online versions of newspapers such as the Times and the Sunday Times back into the paper-based newspaper operation. Clearly running Network News as an autonomous operation did not work; however, we can rest assured that online versions of these papers will continue to be produced.
Secondly, Bertelsmann are buying the beleaguered CDNow for $117 million. CDNow’s shares have tumbled from $21 last July to the current levels of $2.8, and the company has been searching for a buyer since early this year. Bertelsmann will be keeping the CDNow brand going, but the company will become just a front for part of its eCommerce division. Ironically, CDNow also recently closed its London office.
Following last minute discussions, banks are now satisfied that the LetsBuyIt.com launch is worth the risk. By the time you read this, the company should have been launched.
Put all three of these things together and a pattern emerges. The launch of LetsBuyIt.com is an indication that, despite the negative publicity surrounding dot-com companies, there is still money in them there hills. The Gold Rush may be over but the gold mines are still profitable. What is interesting, though, is the change of perspective. LetsBuyIt.com wasn’t given free rein, based on the (now disproved) assertion that being on the Web was a licence to print money. The company had an uphill struggle to convince the banks that it was all worthwhile. The other assertion which can now be laid to rest is that dot-coms are the only way of doing business. Both Network News and CDNow have found that their new-and-improved business models were no substitute for the old, established methods and both have now been brought back in line.
Things are not as simple as this of course. Bertelsmann has gone through substantial restructuring, not all of it successful, to meet the demands of the Web. However it has shown that it can compete. Similarly, by closing Network News, News International are saying “we don’t need a separate company for this – we can change ourselves to meet the demands of the Web.”
This is all sobering stuff and is indicative of the maturity level that the Web has now reached. The promise of eCommerce remains, but not to the detriment of all that went before it. There is still room on the Web for the Bertelsmanns and News Internationals among the Amazons and LetsBuyIts, and vice versa.
(First published 21 July 2000)
07-21 – Unbundling on course, but so was the Titanic
Unbundling on course, but so was the Titanic
Unknown to its navigator, the Titanic was several miles off course when it ran into difficulties with ultimately tragic consequences, seeing its doom for the first time only as the mother of all icebergs loomed out of the fog.
Consider recent events surrounding handling of the unbundling issue. Both BT and Oftel are sending out mixed messages with respect to both opening up exchanges and roll-out of ADSL services. We only have to look at a single recent story on VNU.com to get the picture:
• “The EC has proposed a draft law requiring European Union member states to unbundle their local telephone networks by 31 December 2000”
• “Oftel … is “well on schedule” to meet the December 2000 deadline, even if unbundled services wouldn’t be widely available by then” – a contradictory statement in itself and one which falls short of EC requirements
• “A BT spokesman said the company is on course to meet the deadlines agreed with Oftel, and unbundled services would be working by 31 July 2001.” – another contradictory statement, considering Oftel’s “well on schedule” remarks above.
Other information in the press and on Oftel’s web site muddies the icy waters still further. Rival telcos are to gain only limited access to local exchanges between now and the end of the year, with up to a hundred to be made available for “pilot projects.” The concept of a pilot project does not sit well with that of a “limited service” – Oftel’s term to describe what would be available by year-end.
Clearly it is in BT’s interest to slow down the process in any way it can. ADSL is a key element as it is the “killer app” that will give subscribers real reason to transfer allegiances, hence BT can use any time it has left to roll out ADSL kit to local exchanges, giving its OpenWorld service the incumbent position (“Why wait months for the competitor service? Sign up with BT today!”) and also tying up valuable floor space in the local exchange. Let’s face it, no company with any nous would open the doors to the competition before getting its own act together. Unfortunately for BT, past delays in its ADSL strategy have got the company into the situation in which it now finds itself. It is running out of time, and even the timescales it thought it had agreed were cut by six months in Lisbon.
Meanwhile, the competition are not just settling back and taking this. BT is treading a fine line: if it can be proved (and it would not take much) that BT is damaging the business of other telcos, it could be sued for enormous sums. Companies considering legal action are Fibernet, Colt Telecommunications and Global Crossing; it is likely that others will follow suit.
So where’s the iceberg? The hard deadline is the end of 2000. Neither BT nor Oftel are happy with this, but they do not have much of a case. The EC will not allow BT to restrict its rivals as it is currently doing, nor will the competitors themselves. The difference with the Titanic is that the navigator did not know that the ship was off course, and nobody peering through the fog could have guessed the scale of the iceberg or the damage it could do. In this case, however, there is no fog, only a smokescreen from a company trying in vain to protect assets it no longer really owns.
(First published 21 July 2000)
07-21 – Would mad cows use mobile phones?
Would mad cows use mobile phones?
It’s niggling doubt time. Do mobile phones push out harmful radiation or not? Let’s ask the scientists. Trouble is, can we trust them? It isn’t a case of corruption, but contradictory evidence that is then used by our own, dear politicians to promote their own agendas. In the UK, the starkest example of this was the unfortunate case of “la vache folle”, as they say over the channel. British beef was so, so safe to eat that the then Minister of Agriculture even went on television with his daughter, who was “encouraged” to eat a beefburger on film. We know politicians are manipulative, but to see them stooping so low in public still comes as a shock, especially as pockets of the human form of the disease – CJD – have now been traced to production methods of baby food and school dinners. The over-riding conclusion that we have to reach (apart from the dubious nature of politicians) is that science cannot necessarily be trusted. The adage that “there is no evidence to prove a link” does not mean that there isn’t a link, just that we are too primitive to find one.
And so to mobile phones. To coin a phrase, there is no evidence to prove a link between microwave emissions from mobile phones and brain cancer. Sure, the frequencies used sit in the same microwave region that ovens use to heat water. Sure, heat scans of phone users show the area of the head around the phone is warmer than its surroundings. But – no evidence to prove a link.
It does not matter if the risk is small. While uncertainty reigns (and it always will, until a link is found), it is essential that any potential risk is seen to be minimised. We saw this in the aftermath of the BSE tragedy, when the worldwide ban on British beef was accompanied by the slaughter of thousands of cows. What we have not seen so far is mobile manufacturers working to minimise the risk of microwave emissions. Not, that is, until now.
The Cellular Telecommunications Industry Association (CTIA) in the US has gained agreement from mobile manufacturers to publish the emission levels of mobile phones. So far the only company to agree to the August 1 deadline is Ericsson, but Nokia and Motorola are said to be following suit. In itself, this may not sound like much but this step is a major one. In publishing this information, companies are opening a door to scrutiny. It is inevitable that an emissions league table will be published, and equally inevitable that phones with higher emissions will be rejected in favour of lower-emission phones. Over the years, phone manufacturers have been producing mobile phones with decreasing emission levels. Market forces will give added impetus to further improvements, such as the integration of additional shielding.
The mobile phone issue may or may not be a red herring, but following the BSE calamity we should have learned that trusting science at face value was not an option. Even if the risk is small, it is worth encouraging any move to reduce that risk still lower.
(First published 21 July 2000)
07-31 – Microsoft Windows for free
Microsoft Windows for free
Microsoft looks scarily close to becoming innovative. Sure, the ideas may have been developed elsewhere, but with its games console, set-top box and wireless appliance, the company is really making a go of it. The company previewed several new technologies at an analyst briefing at the end of last week. Meanwhile, at another briefing, Microsoft updated journalists on the operating system state of play – very much in the Microsoft old school. These two lines of attack – old and new – will define the essentials of the company over the next few years.
Forget dot-Net, forget C-Sharp. All those big announcements make nice wallpaper, but they are not really where the action is for a product company. For Microsoft, shareholder value is about shipping product, and in the past it has to be said that they have been remarkably successful at it – more successful, indeed, than any other company in the world.
Traditionally, Microsoft has made its money selling three product lines: operating systems, office applications and development tools. It has wiped the floor with the competition in all three areas, but broad as this market may be, sooner or later it will be saturated. If we look at the operating system announcements, they are in fact (just like their predecessors) details of upgrades rather than anything new. The media player may be updated, the Web browser may support new forms of content but the underlying technology remains essentially the same. With the new release of Windows ME, aimed at replacing Windows 95 and 98, Microsoft are moving closer to a code base shared with Windows 2000 (and its own replacement, codenamed Whistler). Clearly this benefits Microsoft, but is unlikely to cut much ice with the end user. Even the recent forays into new look-and-feels have been little more than a rehash of the old. An OS is an OS is an OS, and that’s all there is to it.
It would be impossibly un-PC (sic) for Microsoft to ditch this model, particularly as there’s life in the old cash cow yet. To all intents and purposes Windows is Microsoft and to question it would be like the Pope denouncing Christianity. So – Windows is still very much strategic, but so is the “great software on any device” tag line – enter the new product lines.
By entering the domain of the appliance – the games console, video engine, wireless PDA or whatever – Microsoft is recognising one crucial fact. With appliances, nobody cares about what is under the bonnet; the external functionality is more important than the internal components. This is the rationale behind other companies (such as \link{http://www.it-analysis.com/00-07-14-1.html,these}) using Linux – there is no operating system sell, it is the box that counts. Also, with Linux there is no operating system buy, as it is license- and cost-free. Microsoft cannot compete at the OS level as others are giving it away, hence they are competing at the level of the complete device. This is a dangerous game – hardware margins are notoriously lower than software margins – but the company has no other choice.
By bundling Windows with the device, Microsoft is essentially giving it away. In doing so it gives itself an exit strategy from the oncoming drought in the OS space. Perhaps more importantly, it can move on without losing face – a necessity in the technology market, where image counts for far more than people give it credit.
(First published 31 July 2000)
07-31 – Napster movement goes underground, people responsible
Napster movement goes underground, people responsible
In the oft-downloaded words of the Carpenters, “We’ve only just begun…” Napster may have until Friday to shut down its operations, but it doesn’t take much to realise that the injunction will not spell the end of online piracy of copyrighted materials. The judgement was against Napster alone, and not the individuals using it – according to \link{http://www.news.com,News.com}, this would require lawsuits to be filed against individuals. Similarly, peer-to-peer duplication products such as Gnutella, Centrata and Akamai (covered \link{http://www.it-analysis.com,here}) are not affected, as this would also require individuals to be pursued. It looks like, whatever happens in the future, individuals hold the key.
Let’s get one thing straight. Piracy of any creative work is a bad thing, as it is stealing from the livelihood of its creator, not to mention the agencies that work on his or her behalf. These may be perceived as “the enemy,” capitalist men in suits who like nothing more than to make a fast buck off the backs of the innocent public. But the major record labels and publishers do provide a necessary service – without them, for example, Harry Potter might still have been a manuscript languishing in the bottom of a drawer. All the same, recordings are priced high and the temptation to make copies of them has been too great for many of us to resist. If there is anybody out there that has not, at some time, taped an album or a song off the radio, speak out! We want to hear from you.
Together, digital quality recordings and the Internet changed a problem that was seen as a necessary evil by the publishing industry into one which could have catastrophic consequences. This is probably a reasonably accurate analysis, if the duplication and distribution of MP3s were not stopped. The question is, can it really be stopped? One company – Napster – has gone down, but others (such as AppleSoup) exist. Is the RIAA going to sue every bunch of students that put together a few lines of code to permit peer-to-peer file sharing? Let’s face it, even an Instant Messaging facility and an email service is sufficient to allow an exchange of information on pirated files, not to mention the files themselves, which could be exchanged automatically using facilities such as ftpmail.
Sooner or later, attention has to turn from the pirating software to the pirates. In the UK in the eighties, a levy was put on blank cassettes to recompense recording companies for lost sales, however it is difficult to see what similar mechanism could be put in place for the Web.
Given the vagaries of human nature, it is possible that an honour system is the only one that will possibly work. Stephen King may have had only limited success on the first day of his online publishing venture but the idea is sound: download a chapter of my book, if you pay I’ll publish the next one. The only difficulty is that it is a one-shot operation – once the entire book is published, being on the Net it will be subject to the same issues as any other online work. The concept of a “second edition” goes out of the window.
As more and more people get online, and as connection speeds improve, the problem of piracy can only increase. This is inevitable, whatever lawsuits may take place – savvy students and others the world over are unlikely to take too much note of the results. If the recording industry wants results, it must appeal to the individuals responsible for both its existence and its possible demise. It may get results, but it will not get everything its own way.
(First published 31 July 2000)
07-31 – TANSTAA Free Internet Service
TANSTAA Free Internet Service
Oh, how the wired world must envy the UK! Over here in Blighty, we have free Internet services, and what a wonderful place it is to be. Well, it would be if only such services worked. Unfortunately the realities are proving shockingly different to the hyperbole.
Take LineOne, for example. This ISP was originally a subscription-based service, before adopting the “free” model where LineOne took part of the cost of the call to finance the service. In April this year, LineOne launched a joint venture with low-cost call provider Quip, in which £5 per month of telephone calls would qualify the subscriber for free Internet access. The initial service was swamped – users reported slow connections after many failed connection attempts: an infrastructure upgrade speeded up the free service but it continued to degrade and at peak times was almost unusable. The problems also impacted on performance for the “paying” customers. Two weeks ago, a letter was sent to the Quip subscribers saying free access would be terminated in September, and the original £20 cost of the Quip box would be refunded in call charges. So – no loss financially, just in time and effort.
Second we have Breathe. At the end of last week the company took the shocking step of disconnecting some of its heavier users and cancelling their subscriptions, all in the name of customer service. A note was sent out saying that the service was discontinued and giving a web address for those affected to claim their money back, but – as one subscriber pointed out – how the heck do you get there, once your Internet connection has been cut off? This move is staggeringly insensitive and will most likely prove very damaging to Breathe’s business. Finally, users of one of the more successful free ISPs – \link{http://www.thefreeinternet.net,The Free Internet} – have been finding that the 0800 calls have been appearing as chargeable calls on their phone bills. Other horror stories, concerning free service providers such as Screaming.net and others, abound.
What is going wrong? There is the inevitable issue of service quality – in LineOne’s case, the company misjudged demand and failed to protect its existing subscribers from its new services. Having decided that its business model is fatally flawed, it looks like the company is extracting itself reasonably well from a situation it clearly finds untenable. As for Breathe, which has also suffered from service level failures, the reaction of the company against its customers absolutely cannot be condoned. It will be interesting in the extreme to see how the company puts this behind it.
The bottom line is the bottom line – there is nothing for free in this world. The business model of The Free Internet is, in fact, a subscription model and it is likely that this is the only model that can work: costs may be reduced to £50 per year, but they do not vanish completely. Companies giving the impression that they can deliver on the promise of 100% free services will be found out sooner or later. Nice dream, nice dream.
(First published 31 July 2000)
August 2000
08-25 – CA shifts innovation from marketing to business strategy
CA shifts innovation from marketing to business strategy
Many companies exist by running two lines of products. There are the cash-cow product lines, which keep the company going through thick and thin, and then we have the showroom products that keep the company looking innovative and which give it a chance to compete in the years to come. For a software industry example we need look no further than Microsoft, which continues to reap the harvest of Windows and Office while announcing new strategies such as .NET to show how it is keeping up with the game. Less obvious, but similarly split, is Computer Associates, which likes to portray itself as a future-looking company whilst all the while relying on its mainframe-based products to bring in the money. Unlike Microsoft (which can guarantee a good few years of revenue from its older stable of products), the clock is ticking for CA. There comes a time for every company to make the new range of products strategic, and leave the older lines to wither a little (and perhaps, to die later on).
CA’s innovation USP is two-pronged. First up is Neugents - software modules that use artificial intelligence technology to draw conclusions, such as the likelihood of a server crashing, or to draw out patterns and trends in business data. The second strand is 3D visualisation, as demonstrated through the wrap-around interface of Unicenter TNG (not to mention the acquisition of graphics company Viewpoint last summer).
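To make the Neugents idea more concrete, here is a minimal, purely illustrative sketch of how a learned model might turn server metrics into a crash-likelihood score. It is not CA’s technology; the metrics and weights are invented for the example:

import math

# Hypothetical weights such a model might have learned from historical telemetry.
WEIGHTS = {"cpu_load": 2.0, "memory_used": 1.5, "swap_rate": 3.0}
BIAS = -4.0

def crash_likelihood(metrics):
    """Return a 0..1 score from normalised server metrics (simple logistic model)."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in metrics.items())
    return 1.0 / (1.0 + math.exp(-score))

# Example reading: a heavily loaded server scores a high likelihood of trouble.
reading = {"cpu_load": 0.9, "memory_used": 0.95, "swap_rate": 0.8}
print(f"Estimated crash likelihood: {crash_likelihood(reading):.0%}")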
The shine was well and truly taken off the CA logo a couple of weeks ago, as Sanjay Kumar took over from Charles Wang as CEO of the company. This unprecedented step (Wang had been CEO since the company’s formation) came as a result of a series of bad performance announcements and profit warnings. Kumar’s stated aim is to intensify efforts in the more innovative, growth areas of CA’s business.
Last week’s announcement of the release of the stand-alone package Neugents ii was the first of many that will no doubt see CA’s key differentiators being brought to the fore. To be fair on CA, the company announced months ago that it would be “componentising” its product portfolio – a strategy which (in the form of Neugents ii) is now starting to get results.
Over the next few months we can expect to see CA release new products to give it a real foothold in what should turn out to be exciting new markets. It would not come as a surprise for Kumar to take the bold step of replacing the old, dependable approach to its mature product lines with one that is more slash and burn than weed and feed. However it remains to be seen whether he can turn around the behemoth and make it dance. Truth be told, for CA to be successful in the long term the real innovation will need to take place in the boardroom and not in the product catalogue. The CA well is drying up, giving the company little choice but to move the more innovative product lines to the centre of its business strategy rather than just its marketing.
(First published 25 August 2000)
September 2000
09-01 – How many Open Source developers does it take…
How many Open Source developers does it take…
There is certainly no shortage of companies offering up their products to the open source movement at the moment. Last week, IBM announced its plans to release the source code to its AFS enterprise file system to the “developer community.” A few weeks ago, as we discussed \link{http://www.it-director.com/00-07-03-3.html,here}, Sun Microsystems admitted to difficulties in opening up Solaris. This relentless march of companies is delivering a growing stack of source code to the seemingly infinite resource pool that makes up the open source movement. The cathedral builders are handing over their plans to the bazaars, and the world will be a better place for it. That’s the theory anyway – in practice, as demonstrated by the recent announcement of a company-sponsored open source laboratory, it is the major corporations that rule the roost.
As far as we know there are no statistics concerning the breakdown of the open source movement into its constituent developer types. It seems reasonable that there are three categories:
- commercial developers, who are salaried workers tasked with the modifications for business reasons
- academic staff and students, with lectureships, research grants and undergraduate projects to spare
- miscellaneous others who dabble or spend their waking hours working on code.
The utopian idea of any of these individuals developing for the greater good of mankind is unrealistic: each has a goal in mind that may be commercial or personal. It is absolutely not the case that every release, by a vendor, of code to the community is seized upon with delight and absorbed automatically into the great open source repository in the sky. Two facts are clear about the release of code – that it does not imply that a vendor is going to stop working on it (rather, that any work will be more transparent), and that there is no real loss of ownership. Just as Linus Torvalds still has power of veto over the Linux kernel, so Sun and IBM will still keep control over their own offerings.
In fact, when a large company “opens” its code, it amounts to little more than opening up its APIs. Sure, you can see the code and modify it if you like, but only if you are willing to take the time and effort to understand it in detail, not to mention construct the development and test environments necessary to enhance it. Anyone that has worked on file system development, particularly something as complex as AFS, knows exactly how unlikely it is that anybody will understand the code, let alone want to modify it. In other words, IBM is nodding in the direction of the open source movement as a whole rather than facilitating anything in particular. Some companies may choose to download the code, and may even decide to build on it, but it is unlikely they would do so without working in partnership with IBM. So – if it isn’t that generous an arrangement, what is it about? Making money, of course, though not directly.
There is nothing new here. Companies have been using the open source model for perceived competitive advantage for many years. Novell, for example, is releasing the source of parts of (note that) NDS in a bid to dominate the directory space against its main competitor, Microsoft. As we reported \link{http://www.it-director.com/99-09-01-3.html,here}, Novell’s own bugbear is TRG, which offers a NetWare-compatible file system as a free download from its \link{http://www.timpanogas.com,web site}. This product is open source and comes with the thinly disguised aim of taking away Novell’s market share. Similarly, it is no secret that IBM is putting its back into Linux, despite having perfectly workable Unix-a-likes of its own (and probably scuppering the 64-bit AIX replacement, Monterey, in the process). Its motives are to reduce the costs of infrastructure software and make life very difficult for companies who depend on such things for the bulk of their revenue.
Open source is not the sub-culture that so many would love it to be. This sub-culture exists, but it is by far a minority in an open source world that is controlled, as ever, by the corporations. The ultimate success of open source, to be adopted by the mainstream, will also be its doom.
(First published 1 September 2000)
09-01 – Marillion.com – turning the record industry on its head
Marillion.com – turning the record industry on its head
It is no secret that the music business makes a fickle bedfellow. Marillion may be thought of today, if at all, as the prog-rock group that had a brief string of folksy hits in the eighties before losing its hydrophilic lead singer and fading into obscurity. This popular perception is not shared by the ever-loyal following that the band still enjoys, however it has been proving increasingly difficult for Marillion to make any mark on the music mainstream. That is, until last month, when the band proved it still had something to sing about by using the power of the Web (sic) to leverage a major recording deal.
This is how it works. A band wanting to make an album gets an advance from a recording company, to be set against estimated royalties. When the CD hits the shops, the first tranche of income is used to pay back the advance. Inevitably, less popular bands have less negotiating power with the recording companies, hence advances (and good percentages) can be hard to come by. In addition, such contracts give very little flexibility in terms of how the music is released, or even what should be included on the album.
Even bands with a good following still need the advance – to cover the production costs, not to mention that the musicians have to eat. Like many bands that are fed up with living hand-to-mouth, Marillion has been puzzling over this conundrum – how to make an album that it knows it can sell, without putting itself at the mercy of the recording industry? The answer is as simple as it is profound.
In June this year the band members put out an appeal on their \link{http://www.marillion.com,web site}, and emailed their fans. The question was this: “If we asked you to pay in advance for the next album, would you do it?” The response was staggering – nearly five thousand respondents said they would, leading Marillion to take a leap of faith and go ahead with the idea. Estimates suggest that over £50,000 of advance orders have already been taken.
The benefits are clear. First of all, Marillion has been given the wherewithal to record an album, with the only commitment being to deliver a CD early next year. Marillion have no obligations to any record company, so there are no licensing issues and no limitations on what the band can do artistically. This leads to the second point. What Marillion have done is show that the album will sell – effectively, it already has. This puts them in a very strong position to negotiate for its distribution. According to Lucy Jordache, Marillion’s marketing manager, most of the major labels were keen to accept Marillion back onto their books before EMI won the deal. This time, without the shackles of the advance, the band has been able to put together a very nice package indeed.
In using the Web, Marillion’s goal was not to cut out the corporate middlemen, but to give themselves some security and a stronger negotiating position. This reflects the stance taken by Stephen King, whose recent publication of “The Plant” at $1 per instalment threatens to be “Big Publishing’s worst nightmare”. King was quick to agree that, though this might knock the publishers off their perch, it would not completely destroy the publishing industry. At the same time it was seen as trailblazing for “midlist … and marginalized writers who see a future outside the mainstream,” according to the author. At the time, Stephen King’s move was accused of being more hype than substance. However, few could deny that Marillion have managed to get tangible results from their online strategy.
In the words of the Marillion song King, “they call you a genius cause you’re easier to sell.” With bands and authors selling themselves over the Internet, the recording industry looks set for interesting times ahead.
(First published 1 September 2000)
09-08 – mCommerce – less big bang, more whimper
mCommerce – less big bang, more whimper
It would be dangerous to say that analyst firms make things up, but of two reports just out, one must be wrong. The first, from Forrester Research, puts the value of mobile transactions at £3.2 billion by 2005. The second, from IDC, says it’ll be £25 billion by 2004. If we’re not mistaken, that’s a difference of nearly a factor of eight.
It has to be said that neither figure is particularly small. The smallest percentage of Forrester’s estimated figure would still be a worthy addition to the revenues of any company. All the same, it is not enough to send businesses into a panic in the same way as, say, eCommerce has done. From Forrester’s point of view, the figure represents a mere 3% of online retail revenues – let’s face it, most businesses are still trying to get their heads round how they can get a slice of the other £95 billion. Even given IDC’s more inflated figure, mCommerce is still the icing but eBusiness is very much the cake.
Much would appear to depend on the form factor of tomorrow’s mobile devices. Mobile phones are small, neat and almost impossible to do anything with, other than make phone calls. Given this, mCommerce transactions need to be as easy and quick to perform as tapping in a phone number and saying “I’d like to order a pizza, please.” There are some obvious wins for mCommerce, for example online betting at crowded sports events. However many currently touted applications for mobile devices are deep pan pie in the sky – Robbie Coltrane may want to check his bank balance from his hotel room, but I can’t see the need myself.
The hype around mCommerce is exactly that – hype. Fortunately it appears that the bubble is bursting well in advance of it doing any serious damage. There can be no denying that location-sensitive services will become a reality and our fast-paced city dwellers will no doubt decide that the world is a better place for them. However this is the stuff of the future, and even then it will not have a serious impact on the online transaction count. Here’s a simple exercise – list all the things you might need to buy when in one place, compared to all the things you might buy going from one place to another. One list is big and the other is small, right? QED.
Things will be different in ten years’ time, when UMTS broadband access is available to mobile devices. By that time it will be questionable whether most punters will know, or care, whether a given transaction is taking place from a land-based or a wireless connection. In the short term, the additional functionality being built into mobile phones will sit, and wait, until its users determine a real use for it.
(First published 8 September 2000)
09-08 – Sony - lies and on the take?
Sony - lies and on the take?
Let’s just get this right. The Sony PlayStation 2 has sold 3 million units so far, the US has just pre-ordered 1.5 million units and meanwhile, in the UK, only 200,000 of the things are going to be available for Christmas. The nice people from Sony are trying to tell us that this is not a marketing ploy and, frankly, we believe ‘em.
The UK pre-order launch of the Sony PlayStation 2 happened at midnight on Thursday last week. High street retailers such as Dixons opened their doors to bleary-eyed queues, determined that little Jonny would get what he wanted this Christmas. Consumers hoping to wait until after the weekend may well be disappointed.
According to Sony’s PlayStation \link{http://www.playstation-europe.com/hardware/playstation2.jhtml,web site}, the European release date for the PS2 has been delayed to November 24 “due to completely unforeseeable and unprecedented consumer demand for the PS 2 in Japan.” This may or may not be true – production is production. However, its paltry rationing of the devices for the UK market smacks ever-so-slightly of “looking after one’s own.” Does that sound at all like sour grapes to you?
With what is currently the most powerful games platform (and, arguably, some of the best titles) out of all the console manufacturers, Sony are in a position to gamble with the consumer. By the time that the PS2’s nearest rivals, Microsoft’s X-Box and Nintendo’s GameCube, get anywhere near the market, Sony expect to have already shipped a good 10 million units across Europe, roughly half of what the company expects to ship worldwide this financial year. Competitors will arrive at the party only to find that the guests have already moved on (no doubt forming an orderly queue for the PlayStation 3). Meanwhile, before the event, Sony can afford to pump up the prices (to £100 more than is being paid in the States) and justify an additional 25 quid for the right to buy.
Grumpiness aside, there can be no doubt that Sony is onto an absolute winner with its games console. The manufacturer is already a victim of its own success – fortunately it has no real competition at the moment, otherwise it could have missed the boat. Ask any child which device he would rather have and brace yourself for the reply that will appear obvious to anybody under 20. “Why, a PlayStation of course!” This attitude does not limit itself to the younger generations – we know plenty of seemingly mature adults who are holding out for this latest device.
How exactly the company has done it is unclear, but there can be no denying that (in the material world we call Europe) the PS2 is a thing to have. If it achieves nothing else, the arrival of the PS2 will set the minimum standard for games consoles. Bargain hunters, look out for older devices going for a song in second hand stores and online auctions, from January 2001.
(First published 8 September 2000)
09-08 – XML – in, out, in, out
XML – in, out, in, out
There were two seemingly conflicting reports about XML adoption last week - both appeared in UK IT rag Computer Weekly. The first trumpeted the fact that the Police in Scotland have decided to adopt an XML-based architecture for its main applications. The second dealt with the decision by the National Health Service (NHS) to stick with an EDI-based strategy. So, is XML right or is it wrong? Is XML ready or is it not? Perhaps most importantly, does the UK have a coherent strategy or does it not?
It is pleasing to note that the Scottish Police are not jumping straight into XML with both feet. A firearms licensing package has been built as a proof-of-concept system. This package is now to be piloted by Fife police force, before being rolled out across Scotland. Should the pilot prove successful, additional applications are planned including “command and control, personnel, custody, intelligence and crime management,” according to Computer Weekly. Already, sponsors of the firearms package are expressing delight at the fact that it took only 6 months to write, rather than the “six months … it would have taken in the past.” How very encouraging.
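For readers wondering what such an XML-based exchange might look like in practice, here is a minimal, hypothetical sketch in Python; the element names are invented for illustration and bear no relation to the actual Scottish Police schema:

import xml.etree.ElementTree as ET

# Build a hypothetical firearms-licence record as XML.
licence = ET.Element("firearmsLicence", id="FL-0001")
ET.SubElement(licence, "holder").text = "J. Bloggs"
ET.SubElement(licence, "force").text = "Fife"
ET.SubElement(licence, "expires").text = "2003-09-30"

document = ET.tostring(licence, encoding="unicode")
print(document)

# Any other application can parse the same document without a bespoke EDI mapping.
parsed = ET.fromstring(document)
print(parsed.findtext("force"))  # -> Fife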
Meanwhile, south of the border the NHS is painting a different picture about XML. Far from the “flexible, scalable” adjectives being used in Scotland, NHS executives talk about XML being “uncoordinated” and less reliable than EDI. “If we go down the XML route now we would have to wait two years to get proper protocols,” said Rick Jones, author of a report into the use of XML for medical applications.
There seems to be a difference of perspective here. The Scottish Police are saying “let’s give it a go, nothing ventured, need the experience” and are having some good results. The NHS meanwhile are saying “not my job mate, it ain’t ready, we’ll stick with what we’ve got.” These approaches are diametrically opposed and only one of them permits the organisation concerned to keep on top of the new technologies. By burying its head in the sand, the NHS is missing out on more flexible, interoperable architectures, not to mention infrastructure and development cost savings and the potential for new applications.
It is worth mentioning a point made in each of the reports – who is in charge. In the Scottish case, the forces themselves, driven by the Scottish Association of Chief Police Officers, decide IT strategy. Meanwhile in the NHS, it is the government Central IT Unit, Citu, that decides policy. It is fair enough that no organisation should be pushed into making technology decisions too early or for the wrong reasons, particularly if it relates to the health service. At the same time, it is unacceptable that the NHS rejects Citu’s strategy for reasons of “not invented here.” At the very least, the NHS should be prepared to adopt the XML strategy in principle, and to pilot it in non-contentious or low-risk areas.
(First published 8 September 2000)
October 2000
10-11 – Bluetooth PANs the Wireless LAN
Bluetooth PANs the Wireless LAN
The arrival of Bluetooth, the short-range wireless communications standard, is moving ever nearer. Products are not due until the latter half of this year but the announcements are now coming thick and fast, with the IT majors now joining forces with the technology providers to make it happen. At the end of last week, for example, IBM announced it was partnering with TDK to develop Bluetooth solutions for its Thinkpad range of laptops. Despite the fact that TDK is not known as an IT company, it has carved itself a niche as a producer of networking devices that meet the standard formerly known as PCMCIA. Back in 1998 TDK was quick to pick up on the potential of Bluetooth, and has been supporting standards work and developing products ever since. IBM, too, has been an active proponent of the technology but has recently been pulling back from the production of networking devices. The result: IBM has a partner that can deliver, and TDK has a conduit for its products to die for.
The principle behind Bluetooth is simple. Devices broadcast a wireless “hello, I’m here” into the ether and listen out for any responses. Should one device detect another, the two will form a loose network, known as a Personal Area Network or PAN. PAN clusters can then form into a kind of super-PAN using hub units that act as a switch between different PANs. The active range of Bluetooth is ten metres: the intention is not to replace existing LAN technology but more to enable locally positioned devices to interact, for example a PC with a printer or a mobile phone with a PDA. If the standard had anything in its sights, it would be the infra-red standard IrDA, but Bluetooth is primarily complementary – it augments existing facilities rather than replacing them.
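As a rough illustration of the principle – a toy sketch rather than the real Bluetooth inquiry procedure – the following Python models devices within the ten-metre range answering a broadcast and clustering into a PAN; the device names and positions are invented:

# Toy model of Bluetooth-style discovery: devices within 10 metres of the
# broadcasting device respond to its "hello" and are grouped into one PAN.
RANGE_METRES = 10.0

devices = {
    "laptop": (0.0, 0.0),
    "printer": (3.0, 4.0),     # 5 m from the laptop
    "phone": (8.0, 0.0),       # 8 m from the laptop
    "pda": (30.0, 30.0),       # out of range of everything else
}

def in_range(a, b):
    (x1, y1), (x2, y2) = devices[a], devices[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= RANGE_METRES

pan = {name for name in devices if in_range("laptop", name)}
print("Devices answering the laptop's hello:", sorted(pan))  # the pda stays outside the PAN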
This is all well and good, but (and there is always a but) Bluetooth isn’t the only wireless technology on the block. There are a number of vendors who are indeed setting their sights on the LAN. These include Apple, which has already released its own wireless protocol. Most likely to succeed is the wireless Ethernet standard, behind which a consortium of vendors is already lining up. However, through the fault of nobody in particular, Bluetooth is proving to be the fly in the wireless ointment. The two wireless standards can, and do, interfere with each other making it difficult to run both PANs and wireless LANs.
Steps are being taken to minimise the damage. Symbol Systems, for example, has included frequency-hopping in its 2Mbps wireless LAN technology so that if interference is detected, it can be countered. This has all the hallmarks of a workaround, and apparently it does not work for higher data speeds. The current advice is not to use the two in the same part of a building: clearly this is not a rule which builds confidence.
Where to go from here? Data communications experience suggests that, when two or more protocols get together, some bright spark designs an interface between them. It may be that if wireless LAN devices can also talk Bluetooth, both standards can exist in the same frequency range. Ultimately the question arises whether more than one wireless protocol is necessary at all: time will tell but, even though the standards do not overlap in principle, it may prove unwise to focus too heavily on one technology.
(First published 11 October 2000)
10-11 – Geoworks WAP licensing – fast buck or insurance policy?
Geoworks WAP licensing – fast buck or insurance policy?
Geoworks, the company perhaps more famous for its small footprint operating system than its wireless credentials, has been coming under fire in recent weeks. Its crime is to enforce a patent, which the company holds on a part of the wireless protocol that is essential to WAP. So – is Geoworks on the make or is it protecting its future?
The problem started back in January, when Geoworks announced its intentions to introduce a licensing scheme for WAP vendors. On the surface the licensing scheme seems reasonable enough. Individual vendors wanting to make use of the Geoworks technology need to pay a flat fee of $20,000 per year. If the company in question earns less than $1 million, the fee drops to $25, a veritable bargain it has to be said. It is with the service providers that things get interesting, as the fee equates to $1 per year, per service user. For a company such as Vodafone, which now handles over 10% of the world’s handset users, the sums would become phenomenal. It is unlikely that the larger players will pay the book cost for the privilege of using WAP but nonetheless Geoworks looks set to make a pretty penny on the arrangement. Stockholders certainly think so, with shares doubling in value on the day of the announcement.
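Taking the quoted figures at face value, the back-of-the-envelope sums look something like this; the operator subscriber count below is a placeholder assumption for illustration only, not a reported number.

```python
# Back-of-the-envelope look at the Geoworks WAP licence fees quoted above.
# The subscriber figure is a placeholder assumption for illustration only.

flat_fee_per_year = 20_000        # vendors earning over $1 million
small_vendor_fee = 25             # vendors earning under $1 million
per_user_fee = 1                  # service providers: $1 per user, per year

assumed_subscribers = 50_000_000  # hypothetical Vodafone-sized operator

print(f"Large vendor:   ${flat_fee_per_year:,} per year")
print(f"Small vendor:   ${small_vendor_fee:,} per year")
print(f"Large operator: ${assumed_subscribers * per_user_fee:,} per year at book price")
```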
Of course Geoworks is not the only company to bring up the issue of licensing. Indeed, the company is a pussy cat compared to some of the larger players in the WAP forum, namely NEC and Phone.com. As a smaller player, Geoworks commands more sympathy: with just over 100 people in the whole company, it cannot afford to be frivolous about its R&D spend. Neither does it have the luxury of giving away key technology for the greater, in this case wireless, good. On the make it may be, but Geoworks has few other options.
Despite the underdog stance, Geoworks has succeeded in putting the cat amongst the wireless pigeons. In an interview with IT-Director.com, Ken Norbury, General Manager of Geoworks in the UK, agreed that the stance was “upsetting to people not in the know.” This was a view echoed by the chairman of the WAP Forum, Greg Williams who said on News.com “I’m not saying at all that Geoworks [tried to take advantage of the process] … I think what they’ve tried to do was set a price that is fair and reasonable, as anyone would.”
When the annals of IT history are finally written, it will not be the actions of individual companies that count so much as the combined effects. The longer-term necessity for WAP is already being called into question. While Geoworks’ position is understandable, the licensing issue may prove to be the final straw for an already weakening standard.
(First published 11 October 2000)
Cloud Pro
Posts published in Cloud Pro.
2012
Posts from 2012.
June 2012
06-08 – The Problem Is Not Whether Breaking The Law
The Problem Is Not Whether Breaking The Law
Cloud Society: Case Law and the Cloud
Can legal frameworks respond to the TMI society?
When Hunter S. Thompson first wrote, “In a closed society where everybody’s guilty, the only crime is getting caught,” he clearly wasn’t taking into account the impact that technology would have over the following forty years, on both crime and its consequences.
Crime and Punishment
Crime is not absolute, and it never has been. We laugh at the incongruities of older legal frameworks and raise an eyebrow at the punishments still present in some societies, all the while feeling assured that whatever our cultures disallowed in the past, however draconian the remedies, we are (oh, so!) much more civilised now.
The fact is, however, that the human animal is no more criminal, nor noble, than it ever was. Each country creates its own legal frameworks according to factors including what its citizens feel able to tolerate at the time, the impact of any incidents and, indeed, the ease with which perpetrators can be caught.
In the UK, for example, it wasn’t all that long ago that it was illegal to commit adultery, or even more recently, to be openly gay. Homosexuals didn’t suddenly stop being criminals in July 1967; rather, society (as represented by the government of the time) widened its goalposts to accommodate what it perceived as acceptable behaviour.
Meanwhile we have ‘illegal activities’ such as speeding – in quotes because of their utter variability by country. The motorway speed limit is set at 130kph (about 80mph) on the continent, or 110kph (70mph approx.) if it rains. Everyone breaks the rules. Last year the UK government proposed to raise UK speed limits to 80mph on the basis of “restoring the legitimacy of the speed limit.” Yet critics argued it would simply mean more people would drive at 90mph – both standpoints highlighting the perception that the limit is seen as a guiding hand, rather than an absolute indicator of criminality.
Crime and punishment go hand in hand and similarly, punitive measures available to judges have changed over the years. We no longer have the death penalty in the UK, and the measure remains rightly controversial in countries where it is practiced. Punishment, be it a fine, incarceration, community service or whatever, aims to serve consequences to the convicted, recompense the victim and discourage anybody who might be considering a similar path.
As such, a punishment can increase for a similar crime if a judge, or government deems that additional discouragement is necessary – a situation we saw clearly (LINK: http://www.guardian.co.uk/commentisfree/libertycentral/2011/jan/24/deterrent-sentences-edward-woollard) at the time of the 2011 riots.
Technology – the amplifier in the sky?
What’s all this got to do with technology? The fact is, against such a many-faceted, context-sensitive and interpretation-based background, we are quite suddenly able to catch a great many more folks at it. At the same time, society’s voice has become several decibels louder – and more reactive – by virtue of the all-amplifying Internet.
The former point goes to the very roots of how our legal system was set up. Of course some crimes lie way beyond the threshold of what might be considered ‘minor’, and the role of technology in taking the bad guys down is not to be underestimated. Equally, many minor crimes are already categorised as such, and dealt with accordingly.
A broader category of illegal behaviour exists, determined largely by society’s desire to influence the activities of its citizens. Speeding is one example; use of certain substances is another – as illustrated by various attempts to reclassify cannabis, for example. Some acts are not actually damaging in themselves, but pose a perceived risk – incitement, for example, or negligence. No absolutes exist in such cases; rather, a value judgement needs to be made which will tend to reflect the times.
Which brings us to society’s increasingly loud voice. That individuals can make their views heard is generally applauded – activist sites are now able to cast a much brighter light onto despots, dodgy dealings and indeed, miscarriages of justice. The media also pick up cases and run with them, whipping up interest based on the judgement of editors as to what their readerships might want, for better or worse. Faced with such a clamour, it is unsurprising that judges feel obliged to respond.
We’ve seen a number of examples in recent times that highlight the dangers of these converging factors. The case of Paul Chambers for example, whose ‘joke’ on Twitter about Robin Hood airport led to him losing his job and gaining a criminal record. No damage was intended, nor done, nor indeed likely – yet Mr Chambers was used as an example to others, for deterrent effect. The verdict is still in the courts, being appealed again after a High Court decision.
Or Jacqueline Woodhouse, jailed for 21 weeks having launched a racist tirade at an unfortunate victim, all of it captured on a mobile phone. The fact she was drunk at the time did little to help her defence. Or Liam Stacey, whose wholly despicable (and apparently alcohol-fuelled) remarks about critically ill footballer Fabrice Muamba earned the student a 56-day sentence.
Or indeed, Jordan Blackshaw and Perry Sutcliffe-Keenan, the likely lads jailed for four years each for incitement to riot, having created the Facebook event “Smash d[o]wn in Northwich Town” in August 2011. The fact that no such event took place was seen as irrelevant by the judge; according to Eric Pickles, then Communities Secretary, “exemplary sentences” were necessary – his statement reflecting the general horror of the time, that such rioting took place.
Can we handle the truth?
While each of the above decisions is the topic of debate – indeed, that’s why we have courts of law, to enable such discussions to take place – they share several characteristics: that they only came to light thanks to the technologies we have in place; that they involved outbursts or misguided statements; that sentencing took into account the times in which the statements were made.
Another consequence is that – by design – the sentences influence society’s behaviour down the line. Surprising as it may be to the interconnected, many people are yet to discover the joys of social networking and are still to learn what is a nascent moral code about life online. We have still to agree, for example, what is acceptable and what is not – as illustrated by the tirade of unprintable, highly abusive language directed at Joey Barton following his recent on-pitch aggression, or indeed Louise Mensch simply for having the audacity to say what she believed.
Nobody was prosecuted for their participation in what amounted to blatant online bullying – no cases were brought. Maybe they should have been. But meanwhile, the prosecutions of others, sometimes simply because their statements can be interpreted according to laws drawn up long before the existence of the Internet, that most public of places, need equally careful consideration.
Not least because we haven’t even scratched the surface. Technology isn’t finished – indeed, in fifty years’ time we will likely look back and chuckle at just how primitive we were. New developments in face recognition, data aggregation and analysis, the increasing use of embedded cameras – each innovation offers new ways of finding things out, of capturing and sharing the moment, of linking one piece of information with another. Even today, for example, mobile phone companies hold all the information anyone needs to determine when any citizen has been speeding. All it needs is a subpoena.
It’s not Big Brother we need to worry about, it’s the scurrying mass of little sisters, each with a tale to tell. Right now, many people still act as though such innovations had not happened, but it would appear that ignorance is no defence. To return to the case of Mr Chambers, it’s not just whether his act was ‘criminal’, but also the impact it has on anyone else who now feels a little less inclined to express humour in case it is misconstrued.
Raising the debate
Of course we all need to be responsible for our actions, online and offline; of course we need to take into account the victims; and of course we all want to live in a safe, just and open society in which the perpetrators of bad things face the consequences of their actions. Internet trolling (LINK: http://www.bbc.co.uk/news/uk-england-kent-17900962), cyber bullying and other despicable online activities are no less crimes simply because they use the Internet.
However, even if we manage to avoid the surveillance society, we are being drawn inexorably towards a more transparent world in which the smallest actions can be logged. During this historically interim period in which technology is having such an unprecedented impact, we should be actively debating the legal side effects – in parliament if necessary – of the fact that we are all more publicly visible than we may understand, or indeed intend. Such deep questions as these require a full treatment, and we should not simply leave them to context-sensitive case law.
To end on a positive note, in the UK at least we can feel fortunate that we live in a more liberal part of the world – not every occupant of the global village is so lucky. For this very reason we need to take appropriate steps now, to ensure that we continue to benefit from such luxuries. Otherwise, if Hunter S. Thompson is right, we’re going to need a lot of jails.
August 2012
08-15 – Storage As A Service
Storage As A Service
When did storage as a service fall into the trough?
So, industry advisers Gartner released their cloud computing hype cycle (LINK: http://www.business2community.com/tech-gadgets/gartner-releases-their-hype-cycle-for-cloud-computing-2012-0241167) at the beginning of August. While these models are largely aimed at large organisations, it’s interesting to reflect on what they mean for smaller businesses and consumers.
I know, I thought, why don’t I put together the hype cycle picture for the little people? It would be much simpler, for sure – probably including software as a service (or online applications), social networking, hosted email and collaboration, hosted security and storage. Most of these, I thought, would probably lie somewhere along the ‘slope of enlightenment’, that is, generally accepted by those who found them useful, ignored by those that didn’t.
The only niggle was around hosted storage. It doesn’t help that I’m just back from holiday and, as usual, I have been a little trigger happy in the photographic department. I’ll spare the maths but 800 photos at 12 megapixels, plus a number of short HD films amounts to about 7 gigabytes, barely dipping into the space available on the cards in my two cameras. However, I confess that the question from a certain online storage provider: “Did I want to upload my photos into the cloud?” was met with a good-humoured snort.
It’s not just about the time it would take to upload all those pictures; a cursory glance at pricing models suggests that anything over 100GB of storage would cost between $200 and $500 annually. Moving from the individual to the business, the price goes up proportionately with the number of users – even with discounts, it quickly becomes cheaper to buy a small NAS. An appropriate point, you may think, to play the ‘horses for courses’ card – of course it is important to weigh the relative benefits and costs of hosted versus in-house services, and make decisions accordingly.
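For the curious, the maths I spared you runs roughly as follows; the per-photo file size and the NAS price are ballpark assumptions, while the hosted price band is the one quoted above.

```python
# Rough, illustrative numbers only: the JPEG size per 12-megapixel photo and
# the NAS price are ballpark assumptions; the hosted band is as quoted above.

photos = 800
mb_per_photo = 5          # assumed average JPEG size for a 12-megapixel shot
video_gb = 3              # assumed total for the short HD clips
holiday_gb = photos * mb_per_photo / 1024 + video_gb
print(f"Holiday haul: roughly {holiday_gb:.1f} GB")

hosted_cost_per_year = (200, 500)   # quoted band for over 100GB of hosted storage
nas_cost_once = 300                 # assumed price of a small multi-terabyte NAS
years_to_break_even = nas_cost_once / hosted_cost_per_year[0]
print(f"A one-off NAS purchase pays for itself within about {years_to_break_even:.1f} years")
```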
However for reasons of bandwidth and cost above all, it remains difficult to imagine ‘public cloud storage’ as anything other than a synchronisation or collaboration tool for a subset of corporate content. Given that the picture isn’t much different for larger companies, it is unsurprising that cloud storage finds itself free-falling into Gartner’s trough of disillusion. While the capability is given the benefit of the doubt (it will one day reach the plateau), hosted storage vendors may have to think a little harder about the services they offer, if they don’t want to become just another niche technology.
08-28 – Tiered Cloud Storage
Tiered Cloud Storage
Cloud Society: In the future, all storage will look like this
We all have a tendency to see things from wherever we are standing at the time. This human trait applies directly to technology adoption, leading sometimes to rash assumptions about acceptability (“They’re all going to love this!”). Other times, vendors and service providers come up with new propositions without realising just how profound their impact might be.
For a recent example we need look no further than Amazon’s Glacier launch (LINK: ). This new service offers extremely low cost storage – down to 1 cent per gigabyte per month (LINK: http://aws.amazon.com/glacier/pricing/) – with the trade-off that data may take a few hours to retrieve. While certainly not suitable for video streaming, it has a clear role in backup and archiving – the former for risk management, the latter to offload unnecessary data currently cluttering front-line systems.
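At the quoted cent-per-gigabyte rate, the sums are easy enough to run; the volumes below are arbitrary examples.

```python
# Illustrative only: what the quoted Glacier rate of $0.01/GB/month amounts to.
glacier_rate = 0.01  # dollars per gigabyte per month, as quoted at launch

for gb in (50, 500, 5_000):  # arbitrary example volumes
    print(f"{gb:>6} GB parked for a year: ${gb * glacier_rate * 12:,.2f}")
```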
This reasonably simplistic offering disguises its more profound nature, which may be summarised by saying: Amazon Glacier cannot exist in isolation. The downside of this statement is that the service provider is unlikely to see massive adoption, at least not initially – Amazon will not experience a sudden rush of customers who were already waiting at the door, wondering what to do with all those 50 gigabyte files they only needed to access sporadically.
As an offload mechanism, very low cost storage needs to work in tandem with other, more expensive storage types. This principle was captured by StorageTek engineers a decade ago, when they coined the term ‘Information Lifecycle Management’ or ILM. The central point was that information value has a decay curve – content and data loses value over time, with a resulting impact on how quickly it needs to be accessed. While this ‘long tail’ model may not apply across the board, it is relevant to plenty of data categories.
The result, as its advocates quickly found, was that the key to successful ILM was to ensure data could be moved as efficiently as possible. As data became less relevant it could be moved onto lower cost storage or even archived onto tape. If it became important again, or in the case of primary storage failure, it could be moved back.
Falling storage costs meant that many organisations did not quite deliver on the full ILM vision – put simply, it was easier to replace a storage tier than build an infrastructure based on migration principles; before long, marketing executives were looking for new drums to bang and the term fell into disuse. However the idea remains sound and if the runes are correct, it will no doubt see a resurgence following Amazon’s launch. For the service to succeed, there will be no option.
Indeed, the challenge goes far wider than simple archiving. The current lack of appropriate data migration capabilities could well be the reason why Gartner cites Storage as a Service as languishing (LINK: http://www.cloudpro.co.uk/iaas/cloud-storage/4367/cloud-society-when-did-storage-service-fall-trough) in the trough of disillusion. Meanwhile, the future success of synchronisation services such as Apple’s iCloud, as they grow beyond basic data to a broader range of content types, depends on a comprehensive yet low cost way of storing information online.
What are we likely to see? Almost inevitably, we can expect some kind of tiered model emerging from each provider, with integration between tiers. For example, Amazon’s S3 will need to offer a migration path to and from Glacier. Over time, the two products will become features of a larger whole, in which punters pay for access speeds and resilience features, not just volumes. That’s not a massive insight, more common sense.
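By way of illustration, an ILM-style tiering rule can be as blunt as the sketch below; the thresholds and tier names are invented for the example rather than a description of any provider’s actual offering.

```python
from datetime import date, timedelta

# Invented thresholds for illustration: the point is the migration rule, not the numbers.
HOT_DAYS = 30        # recently touched data stays on fast (expensive) storage
WARM_DAYS = 180      # older data drops to standard object storage

def choose_tier(last_accessed: date, today: date = None) -> str:
    """Pick a storage tier from how recently the object was last accessed."""
    today = today or date.today()
    age = (today - last_accessed).days
    if age <= HOT_DAYS:
        return "fast block storage"
    if age <= WARM_DAYS:
        return "standard object storage (S3-like)"
    return "archive tier (Glacier-like)"

print(choose_tier(date.today() - timedelta(days=400)))  # -> archive tier
```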
Equally obvious is that we are moving towards a model in which capacity is considered to be ‘infinite’ – at least in terms of meeting requirements. Large and small businesses, employees and consumers all stand to benefit from an approach which charges according to need, rather than raw capacity.
Such a model also resolves the issue of locking in with a vendor whose costs quickly become prohibitive beyond a limited number of gigabytes. A service that charges according to what someone is using right now, as opposed to all the content they have ever created, is far more likely to achieve success.
It remains to be seen what such a service, involving automated data migration between different tiers of cloud based storage, will end up being called once the marketing wonks and analysts get their hands on it. Whatever its name however, you heard it here first: get the data migration piece right and the rest will fall into place.
September 2012
09-21 – Bad Things Good Services
Bad Things Good Services
Cloud Society: when bad things happen to good services
Finally, cloud computing has had the reality check it needed, says Jon Collins
Cloud computing’s been in the news, and it’s not been pretty. First Amazon’s AWS service was rendered inoperable (LINK: http://www.forbes.com/sites/reuvencohen/2012/07/02/cloud-computing-forecast-cloudy-with-a-chance-of-fail/) at the beginning of the summer, and then GoDaddy’s hosting services (along with thousands of web sites) were brought to their knees (LINK: http://www.dailymail.co.uk/sciencetech/article-1379474/Web-chaos-Amazon-cloud-failure-crashes-major-websites-Playstation-Network-goes-AGAIN.html) by those dastardly chaps at Anonymous. Both nasty and both, the naysayers might say, illustrations that the cloud can’t be trusted.
But such examples could be exactly what was needed. Many observers (including myself) have long argued that the cloud should not be trusted, not 100%, any more than you would 100% trust a car, or a contract, or a person in authority (and that’s relevant, given that some outages with Google services have been put down to human error).
Trust should never be an absolute, and neither should it be the only factor on which to base decisions. With the best will in the world, cloud service delivery also requires reliable bandwidth (which is frequently not the case, even in fixed locations), simplicity of administration and control of costs. None of which are guaranteed, nor necessarily anyone’s fault when things go wrong.
There’s simply no such thing as the perfect service – one which can be expected to just work, to deliver on requirements in all circumstances. Architectural decisions – that is, deciding where to locate technology to give the best results – are predicated on the fact that imperfection is part and parcel of the whole technology landscape. Failure prevention should be built into the architecture – if a service cannot be trusted, some kind of fall-back needs to be in place.
Indeed, the only people who were ever saying that cloud computing is ‘The Answer’ as opposed to ‘one potential service delivery mechanism that needs to be weighed against, and integrated with other options’ were, you guessed it, marketing executives that worked for companies promoting the benefits of cloud computing. They’re not wrong that benefits exist; they are within their rights to ignore or downplay other choices. Technology buyers and their non-tech-literate bosses would be naïve to think otherwise, and should plan accordingly.
So, yes indeed, read stories about this service or that service going out of action for a while and consider them carefully, not with a sense of disappointment but relief that a veil has been removed from the eyes of the decision making process. With eyes wide open, organisations can start to make the right decisions about where and how to integrate cloud services with their own in-house kit, to deliver a set of capabilities which – even if it can’t be 100% trusted either – can most closely meet the needs and all-too-imperfect realities of the business.
October 2012
10-01 – Cloud Society Broadband As A Barrier To Cloud Service Adoption
Cloud Society Broadband As A Barrier To Cloud Service Adoption
Cloud Society: Broadband as a barrier to Cloud service adoption
UK Broadband should focus on acceptability, not speed
Some interesting statistics lie buried on Page 40 of the Ofcom report (LINK: http://stakeholders.ofcom.org.uk/binaries/research/broadband-research/may2012/Fixed_bb_speeds_May_2012.pdf) into residential broadband speeds. Issued back in May, the detailed paper quite rightly applauded the 22% increase in UK average broadband speeds overall. “It is encouraging that speeds are increasing,” said Ofcom’s Ed Richards, as you would expect. The average is now 9.0Mbit/s, which (one would think) should be more than enough for most uses.
The fly in the ointment appears when one looks at the distribution of download speeds across the board. Over half of those tested have an average of up to 6Mbit/s, which illustrates that the headline figure is pulled up by a smaller proportion of homes which happen to have very high speed broadband connections, for example over fibre or cable. Only 37% achieve an average of up to 4Mbit/s, and 14% of those tested, distributed across the UK population, can only make use of 2Mbit/s or less.
Downloads are one thing – the “A” in ADSL2+ means “asymmetric”, and upload speeds tend to run at about 10% of downloads. In broad terms, a third of the connected UK population can upload data to the Internet at 300-400kbits/s, and a third of this group can upload at half that. To put the speeds into context, a basic video conferencing link requires 128Kbits/s each way, which uses up this latter group’s entire upload bandwidth.
In the context of cloud, the conclusion is stark. We can talk all we like about home working and the ability to get stuff done in coffee shops (the Wi-Fi in many of which will be dependent on a home broadband connection). But as with any other aspects of technology, cloud services need to ‘just work’, even as they are becoming increasingly bandwidth-hungry.
There’s a very simple factor at play here – that of lowest common denominator technology adoption. Work in a city-based office where bandwidth is no object, and you would be forgiven for believing that internal IT was no longer necessary and the future lay entirely in the clouds. Perhaps it does – but move outside of such an idyllic world and the varnish quickly dulls. If even one person in a 30-person team cannot access the facilities required to work, the entire team will suffer. So, as a result, facilities tend towards those that work, with the ultimate beneficiaries – end-users – pulling away from those that get in the way.
This isn’t a cynical rant, more a recognition that a threshold exists. Nobody’s really set this so I’ll take a flyer. According to the mathematicians at Zen Internet (LINK: http://www.zen.co.uk/business/online-data-backup/backup-solutions/data-transfer-times.aspx), transferring 1GB of data (about 15 minutes of HD video, for example) at 450Kbits/s would take about 5 hours – which is clearly too slow. Anything under an hour starts to be acceptable, i.e. about 2Mbits/s upload, or 20Mbits/s download. So, my thesis is that, for the full range of cloud services to be usable for home working, home broadband speeds need to reach 20Mbits/s for a significant proportion of this group of users.
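The back-of-the-envelope sums behind that 20Mbit/s threshold, in code; decimal units throughout, matching the rough figures above.

```python
# Rough transfer-time arithmetic behind the 'acceptability' threshold above.
def hours_to_transfer(gigabytes: float, link_kbit_s: float) -> float:
    bits = gigabytes * 8 * 1_000_000   # 1 GB taken as 8,000,000 kilobits (decimal units)
    return bits / link_kbit_s / 3600

print(f"1 GB at 450 kbit/s: {hours_to_transfer(1, 450):.1f} hours")    # roughly 5 hours
print(f"1 GB at 2 Mbit/s:   {hours_to_transfer(1, 2_000):.1f} hours")  # about an hour
print(f"1 GB at 20 Mbit/s:  {hours_to_transfer(1, 20_000) * 60:.0f} minutes")
```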
Which, coincidentally, is about the speed the UK government is pitching as being ‘superfast broadband’ (LINK: http://www.culture.gov.uk/what_we_do/telecommunications_and_online/8129.aspx#superfast_broadband) – they’re saying 24Mbps, but it’s close enough. The plan is to roll this out by 2015 across the UK, but signs are this deadline is already slipping. My money’s on five years before the threshold is passed for the necessary proportion of the population. That is, to be clear, five years before many organisations can come to rely on remotely accessible broadband speeds for the range of cloud services they may want their employees to use.
Perhaps things will speed up; perhaps some innovative soul will come up with a way to use the network of copper cables criss-crossing the land in a more efficient way (for example, off the top of my head, by moving to a symmetric model above speeds of 10Mbits/s – you heard it here first). In the meantime, we shall all be forced to compromise, cloud services will disappoint compared to their promise and more gung-ho organisations may regret rushing too quickly toward an online-only approach. The UK government is already under the cosh (LINK: http://www.bbc.co.uk/news/uk-politics-19057875) to start delivering on its broadband strategy. If it really is interested in increasing GDP, the sooner it can bring the majority past a ‘minimum necessary’ threshold, the better.
10-24 – Future Of Books
Future Of Books
Cloud Society: The future of books is ‘free’ – after a fashion
Two announcements this month show how Moore’s Law and the cloud are having an impact in the publishing world, and illustrate what might be the shape of things to come as far as e-books are concerned.
The first comes from Berlin-based e-reader company, Txtr. The device vendor announced the Beagle a few days ago (LINK: http://gb.txtr.com/beagle/), a no-frills e-reader which will set punters back a paltry 8 pounds. That’s the price one way or another – reports suggest the device will be bundled as part of mobile phone contracts rather than being sold direct.
Meanwhile we have Amazon’s announcement (LINK: http://www.publishersweekly.com/pw/by-topic/digital/content-and-e-books/article/54342-amazon-launches-kindle-lending-library-in-u-k-germany-and-france.html) of the Kindle Owners’ Lending Library, which gives Amazon Prime members access to over 200,000 books. To feed the library with titles, the organisation is looking to self-published authors to enrol in its KDP Select pay-per-borrow programme.
Put the whole lot together and we see a clear trend towards ‘free’, at the point of access at least. The fact that the Beagle can do very little (you need to set options on your phone before transmitting books to the device via Bluetooth) illustrates how the cost of cloud-based endpoints is tending to zero.
And both the Beagle sales channel and Amazon’s library model share the approach of offering added-value services to existing subscribers – either of mobile phone or of retail services. It wouldn’t be a surprise at all if the two came together at some point – the ‘free e-book library with your mobile phone contract’ sounds blindingly obvious.
Of course the Beagle still has to be released, and Amazon continues to come under a fair amount of stick for its relations with publishers and self-publishers. While the business models may still need a tweak, nobody is questioning the continuing fall in e-reader costs and, equally, the online library model looks here to stay.
While this might not spell the death of the paperback just yet, the ability to access any book at any moment for next to nothing is coming ever closer.
November 2012
11-22 – A Little Definition Can Go A Long Way In It
A Little Definition Can Go A Long Way In It
Cloud in Public Sector Europe? If the shoe fits…
A little definition can go a long way in IT, particularly when terms are ambiguous or given multiple meanings. Case in point: a recent article in the New York Times. Titled “European Governments Staying Out of the Cloud,” it explains how Europe is therefore a less promising market than the United States for cloud services.
While there’s plenty of good stuff in the article, its arguments are diluted by the fact it talks about a cloud as a thing, rather than a set of characteristics (LINK http://www.cloudpro.co.uk/cloud-essentials/5006/cloud-society-can-we-really-predict-future-trends-cloud). Rather than reiterating those points, it is worth considering where Cloud is making sense to central and local governments, and where it is less so.
In the UK specifically, the Cabinet Office has two programmes underway – at a lower level, the Public Services Network (PSN) which is aimed at providing a shared networking infrastructure across public organisations. Certified providers are building services on top of this, which are to all intents and purposes ‘cloud-based’.
The second initiative is of course G-Cloud, in which (once again) cloud-based service providers can be selected for inclusion in a catalogue. “We can only see an up-side,” says John Glover, sales and marketing director at online collaboration tools supplier, Kahootz (LINK: http://in.kahootz.com/blog/bid/234436/2-years-on-where-has-the-UK-G-Cloud-got-us). Which sounds pretty positive.
Broadening to continental Europe, two factors are at play. First and for the record, yes, Europeans are notably more reticent than their US counterparts. We old-worlders are less keen to accept things at face value or try things out, which goes a long way to explain why start-ups tend to fare better in the US than over here.
Equally, Europe (despite efforts for better or worse since the war) lacks a common legal framework, particularly in areas such as privacy and contract law. What is lawful in one country may be completely illegal in another, as well as potentially being socially unacceptable.
All the same, public sector Europe is keen on Cloud for all the same reasons as everywhere else. A report from the European Network and Information Security Agency (ENISA) highlights (LINK: http://www.enisa.europa.eu/activities/risk-management/emerging-and-future-risk/deliverables/security-and-resilience-in-governmental-clouds/) ways in which Cloud can reduce risk – for example by increasing reliability, enabling business continuity and delivering levels of physical security that would be lacking in many public organisations.
The same report lists risks including that all-important insider threat, given that data is accessible by a third party. Interestingly, the fear of third parties exploiting data for marketing purposes is not highlighted as a risk: while this is likely a simple oversight, it does suggest that the issue is not as troubling as the New York Times article is making out.
Of course, data confidentiality is being treated with a great deal of respect across the European public sector – not least in healthcare, but here the issue is less about whether organisations want to benefit from third parties delivering services over the Internet, and more whether they can meet local legislation and norms while delivering efficient services to their clients.
From the US standpoint, Europe may occasionally appear slow or behind the curve when it comes to new technology adoption. But in the case of the Cloud, it is a simple question of deciding when, where and how such new technology delivery models apply. Where they can, they will.
2013
Posts from 2013.
January 2013
01-01 – Cloud Society How's Your Cloud Risk Appetite?
Cloud Society How’s Your Cloud Risk Appetite?
Cloud Society: how’s your cloud risk appetite?
It’s a new year and time for a new buzzword but risk appetite could be an important concept to companies
At the recent Business Cloud Summit held in London, I was fortunate enough to host a panel of lawyers.
One doesn’t normally see the terms ‘fortunate’ and ‘lawyers’ in the same sentence, so I thought it was worth elaborating why. As well as the expected topics around contracts, due diligence, data escrow (should things go wrong) and, indeed, involving lawyers in any large-scale outsourcing decision, one term bubbled to the surface: ‘risk appetite’.
To understand why this is so important, we need to revisit where cloud computing fits, or is going to fit, in medium and large organisations across the board. What’s pretty clear (I hope) to everybody by now is that cloud computing isn’t going to replace traditional IT. Even this perspective still treats cloud as a single thing; rather it is a sourcing option for a whole variety of service types, from pay-as-you-go hardware to advanced applications.
With an additional option in the mix, businesses have more choice as to how they are going to do things. Companies are complex, and what’s suitable for one department (hosted applications such as Office 365, say) may not be suitable for another. This reality will no doubt be the cause of many challenges in the future, in terms of integration and interoperability, management and support - or, in other words, the same issues that IT has always faced.
So, what’s different? One thing cloud brings to the party that differs from the past, is a reduced hurdle in terms of procurement. In olden times, the 4-8 week delay between decision and deployment created an artificial barrier which, as a spin-off benefit, meant everyone had time to think. These days, the first time IT managers may have heard of a new SaaS application may be when support gets a call complaining that it isn’t working properly.
And so, to risk, and the appetite for it. Cloud-based apps and services are not without their limitations - to revisit an old adage, “free services are worth what you pay for them” and their terms and conditions may offer little if any recourse if you lose information, as users of email services such as Hotmail and Yahoo have found out in the past. Similar devilry can lie in the detail of pay-per-use hosted services, not just in terms of data protection but also uptime guarantees and support restrictions.
All of these aspects add to the risks of taking a service on. That doesn’t mean that they should be ignored or avoided; rather, that their use needs to be tempered at the moment of decision, in terms of whether or not they offer sufficient guarantees to support the part of the business using them. Are they protecting personal information adequately? Have they safeguards in case of denial of service attack? What happens if their data centre is subject to fire, flood or theft? These questions, traditionally asked of IT, now need to be pitched at the provider. And quickly, before their use becomes entrenched.
This ability to make decisions based on a reasonably slick grasp of the risks is natural to all of us - indeed, we do it every time we cross the road. However, it is not traditionally how IT is done, and processes and procedures may actually slow down or blunt our abilities to respond. As we move into a new year, then, perhaps it’s worth revisiting how our own IT organisations deal with risks of cloud-based service delivery, and asking the question - is the current approach helping or hindering the business? If the latter, the role of IT itself may come into question.
01-01 – Cloud Society Online Porn Think Of The Parents, Not The Children
Cloud Society Online Porn Think Of The Parents, Not The Children
Cloud Society: online porn - think of the parents, not the children
The government has a problem with porn - it doesn’t like it. But that’s no excuse for ill-thought-out solutions
Haven’t we been here before?
Back in 2010, Ed Vaizey was telling anyone who would listen of his plan to block Internet porn, and was given short shrift by ISPs. At the time, they said it was unworkable. “There are many legal, consumer rights and technical issues that would need to be considered before any new web blocking policy was developed,” commented BT.
So, nearly three years later, the policy comes round again - this time in the shape of telling ISPs to check a check box by default. It sounds so simple, doesn’t it? As many have mentioned, mobile operators have had a similar mechanism in place for years. The other, equally “simple” demand is to block certain search terms.
Let’s review: take a look at that fantastically frustrating ‘feature’ from Vodafone, T-Mobile and the rest, which blocks, well, just about everything. It blocks Yahoo, or has been known to - it did for me. It blocks blogs and other such sites which I happen to use in my line of work. Not only that but to unlock them, you have to put in credit card details. Oh no, wait - that doesn’t function, so you have to phone a call centre on a premium support number, at which point you give up. At least, I did.
As for the blocking of search terms, there appears to be a discrepancy in terms of understanding what technology can actually do. Often it is a blunt instrument - as illustrated by the frustrations of the people of Scunthorpe about email filtering. Meanwhile, I was indeed horrified when one of my children typed “X Men” into Google at a very young age and got more than he bargained for. The fact that David Cameron thinks these issues are surmountable with “algorithms” (I wish they could be) demonstrates his (or his advisers’) ignorance.
It is possible to take a stance about erosion of online freedoms, or to side with the idea that the Internet is above scrutiny and the world should just deal with it.
I’m not in either camp; I am a parent who is concerned about child protection. But realistically, if the proposed law goes through, it will add complication without delivering the supposed benefit. Call me a tired old cynic but this looks awfully like one of those times when a politician rails loudly against something to give the impression of action.
Not least in response to the findings of the “Independent Parliamentary Inquiry into Online Child Protection” published in April. It’s worth a read.
However compelling the case it makes, the report presents little understanding of how technology works - as if closing one door in some way makes the problem go away.
Not least the statement: “We also heard that parents are not only concerned about access to internet pornography but also other forms of harmful content including cyber bullying, extreme violence, self- harm, suicide and pro-anorexia websites, while the issue of “sexting” or peer-to-peer sharing of intimate images is also of great concern.”
In other words, it’s not just porn that is the problem. What do we need to stop next? And virtually unmentioned in the findings (though discussed in the evidence) is the role of video. When it does come up, it also covers the topic of salacious pop videos. Is Justin Timberlake making porn now? What about mainstream films portraying sex scenes? What of Game of Thrones?
Even if the opt-out measure comes into force, ISPs do not have the wherewithal to monitor video streams in all their forms and codecs. To get round the restriction, a content provider need only incorporate images in a video file. Or an encrypted powerpoint file. Or, indeed, in any format that they choose to invent overnight, that ISPs don’t monitor for yet. No, that isn’t giving ‘them’ ideas. ‘They’ know it already.
Perhaps most alarming about David Cameron’s lip service proposals is that they make that hugely flawed assumption that technology - and technologists - can make it all better.
The check box was only one of eight recommendations made by the panel, three of which covered provision of better education and clearer guidelines to parents. To whit: “The Panel concluded that while parents should be responsible for monitoring their children’s internet safety, in practice this is not happening as parents lack easy to use content filters, safety education and up–to-date information.”
Yes, parents should be responsible for monitoring their children’s Internet safety. Technology can be complicated - its rapid expansion has created a jungle. But that doesn’t diminish the role of parents in any way. It is one thing (as some have suggested) that parents might become complacent if online filters are established. Quite another would be for parents to say, “Oh, that’s the ISP’s job,” even as content becomes more complex and difficult to monitor.
Here’s the rub. If any parent is worried about their children accessing unsavoury material, they can and should take steps to minimise the risk. Keep the computer in a front room. Install filters. Check out ParentPort or the Safer Internet site. Pay for advice from a local computer person. Ask me - I’ll very happily point concerned people in the right direction.
01-24 – Graph Search With Big Data, Everyone Will Be At It
Graph Search With Big Data, Everyone Will Be At It
Graph Search - with Big Data, everyone will be at it
Since Facebook launched its Graph Search algorithm on January 15th, pundits have been delighted, disappointed and horrified in equal measure. The “delighted” camp have been saying that, at last, the company has something with which to take on Google; more sceptical commentators have noted how the capability will not necessarily deliver on the hype.
Meanwhile however, articles have started to emerge which illustrate the potential for the tool – for good or ill. Such as Actual Facebook Graph Searches (LINK: http://actualfacebookgraphsearches.tumblr.com/), which demonstrates how you can hunt for Tesco employees who like horses, or Italian Catholic mothers who like condoms. The possibilities are endless.
While many pundits (and no doubt their readers) have thrown their hands up in horror, the point that has been missed is, simply, that it’s not just Facebook that can do this. Indeed, the whole premise of Big Data – to be able to use distributed processing and/or high-performance computing to run algorithmic analysis – is that it offers orders of magnitude greater insight than in the past.
It stands to reason. More data is publicly available than ever before, at the same time as organisations are collecting more and more personal data than ever before. A simple, yet compelling example was when I clicked a button saying I didn’t want to install a software product the other day. Had I not had a firewall running I would not have known that even the negative button-click sent a signal back to the vendor, which is right now logged in a database somewhere, indexed against my name, or my machine, or browser id.
They’re all at it – retailers, mobile providers, social networking sites, utilities, central and local government, indeed any organisation that has access to information is trying to get more of it, whether or not they know what to do with it yet. The insights that can be delivered may well be highly useful and beneficial.
Equally, they could be – as illustrated by the Facebook Graph Search, intrusive and potentially damaging. Consider – your mobile company has detailed records of the majority of your movements over the past few years. It knows exactly where you have been, and how fast you got there. There can be no better way of sourcing information about traffic jams than using the mobile network as a sophisticated set of sensors (shame this isn’t done). However, woe betide if you have been travelling significantly over the speed limit.
What’s to do? Even as the need for legislation is debated (and there are no clear answers here), organisations such as the Information Commissioner’s Office are relaxing the rules for ‘anonymised’ information sharing (http://www.symantec.com/connect/blogs/icos-new-guidelines-need-focus-aggregation-not-anonymisation) - in fairness, they have little choice. Meanwhile of course, the same organisation illustrated just how fallible it could be when it modified the cookie law implementation (LINK: http://www.cloudpro.co.uk/cloud-essentials/cloud-security/5137/cloud-society-privacy-law-and-way-cookie-crumbled) and left a hole big enough for a fleet of juggernauts to drive through.
While we can be comforted that we live in a democracy which has the power to create laws to control any significant breaches of privacy or rights, it won’t be long before the data mountains around us can be mined on an industrial scale. The inevitable issues that will derive from this need to be understood and dealt with, and quickly.
March 2013
03-15 – Hadoop – Not The Answer
Hadoop – Not The Answer
Hadoop et al – not the answer, but they help to ask the questions
Few in the industry haven’t heard about Hadoop, that data management platform that lends itself to distributed environments. Rather than storing data in one big bucket, Hadoop enables data to be managed, stored and queried across multiple little buckets – that is, exactly the kind of architecture used by cloud service and application providers.
To say that Hadoop, together with its programming framework MapReduce, have taken the IT world by storm would be an understatement. While the underlying platform is open source, several major vendors including IBM, EMC and Microsoft have embraced and extended the platform as part of their own offerings.
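For anyone who hasn’t met MapReduce, the programming model boils down to the pattern sketched below: a toy, single-machine imitation in Python of the map, shuffle and reduce phases, not Hadoop code itself.

```python
from collections import defaultdict
from itertools import chain

# Toy, single-machine imitation of the MapReduce pattern: map each record to
# (key, value) pairs, group by key, then reduce each group. Hadoop applies the
# same idea, but spread across many 'little buckets' of data and machines.

def map_phase(line):
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(key, values):
    return key, sum(values)

def mapreduce(records):
    grouped = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(r) for r in records):
        grouped[key].append(value)       # the 'shuffle' step
    return dict(reduce_phase(k, v) for k, v in grouped.items())

print(mapreduce(["the cloud is distributed", "the data is distributed too"]))
```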
As distributed data management becomes mainstream, it is taking a number of overlapping paths. In the first, the Hadoop platform is improved, optimised, built upon. We are seeing the emergence of proprietary alternatives such as Greenplum’s Pivotal HD, announced a few weeks ago, and an update to Google’s BigQuery service this month (LINK: https://developers.google.com/bigquery/).
Whatever the relative merits of proprietary vs open source (LINK: ) or pay-as-you-go, service-based solutions, the drive is towards making the data easier to access using, in general, SQL-like commands such as SELECT and JOIN. Perhaps Dr Codd was right after all (http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10759/intro001.htm).
More importantly, a simpler data access mechanism implies lowering the hurdles to finding big things out of the big data pile. We are faced with mountains of information, much of which is growing faster than we know what to do with. Part of the challenge is human – the effort required to construct queries exceeds the amount of time available.
This will not be the end of the story however. Waiting in the wings are algorithms developed over decades, anxious to be unleashed on all that data. Autonomy’s famously Bayesian (LINK: http://www.director.co.uk/MAGAZINE/2011/5_May/mike-lynch_64_09.html) approaches were first conceived in the eighteenth century, and other models and mechanisms have existed for decades.
What we have lacked has been sufficient power to conduct large-scale analysis at reasonable cost – note that tasks that took two months and millions of pounds to run ten years ago now take about two weeks and about £20,000’s worth of processing. You don’t need to be a computer scientist to extrapolate that one.
The ability to write simpler queries is a start, but it still puts the effort onto the analyst who needs to ask the right questions. What happens when Google, EMC, the open source community et al start integrating mathematically proven algorithmics, and adding simple access and query mechanisms to boot? “Here’s a bunch of data, find me all the anomalies and let me know why you think they might exist,” could be one such request. Or, for the more advanced class, “Where should we target our efforts?”
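As a throwaway illustration of the first of those requests, a naive anomaly check can be as simple as the sketch below; the threshold and the sample readings are made up for the example.

```python
from statistics import mean, stdev

# Naive flavour of the 'find me all the anomalies' request described above:
# flag values more than two standard deviations from the mean.
def anomalies(values, threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

readings = [10, 11, 9, 10, 12, 10, 11, 240, 9, 10]  # made-up sample data
print(anomalies(readings))  # -> [240]
```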
This is neither a criticism nor a request, simply an acknowledgement of what the current round of developments in the distributed data space will enable. It’s a no-brainer to suggest that some organisations (vendor and end-user) will be better positioned than others to take advantage of the inevitable. Right now however, the playing field is as level as it will ever be.
April 2013
04-01 – Oh My God 0.2
Oh My God 0.2
Oh My God! We killed the Internet!
“No, Mr. Sullivan, we can’t stop it! There’s never been a worm with that tough a head or that long a tail! It’s building itself, don’t you understand? Already it’s passed a billion bits and it’s still growing. It’s the exact inverse of a phage – whatever it takes in, it adds to itself instead of wiping… Yes, sir! I’m quite aware that a worm of that type is theoretically impossible! But the fact stands, he’s done it, and now it’s so goddamn comprehensive that it can’t be killed. Not short of demolishing the net!”
Tales of the Internet’s demise have existed since “the net” itself existed only in tales – illustrated by this excerpt from John Brunner’s novel The Shockwave Rider, written nearly four decades ago.
Brunner’s story didn’t stop there. In good life-imitating-fiction style, Xerox Research’s John Shoch adopted the term ‘worm’ to describe a program he had created to hunt out unused processor capacity and turn it to more useful purposes.
While such worms occasionally got the better of their creators, it took until 1988 for such a program (known as the Morris Worm – LINK: http://en.wikipedia.org/wiki/Morris_worm) to cause broader disruption, infecting (it is reckoned) one in ten computers on the still-nascent Internet. The worm’s unintended effect was to take up so much processor time, it brought machines to a standstill.
Since these less complicated times a wide variety of malicious software has developed with the intent to exploit weaknesses (software or human) in computer protection. From this nefarious foundation we have seen the emergence of botnets – networks of ‘sleeper’ software running on unsuspecting desktops and servers, which can be woken to launch distributed denial of service (DDOS) attacks.
As botnets have been gradually found out and closed down by the powers that be, their creators have looked to more esoteric ways of running DDOS attacks – such as that which emerged last week – which echoed John Brunner’s prescient words in more ways than one.
What marked the attack on Spamhaus (LINK: http://www.bbc.co.uk/news/technology-21954636) was not only its nature, in that it targeted the Domain Name Service (DNS) – part of the infrastructure underlying the Web – but also the rhetoric used to describe it. “They are targeting every part of the internet infrastructure that they feel can be brought down,” commented Steve Linford, chief executive for Spamhaus.
So, should we really be worried? The jury appears to be out (LINK: http://gigaom.com/2013/03/27/what-you-need-to-know-about-the-worlds-biggest-ddos-attack/) concerning whether the core of the Internet suffered as much as some involved parties might have suggested (LINK: http://blog.cloudflare.com/the-ddos-that-knocked-spamhaus-offline-and-ho). While the London Internet Exchange, LINX, was reputedly affected, it later claimed that the spike in traffic it had experienced was down to a data reporting error. Was it simply a PR stunt (LINK: http://www.forbes.com/sites/timworstall/2013/03/29/links-29-march-was-the-internets-greatest-dos-attack-ever-just-a-pr-stunt/), as some have suggested? My big sister says it’s all rubbish, and she is always right.
There is no doubt that we have seen some pretty big outages across ‘the net’, but these can’t really be put at the doors of the bad guys. Microsoft, Google and Amazon have all experienced downtime in their online services, but these have generally been due to human error or system faults.
Otherwise, despite the Spamhaus episode (or indeed, as demonstrated by it), the Internet has shown itself to be remarkably resilient. Key to its success has been its distributed nature in which, like a version of David Brin’s novel The Postman (LINK: http://www.davidbrin.com/postman.html), the message must always get through.
Indeed, this is the biggest factor in the Internet’s favour. For any cybercriminal to bring down the whole net, they would first have to have access to a single element which tied it all together – even DNS is by its nature distributed, and thus hard to attack.
While such ‘single elements’ do not really exist at the moment, a potential danger is that someone will think it is a good idea to create one. This may not be so far-fetched, if we think of how a Facebook outage quite recently took a number of popular sites (including CNN, Mashable and the Washington Post) down with it (LINK: http://21stcenturywire.com/2013/02/09/facebook-outage-takes-down-gawker-mashable-cnn-and-post-with-it/).
Indeed, providers are trying to get people and businesses to adopt their services all the time – all the big players would like nothing more. As we choose to adopt one service over another, perhaps we should all consider whether we are putting all of our eggs into one basket.
Engineers can make the Internet as robust as they like, but if we all depend on a single service, we not only create a central risk but a magnet for anyone who has aspirations of bringing down the whole thing. Food for thought, not only for the creators of such services but for all of us.
May 2013
05-10 – Cloud First Has To Be A Good Thing
Cloud First Has To Be A Good Thing
Seven ways to make the most of Cloud First
It had to happen, of course. As reported here last week (LINK: http://www.cloudpro.co.uk/cloud-essentials/public-cloud/5561/g-cloud-iii-goes-live-government-confirms-cloud-first-strategy), the Cabinet Office has (finally?) mandated that cloud providers should be given first bite of the cherry.
We’ve also questioned (LINK: http://www.cloudpro.co.uk/cloud-essentials/public-cloud/5564/cloud-first-policy-will-it-really-make-difference) whether the policy can be quite as effective in the short term as our illustrious leaders would like it to be.
Will it work? We hope so, we really do. The potential benefits of cost reduction, improved efficiency and so on could affect taxpayers directly in the wallet, as well as helping improve public services. With this in mind, we offer our top tips on how to ensure the success of Cloud First.
Deal with the physical. “Cloud” is such a nebulous term – well, it is – that it starts to imply that the physical elements of information technology no longer matter. This couldn’t be further from the truth – network bandwidth and latency, storage allocation and proximity to processing, suitability of workloads, all benefit from an understanding of what’s going on under the bonnet.
If we could suggest where to start, it would be to review network sizing and contracts first, based on a forward, realistic view of data transfer needs across all expected services. A lack of network headroom and a restrictive contract could bring the whole thing juddering to a halt.
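As a back-of-the-envelope illustration of that review – and only that, since every figure below is invented – the sums involved are simple enough to sketch in a few lines of Python:

    # Rough headroom check: projected transfer needs vs. contracted capacity.
    # All figures are illustrative, not drawn from any real contract.

    monthly_transfers_gb = {          # expected data movement per service, GB/month
        "off-site backup": 4_000,
        "citizen portal": 1_200,
        "desktop-as-a-service": 800,
    }
    contracted_mbps = 500             # committed rate in the current network contract
    busy_hours_per_month = 22 * 10    # assume transfers squeezed into working hours

    total_gb = sum(monthly_transfers_gb.values())
    required_mbps = (total_gb * 8 * 1_000) / (busy_hours_per_month * 3600)

    print(f"Required sustained rate: {required_mbps:.0f} Mbit/s")
    print(f"Contracted rate:         {contracted_mbps} Mbit/s")
    print("Headroom OK" if required_mbps < 0.7 * contracted_mbps else "Revisit the contract")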
Remember the hybrid. It might be Cloud First but existing systems are not simply going to vanish in a puff of smoke, for a number of reasons. First, it simply won’t be possible for many legacy applications, which were never designed for cloud-based models. The costs of migration in many cases will be greater than the benefits, and indeed, in some cases there will simply be little reason to disrupt an existing system which is running just fine, thank you.
Most, if not all, cloud suppliers acknowledge – and many actively advocate – that some technology capabilities will remain in house for the foreseeable future, resulting in hybrid architectures that incorporate both hosted and in-house systems. From the perspective of the public body, this also means that the IT department is not going away any time soon.
Upgrade skill sets. Building on this point, cloud will not suddenly render in-house IT skills redundant – far from it. However there is another reason why public bodies need to upgrade skill levels – to take into account everything that the cloud makes possible.
The massive amounts of processing and storage now available ‘in the cloud’ create opportunities to develop new applications and services, for example based on massively scalable, ‘NoSQL’ databases, that can have a significant impact on public service delivery. Authorities that understand what cloud enables will be in a better position to spot opportunities to innovate.
Sort out cost attribution. Shared service models of any form – not just cloud – will always make sense in principle, due to the overheads of procurement. Why buy the same thing three times when it can be bought once?
The challenge with pay-per-use type models associated with cloud comes with working out who actually used a service, and then how to ensure the right costs are attributed. While technology can help, the real challenge is having the right roles, responsibilities and processes in place.
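The mechanical side of this is not the hard part. A minimal sketch of the roll-up, with invented usage records and unit rates, might look like the following – the real difficulty lies in agreeing who owns the tags and who answers for the numbers:

    from collections import defaultdict

    usage_records = [
        # (department, metered service, units consumed) - illustrative only
        ("Housing",   "vm-hours",   1_200),
        ("Education", "vm-hours",     300),
        ("Housing",   "storage-gb", 5_000),
        ("Transport", "vm-hours",     450),
        ("Education", "storage-gb", 2_000),
    ]

    unit_rates = {"vm-hours": 0.08, "storage-gb": 0.02}  # pounds per unit, invented

    bill = defaultdict(float)
    for department, service, units in usage_records:
        bill[department] += units * unit_rates[service]

    for department, amount in sorted(bill.items()):
        print(f"{department:<10} £{amount:,.2f}")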
Consider the data risks. The way the cloud handles data is creating a minefield, with challenges including aggregation and its potential impact on personal privacy as well as simply keeping practices aligned with legislation and governance requirements.
We use terms like ‘data sovereignty’ to suggest that as long as we know where data is, we are better able to meet requirements. However this is only part of the story. All public bodies have a responsibility to assess the risks and ensure that the way in which they are handling data is in the public best interest, whether or not current legislation is sufficient (LINK: http://www.cloudpro.co.uk/saas/5330/cloud-society-facebook-analytics-just-tip-iceberg).
Take responsibility for due diligence. Cases like the collapse of hosting business 2e2 illustrate that a listing in the G-Cloud catalogue doesn’t give a supplier some kind of magic wand to protect against business failure. Given the impetus to work with smaller (and potentially less resilient) suppliers, this could be a growing issue.
Of course one solution is to avoid the smaller suppliers, but this will help neither government policy nor the SMB sector. In fact the best option is to retain a level of due diligence in the procurement process, incorporating not just factors such as financial stability, but also taking into account the fact that many smaller service providers are running on, and therefore backed by the service levels of, larger incumbents.
Above all, be the boss. Of course, public bodies and local authorities can simply wait for suppliers to come up with the right answers. The real opportunity cloud presents – as often dressed up in terms such as “time to deployment” – is that new services can be tried, developed, upgraded and replaced with relative ease.
It’s not hard to see the potential chaos that could ensue if an organisation – public or otherwise – is allowed to adopt services at random, without any over-arching strategy. Equally, the benefit of the cloud model to organisations that are looking to become more agile, to innovate, to benefit from new services and even offer services to others, is immense.
The bottom line is that the success of Cloud First is inextricably linked to the public sector’s ability to make the most of it. While this may sound obvious in principle, in practice it requires public organisations of all sizes to internalise that there is more to cloud adoption than picking services from a catalogue. Get this right, and we all stand to benefit.
05-18 – HANA – SAP’s Bridge To The Clouds?
HANA – SAP’s Bridge To The Clouds?
Cloud Society: HANA – SAP’s bridge to the clouds?
SAP announced its HANA enterprise cloud earlier in May. Does it cut the mustard?
The past few years must have been quite a worrying time to be in the enterprise software space. From Cloud Computing being the sort of thing that vendors such as SAP and Oracle wrote off as mere marketing, it’s become a key element of IT strategy for companies of all sizes.
As so often happens in this business, Cloud doesn’t mean a fork-lift replacement of what has gone before, however. Despite the rhetoric from some quarters, the software workhorses of the business such as ERP systems and data warehouses are not going anywhere. Big companies still run their Business Suite applications as single-instance traditional stacks, either in house or on hosted hardware.
This is unsurprising. The highly distributed, ‘elastic’ nature of cloud-based software architectures might offer a suitable basis for the new generations of highly distributed apps we are seeing built today. However it is less well designed (or indeed costed) to support monolithic software stacks, as Forrester’s Stefan Ried notes (LINK: http://blogs.forrester.com/stefan_ried/13-05-07-hana_enterprise_cloud_pro_and_cons): “All SAP products are available on Amazon’s AWS; however, many SAP customers use it just for dev, test, and disaster recovery.”
In other words, we are ending up with both types of application architecture. As a result, not only are new technologies such as cloud services having to learn to play nice with existing technologies – including enterprise applications – it also means the same in reverse.
It’s against this background that SAP invested in what it calls HANA – an in-memory database architecture. The technology allows for some pretty serious number crunching at the same time as offering persistence – the geeky term for assuring that data will still be around if the computer gets switched off, for whatever reason.
For in-house systems, HANA offers a simple up-lift advantage – the equivalent of putting a Porsche engine into a Volvo. No doubt about it, organisations looking to speed up their business apps and analytics functions should probably take a look. SAP is certainly “all-in” – as SAP CEO Bill McDermott stated at this month’s Sapphire conference in Orlando, the company has made HANA its strategic platform for future development.
In-memory databases shouldn’t simply be seen as go-faster stripes on existing technology stacks however. One of the most exciting – and yet challenging – areas of Cloud is the way new architectures are being used to handle complex data sets.
Under the banner of NoSQL, key-value databases such as Redis and Memcached are being put to a wide variety of uses that need large-scale, real-time data management, from social networking to event processing. Meanwhile, Hadoop and MapReduce offer frameworks and tools for manipulating all that lovely data.
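For a flavour of what ‘key-value’ means in practice, here is a minimal sketch using the redis-py client against a local Redis server (both assumptions on my part – the calls shown are standard, but the keys and values are invented):

    import redis  # assumes the redis-py client and a Redis server on localhost

    r = redis.Redis(host="localhost", port=6379)

    # Real-time event counting: one atomic increment per incoming event.
    r.incr("pageviews:2013-05-18")

    # Session-style data that expires automatically after 30 minutes.
    r.set("session:42", "user=jon;basket=3", ex=1800)

    # A sorted set doubling as a constantly updated leaderboard.
    r.zadd("top-queries", {"cloud first": 17, "hana": 9})
    print(r.zrevrange("top-queries", 0, 4, withscores=True))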
Given its “wow, that really is fast” data crunching credentials, HANA provides a bridge to such applications, as it can keep up with the information feeds they provide. Compelling in the extreme – as the bridge also crosses the divide between the way customers can be reached these days, and the way business has traditionally been done.
Building on these notions, SAP recently announced (LINK: http://www.saphana.com/community/blogs/blog/2013/05/07/sunshine-on-a-cloudy-day) that it will offer HANA as a cloud-based service, upon which it will host its Business Suite and Business Warehouse customers as a managed service. “This is an addition to our strong on premise offerings and is significant in offering an additional choice of deployment to our customers,” commented SAP’s lead technologist Vishal Sikka.
The joint imperatives of building on a data platform such as HANA and shifting to a managed service model – together referred to as HANA Enterprise Cloud – mark a pretty bold move for SAP. Not least in that some insiders are apparently having kittens (particularly those whose quarterly bonuses depend on license sales).
In addition, they move the core of the company closer to the cloud action. It may be, at some point in the future, that traditional ERP models no longer offer the best technological basis for running a business – they could be superseded by exactly the kinds of distributed, real time platforms we are already seeing achieve such success in other areas.
With HANA Enterprise Cloud however, SAP has created a conduit to whatever innovations take place. To avoid becoming an also-ran, it is of course up to the company to deliver some of that innovation from its own stable. At least, however, it is still in the race.
05-30 – BBC Digital
BBC Digital
BBC’s digital media fiasco – would the cloud have been a safer bet?
On the surface, the BBC’s Digital Media Initiative sounded so simple. The plan was to create a digital archive for raw media footage – audio and video – so that it could be accessed directly from the desktops of editors and production staff. This could save up to 2.5% of production costs, suggested the original plan, saving millions of pounds per year in principle.
In practice, the “ambitious” project got out of control. It was originally outsourced to Siemens in 2008 but was brought back in house in 2010. Two years later, in October 2012 the BBC Trust halted the project and kicked off an internal review. And last week, the corporation’s Director General, Tony Hall announced he had canned the whole thing. (LINK: http://www.bbc.co.uk/news/entertainment-arts-22651126)
In the event, the project cost £98.4 million over the period 2010–2012. The Telegraph’s Willard Foxton suggests that the previous costs with Siemens ran at around £150 million. (LINK: http://blogs.telegraph.co.uk/technology/willardfoxton2/100009142/we-may-never-know-the-true-cost-of-the-bbcs-latest-disaster-but-itll-be-a-lot-more-than-100-million/) While a proportion of that money may have been clawed back, even the way this was done looks grubby in that it was partly funded by ‘efficiency savings’. (LINK: http://www.publications.parliament.uk/pa/cm201011/cmselect/cmpubacc/808/80805.htm)
An external inquiry has been launched (which involves spending money on PWC) as well as another inquiry – costing even more money – planned by the Public Accounts Committee. John Linwood, the BBC’s Chief Technology Officer, has been suspended on full pay.
In all, the whole situation has been a calamity, a tragedy and a fantastic waste of taxpayers’ money. At this juncture it would be far, far too trite to ask whether things would be different if the Beeb could have benefited from Cloud Computing, wouldn’t it? Surely an unfair question, given that five years ago cloud models were still in their relative infancy?
While big-budget IT projects may still have been the default course in 2008, by the time the project first hit the ropes in 2010 the potential of the cloud was much clearer. Elastic, scalable resources, paying only for what you need, accessible anywhere – the core features of cloud would seem to be tailor-made for the needs of broadcast media management.
The cloud model does work for broadcasting – as NBC found last year, when it streamed 70 live feeds of Olympic footage for direct editing by staff based in New York. Netflix has built a global business on it. Meanwhile, at the time of Margaret Thatcher’s funeral, the Beeb was forced to transfer videotapes by hand using that well-known high speed network – the London Underground.
We could give the BBC the benefit of the doubt and suggest that the cloud might not have been a valid choice for the BBC’s videotape archiving and access needs back in 2010. However, according to a National Audit Office report, “Although it took the Program technology development in-house, the BBC did not test whether that was the best option.” In other words, nobody bothered to look.
In purely common sense terms this move was particularly dumb. To expect a technology specification to still be valid two years after it was conceived suggests a shocking failure of judgement. To consider its validity extending a further two to three years into the future, particularly given the amount of change going on at the time, beggars belief.
We are where we are, however. Of course – as has been written about extensively – cloud computing isn’t some magic bullet. In the broadcast scenario it would appear to hold many advantages over in-house systems, however. Not least that its elastic nature would allow for piloting of a new platform before broader roll-out. Even if it had failed, it would have done so at a much smaller cost.
Indeed, even if the project had bet on the cloud and still failed as spectacularly, the BBC would not now be left with a pile of near-useless hardware and software. The millions of license fee payers who have seen their annual fees thrown into the abyss might hope that the now-redundant technology might have some value on eBay. According to the BBC Trust’s Anthony Fry, it is good for little else. (LINK: http://www.bbc.co.uk/bbctrust/news/press_releases/2013/dmi_letter.html)
Will the BBC at least consider the cloud as an option going forward? This is not yet clear. James Purnell, BBC’s newly-appointed director of strategy and digital has stated that, “In the future we are going to rely far more on off-the-shelf technology.” Which is a start.
Bandwidth may be considered the main obstacle, suggested the Beeb’s Chief Enterprise Architect, Harry Strover in November last year. “One file could take up to a couple of days to move into the Cloud,” he commented. This shouldn’t really be the sticking point – with today’s network architectures, the same bandwidth is available to both cross-site and Internet connections, at much the same cost.
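The arithmetic is worth doing, if only to show how quickly the ‘couple of days’ claim melts away as line rates rise. A minimal sketch, with illustrative file sizes and an assumed 70 per cent usable throughput:

    def transfer_hours(file_gb: float, line_mbps: float, efficiency: float = 0.7) -> float:
        """Hours to move file_gb gigabytes over a line_mbps link, assuming only
        `efficiency` of the nominal rate is usable in practice."""
        bits = file_gb * 8 * 1_000_000_000
        usable_bps = line_mbps * 1_000_000 * efficiency
        return bits / usable_bps / 3600

    for rate in (100, 1_000, 10_000):  # Mbit/s
        print(f"1 TB of raw footage at {rate:>6} Mbit/s: {transfer_hours(1_000, rate):6.1f} hours")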
Over the past few years, we have seen much written about the potential for cloud failure – no technology approach is perfect. But even while we are doing so it is worth sparing a thought for the alternative. As illustrated once again in very stark terms, the notion that building an in-house system is in any way less risky should be quite firmly put to bed.
June 2013
06-21 – DPA Hiding
DPA Hiding
Cloud Society: Is the Data Protection Act the silver bullet for cover-ups?
The tragic saga of an alleged cover up (LINK: http://www.bbc.co.uk/news/health-22997705 ) at a hospital in Morecambe Bay has brought the Data Protection Act (DPA) into stark focus. That the Act itself was misused in this incident is quite clear, as illustrated by the Information Commissioner’s own statement on the matter. “What appeared to be going on yesterday was a sort of general duck-out,” he remarked.
This is far from being the first time, however. The Act has been used repeatedly to hide information or give institutions a get-out-of-jail-free card, in examples ranging from the complex to the simply ridiculous.
In December 2012 for example, Durham County Council cited the Data Protection Act in court to object to providing the names of residents in a care home, who may have been able to provide witness evidence in an abuse case. The objection was overturned on appeal. (LINK: http://www.qualitysolicitors.com/abneygarsden/news/2012/12/abuse-lawyers-win-ground-breaking-case-in-local-authority-cover-up)
Clearly in this case there was a need to balance the privacy considerations of care home residents with the need to ensure a fair trial. Equally however, the local council involved was seen as trying to cover up allegations of abuse by citing the DPA.
Some other examples are, quite simply, laughable. Consider for example how in 2010, the ICO had to explain (LINK: http://www.ico.org.uk/upload/documents/library/data_protection/practical_application/taking_photos.pdf) that it was not illegal for parents to take photos at school sports days, as – quite simply – the law was “unlikely to apply”.
And right back in 2001, the then Ministry of Agriculture, Fisheries and Food was castigated for not providing data about numbers of livestock slaughtered following the Foot and Mouth disease outbreak. (LINK: http://www.telegraph.co.uk/news/uknews/1331279/Maff-hiding-behind-law-to-conceal-true-picture.html)
Outside of government cover-ups, consumers and citizens are faced with repeated examples of DPA misuse when they have to engage with call centres. “Sorry, I can’t tell you that, Data Protection,” goes the mantra. This can lead to absurd interactions around consent, which can be used as a reason to hide a multitude of shortfalls in service.
In a recent, anecdotal but very real example, a person was told by one part of an institution that it could not access records held elsewhere in the institution. The need for a signed consent form wasn’t the issue – more the fact that the question hadn’t even come up for over a year, and now it was being used as a reason why no action had been taken.
It does all beg the question – has anyone actually read the DPA? Or are people – even at the highest levels – simply reading the term “data protection” and applying to it whatever meaning they see fit?
In a nutshell, the DPA refers to data handling and management processes – largely to do with computer systems, though this is not an obligation. The principles are here (LINK: http://www.ico.org.uk/for_organisations/data_protection/the_guide/the_principles). The main focus of its eight principles is:
- Does the organisation actually have a need to store the information?
- Is it only storing the information it needs to do the job?
- If so, is it keeping it safe, accurate and up to date?
Perhaps the greyest area around the Act is Principle 6, that “Personal data shall be processed in accordance with the rights of data subjects under this Act.” It is here that an obligation to assure a person’s privacy can be seen to apply, particularly if holding or processing information might cause the person damage or distress.
It is the ultimate irony however, that more damage and distress seems to have been caused by misuse of the DPA to cover up or withhold information. Quite clearly, we need more than simple ICO opinion on this matter. If the privacy rights of those in powerful positions continue to be seen as more important than the rights of the people they are supposed to be looking after, we are in a sorry state indeed.
July 2013
07-12 – Why Can’t Our Financial Institutions Build Clouds
Why Can’t Our Financial Institutions Build Clouds
Cloud Society: What’s preventing our financial institutions from building IaaS clouds for themselves?
At an industry event a couple of days ago, I bumped into an old friend who happens to be an active and experienced technology architect (his title is CTO, no less) working for a major investment bank. We discussed an interesting question: why aren’t top finance houses building their own Infrastructure as a Service clouds?
Now there’s cloud and there’s cloud, of course. We weren’t just talking about using virtualisation on commodity hardware for internal systems – plenty of that is going on already. Rather, the conversation turned to building what the likes of Amazon, Microsoft, Facebook and Google have been able to build for years – that is, fully dynamic, distributed infrastructures that can be turned to whatever use is thrown at them.
Behind the question lies an important constraint – that externally provided IaaS platforms can’t be used for the core functions of investment banks, for a variety of convoluted but important governance reasons. So if banks want to build on similar platforms, goes the argument, they will have to create them themselves.
So, what’s stopping them? The answer, it would appear, is the conjunction of scale and complexity. To ‘do’ IaaS in any useful way requires creating a certain size of infrastructure - but at the same time, organisations have to take into account existing infrastructure and applications.
One thing banks have been very good at traditionally is creating new IT systems. Back in the glory days, I remember being told, it was simply easier to build a new system than try to adapt an old one. While those days may now be over (at least for the time being), the result is a smorgasbord of systems, an unmeasured quantity of which are mission critical.
The challenge for IT in the finance sector has always been for management to keep up with delivery. As we discussed, one of the biggest issues faced in finance organisations today is asset management. To paraphrase another old friend and colleague, analyst Tony Lock of Freeform Dynamics, “The first problem is that people don’t know what IT systems and services they have.”
This lack of knowledge derives from a historical lack of adequate process. Asset management isn’t the only problem for banks: in IT terms, there are systems to migrate, data to classify, DNS rules and firewalls to configure, the list goes on. And then the sheer numbers of people who need to be consulted along the way, each with their own strategic goals and even personal ambitions.
When we look at the operations of the ultra-efficient data centres employed by cloud providers, we see a zealot’s attitude to IT management – everything is controlled, managed, labelled to the nth degree. In a way, Amazon got lucky (LINK: http://itknowledgeexchange.techtarget.com/cloud-computing/amazons-early-efforts-at-cloud-computing-partly-accidental/) as it implemented its elastic cloud infrastructure relatively early in its existence.
Given that such an approach isn’t so straightforward for decades-old institutions, what’s the answer? Perhaps it would be best to create an entirely independent infrastructure. That would require a massive amount of investment of course – as banks struggle to respond to the new capital requirements imposed by Basel III, funding looks increasingly unlikely. Indeed, exceptions such as Deutsche Bank and National Australia Bank (LINK: http://www.americanbanker.com/issues/177_117/public-cloud-or-private-banks-map-a-path-towards-both-1050188-1.html) only prove the rule.
So, why do finance organisations not build clouds? Despite their ability to build what were seen as some of the most powerful computer systems on the planet, the answer, quite simply, is that they can’t.
2014
Posts from 2014.
January 2014
01-01 – Cloud Society Internet Of Things Do Sweat The Small Stuff
Cloud Society Internet Of Things Do Sweat The Small Stuff
Cloud Society: Internet of Things - do sweat the small stuff
The so-called Internet of Things could be with us today - if we want to do the easy things first
As I headed back from the Cisco-underwritten Internet of Things World Forum event held in Barcelona this week, I was struck by some incongruities between what was being presented as the vision, and what is happening on the ground.
When IoT was being discussed in principle, it was positioned as an object in its own right, a solution to an existing problem, a lofty destination to aim for. Top-to-bottom architecture diagrams were drawn, interspersed with pictures of the sun rising over a city skyline and an uplifting soundtrack.
Whatever this IoT thing is, suggested the presentations, it would be huge. The term “Smart City” was used many times, implying visions of a plexiglass-domed Megacity One in which everything, down to the paving slabs and the hinges on doors, would be part of an exciting, citizen-friendly, safe and sustainable whole.
Yet, from this starting point, the conversation would turn to specific examples which tended to be on a much smaller scale. And quite rightly too, might I suggest.
Don’t get me wrong - I’m all for the Internet of Things. I wrote a report about it after all, and I am in the middle of writing another. But I find myself wondering about the old adage, “If you want to get there, don’t start from here.”
Frequently the debate would turn to the kinds of issues that such an all-encompassing vision of things might cause. Privacy for example - and indeed, Big Brother was suggested to be alive and well in the Metropolis. Equally, challenges of integration, of management maturity, of securing such a complex beast were hotly debated.
While the reality might be much more mundane, I would suggest that it is still just as exciting. A far better starting point for thinking about the Internet of Things is not as a tangible destination, but as a way of enhancing what we already have. Rather than the starting point of “How can IoT be applied to healthcare”, which results in discussions of smart hospitals, of patient privacy and so on, what about – “What would be the benefits of connecting monitoring devices that already exist, and augmenting some physical objects with sensors?”
I’m speaking from a little bit of experience as I’ve seen a similar approach being adopted before, back in my consulting days. When centralised network management was still emerging, network elements - routers, switches, terminal servers and so on - were largely controlled locally, for example via a console connected by a serial cable.
The fun we had dredging through component catalogues to enable boxes to be wired together, to screen scrape, to add capabilities and update firmware so that all such devices could be brought under central control – I can’t tell you the half of it. Most importantly, the benefits were huge, for example in terms of reduced time spent travelling between network elements, and a centralised hub enabled decision making to happen faster.
With IoT we’re seeing similar, but now we have the ability to process events far more cleverly, using modern derivatives of log management software such as Splunk. The base principle is the same however - start joining things up and reap the rewards. Only this time we can apply the same principles not only to the machines that go ‘ping’ but also to trolleys and wheelchairs, crutches and meals. And cups of tea - let’s move stories of cups of tea going cold because the patient couldn’t even reach them into the past.
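The principle really is that simple. A minimal sketch of ‘joining things up’ – with invented device names and thresholds – is little more than a loop over incoming readings:

    from datetime import datetime

    THRESHOLDS = {"ward-fridge": (2.0, 8.0), "tea-trolley": (60.0, 85.0)}  # degrees C, min/max

    def check(reading: dict) -> None:
        low, high = THRESHOLDS[reading["device"]]
        if not (low <= reading["value"] <= high):
            print(f"{datetime.now():%H:%M} ALERT {reading['device']}: "
                  f"{reading['value']}C outside {low}-{high}C")

    incoming = [
        {"device": "ward-fridge", "value": 9.5},
        {"device": "tea-trolley", "value": 41.0},   # that cup of tea has gone cold
        {"device": "ward-fridge", "value": 4.0},
    ]

    for reading in incoming:
        check(reading)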
The power of the Internet of Things is right at our fingertips. We don’t have to imagine use cases - they’re right in front of us. Forget digging up roads - every plug connecting every appliance should be a smart plug, centrally controllable. Every heater should be a smart heater. Every car should be able to broadcast its location to its owner, every concert ticket should be impossible to forge, every tractor building a nutrient map of the field. All such examples can be explored without the need to consider, say, privacy implications.
For sure, the Internet of Things is about far more than simple remote control – but it’s a good place to start. I have said that the ‘true’ Internet of Things is predicated on three pillars: smart devices, low-latency networks and a scalable processing and analytics back end. On top of these we will build all manner of clever services – that’s where real innovation will happen. Right now, one pillar is looking decidedly weak compared to the other two, however. Many of our devices simply aren’t yet smart enough: the only thing preventing them being so is cost, and this is falling quickly.
So, here’s my top tip. For sure, enjoy the visionary take on IoT, and if you have a spare few million quid to spend on a new factory or office building, by all means kit it out with all manner of clever-geekery - a widening pool of vendors will be only too pleased to help. Keep going with the municipal pilots which can demonstrate the potential. But if, like the majority, you find these too high a mountain to climb, simply look around you and think about what you could do better with some sensors, some remote switches, some clever apps.
Perhaps one day we will all be living in smart cities but right now, already, we are seeing a groundswell of smart, as prices fall and capabilities grow. It doesn’t cost much to participate and the benefits can be immediate. Even as we look to the future, let’s not forget the benefits of IoT can be found in the here and now.
2016
Posts from 2016.
September 2016
09-29 – Big Data Magic
Big Data Magic
Looking for the magical power of computers to harness data? You may wait a long time!
Amidst all this evangelism and hype (together with pop-star examples of startups taking the world by storm), it’s sometimes worth assessing how things actually are, and why they are as they are. I am doing so currently, following a day’s session getting the latest from HPE Software on its strategy and approach for Big Data and Information Management. In HPE’s world, this means how it deals with structured data analytics and unstructured data management respectively, with overlaps in between.
Now, I’ve been monitoring the impact of technology for fifteen years, having spent a similar period working in IT. Call me a kool-aid drinker but I’m left with an overwhelming feeling that it really has had a profound effect. At the same time however, some things have stayed exactly the same. We have seen companies come from nowhere, take the world by storm and then abruptly vanish. For every Kodak there is an AltaVista, for every Blockbuster a Digital Equipment Corporation. And even as we are wowed by the Amazons and Übers, nobody knows which will still be around in 10 years, with the rest no doubt acquired out of existence.
Truth and fiction in big data analytics
To wit: big data analytics, machine learning, artificial intelligence and all that clever stuff that’s going to rock our worlds, if it isn’t already. According to the rhetoric, we’re heading towards a moment in which decisions will be automated out of existence through the use of smart algorithms. But just how true is this? Of course someone has to present such a singular (if you’ll forgive the pun) view of the future. It gives everyone else something to triangulate against, as do any luddite positions or disaster scenarios.
As with any set of polarised perspectives, logic would suggest that the answer lies ‘somewhere in the middle’, which is where prediction gets a whole lot harder. Part of the challenge lies in the fact that we are seeing changes not only in how IT is delivered but also in the kinds of business that result. We will always need manufacturing, power generation, transportation, healthcare, haircuts and manicures and a whole bunch of other industries, products and services. But rare is the industry that is not worried about the effects of so-called ‘digital transformation’ right now.
Insurers are concerned about the threat of data-oriented companies to their core underwriting business; retailers are pushed to the edge by online-only companies; banks face the buzz of fintechs that exploit their core services to deliver far better customer experiences. Meanwhile, utilities are losing the fight to control the increasingly smart home, and traditional car companies seem only to flounder in the face of a smarter generation of vehicle manufacturers.
Where’s the truth? Should established companies seek to get more out of their vast pools of data, or would such exercises amount to fiddling while entire industries burn? These are tricky questions, and nobody has a monopoly on the answers. HPE Software is taking a pragmatic view: it believes at least part of the response lies in what the company is calling ‘augmented intelligence’, which is as much a manifesto as a glib marketing phrase. It is our intelligence being augmented, you see – technology exists to serve our needs as (therefore) smarter beings, who can then build upon the insights they are offered.
It’s about the information (and not the data), stupid
To understand better where things are going, I believe we can start from a reasonably solid foundation – that all companies are information companies. Indeed, they always were, ever since Joe the blacksmith developed his skills and knowledge about what shoe to put on what horse, and Freda learned how to discern a good from a bad payer.
Over recent decades, we have been generating data like it is going out of fashion, but so much of it is preventing us from being actually informed. We spend person-years of corporate time and millions of dollars trying to pull together disparate data sources, in the hope we might unlock the value that lies within. All companies thrive or survive based on the quality of the information they maintain about their customers, their back-office processes, their finances and supply chains. And therein lies the challenge, as this data-rich world is also and increasingly information-poor.
Speaking of Kodak, the digital camera analogy isn’t bad. We have gone from taking 24 carefully planned shots of a two-week holiday to snapping hundreds, or even thousands of photos, which we either painstakingly file over many hours or leave languishing on hard drives. It’s the same for insurers or retailers looking for the elusive single customer view. “If you can’t measure, you can’t manage” goes the adage, and rare is the company today that can successfully measure. If it can, chances are any such metrics will quickly be out of date.
Perhaps this issue will remain unresolved, at least for as long as we see generating increasing amounts of data as good, or inevitable. At the same time however, we can discern the characteristics of the ‘better-informed’ organisation. First, given the deluge of data faced by any organisation (large or small), just being able to make sense of it is already a good start. Success in this area amounts to basic hygiene factors, delivering capabilities in data integration, quality and structure. A failure in this area possibly also amounts to a breach of regulation, so it is where much attention is focused.
Second comes the ability to drive the organisation with whatever the information is saying – not just understanding the data, but also being able to conduct more detailed analytics and start to make more predictive decisions. It strikes me that this is where many organisations are struggling with the wrong mindset, as they see information as something ‘out there’ which should be consulted on occasion, like the oracle (sic) up the mountain. The fact is, however, that the oracle has come down the mountain, available for consultation at any point.
This gives us an additional hygiene factor, based on a choice: do you use data in your decision making, or do you still hit and hope, based on what you believe might work? As HPE ‘Distinguished Technologist’ Chuck Bear noted, “Look at two gaming companies with the same idea: the one that does A/B testing will get more market share out of it.” All companies have a choice – to make decisions based on the information they have freely available, or to increase the risk of doing the wrong thing. Organisations really can be their own worst enemies.
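For the curious, the A/B comparison Bear describes needs no exotic machinery. A minimal sketch with invented visitor counts – the only statistics being a pooled standard error and a rough rule of thumb on the z-score:

    from math import sqrt

    def conversion_gap(conv_a: int, n_a: int, conv_b: int, n_b: int):
        """Difference in conversion rate between variants, plus an approximate z-score."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return p_b - p_a, (p_b - p_a) / se

    gap, z = conversion_gap(conv_a=480, n_a=10_000, conv_b=552, n_b=10_000)
    print(f"Variant B lifts conversion by {gap:.2%} (z = {z:.1f})")
    # A z-score above roughly 2 suggests the lift is unlikely to be noise.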
The third area is then to use information to learn and improve, to change and become more dynamic. Information is not static but is constantly changing, meaning yesterday’s insights could well be inaccurate or, indeed, wrongly framed. In healthcare for example, no doubt it made sense at one point to measure ‘bed occupation’ as an indication of utilisation – what it failed to take into account was the fact that as soon as the measure was used, it became skewed due to the changing behaviours caused by its measurement. For different reasons, the same consequence holds true for all industries, remarked HPE Software’s marketing VP Jeff Veis: “Most metrics are backward looking - we often see companies creating dashboards that put them out of business.”
To really mess things up will always need people
To be able to benefit from information, therefore, requires a level of human savvy that doesn’t look like it is going to go away: to put it bluntly, we can be remarkably thick when it comes to how we use information, and no amount of algorithmic automation is going to change that. Does this mean all existing businesses are doomed, and that start-ups will mop up? Not necessarily: even the most disruptive startups are going for lower-hanging fruit, exploiting the fact that big business, bizarrely, can’t engage with its customers in any new way without spending aeons of time in meetings about it. Yes, it’s dumb, but that’s where we are.
Equally, while some such ripe pickings may generate such huge revenues that they can create global businesses out of a relatively tiny investment, they are a symptom of the times. Because of the exponential nature of complexity, the higher fruit (and to switch food-oriented analogies, the bread and butter of big business) may be way further up than such examples suggest. What this means in consequence is that people, not computers, will remain a significant factor long after technology has commoditised – and that hygiene factors in how we use information will differentiate successful organisations from the less successful. To be clear, when we all have the same tools, the least dumb will win.
Perhaps one day computers will exist that can make absolute sense of the pools of data we continue to generate. And yes, machine learning and even machine deciding will increasingly come into the picture. But, as Chuck Bear noted, “There’s no magic algorithm that given garbage data, will give magical insights.” And we have plenty of the former right now, with more coming on stream all the time. “It’s very easy to get the simple stuff wrong, that will be true five years from now and in 500 years from now.”
So, yes, the shorter term will be more about augmenting our intelligence than replacing it, offering situational awareness and empowering people to make the right choices – there’s just too much in the mix for things to be otherwise. We are constantly processing more information than we ask machines to do, and we will continue to do so, even as we can stand on the shoulders of such giants. To think otherwise makes the worst assumption of all: not that computers can become as intelligent as us, but that they can prevent us from being stupid. That really would require some magical algorithm.
Freeform Dynamics
Posts published in Freeform Dynamics.
2006
Posts from 2006.
October 2006
10-03 – Aligning IT security with the business
Aligning IT security with the business
Outside in or inside out, threats have to be protected against in both business and IT. Organisations today are looking to adopt more holistic approaches to balance IT security delivery against the risk management needs of the organisation, explains Jon Collins.
Where next for the security industry? The more cynical of pundits (and I include myself in this) have referred to it as an industry of fire extinguisher suppliers. No! Unfair! Or is it? While this glib titling may have an element of truth, we have to see this in the context of information technology at large, and most importantly, of how organisations procure their IT assets.
The fact of the matter (and this has been proven many times over in our industry research) is that organisations tend to buy technology in a number of quite defined ways. At one end of the scale we have major infrastructure investments, such as the construction of a new data centre, or the acquisition of an enterprise application like SAP. Such high-visibility procurements require the full support of the company board, and indeed, need to be run as change programmes in their own right as numerous users and data sources are migrated to the new platforms.
At the other end of the scale we have more tactical investments. While IT budgets may be created for the year, money will tend to be allocated on a quarterly basis. As with charities, there are plenty of worthy ways that the money could be spent, each of which will undoubtedly benefit the people at the receiving end – the business end-users of technology. But equally, there is only so much money to go around, so some projects go unfunded – generally the ones that are harder to justify. Quite frequently, it is in this tactical area that we find security purchasing: is it any wonder then that, when we ask what organisations already have in place, we see the simpler stuff (antivirus protection, VPNs and the like) at the top of the list, and the more complex much lower down? Put bluntly, organisations treat security acquisitions like fighting fires, so should we be surprised that their actions are partially to blame for spawning an industry of fire extinguisher suppliers?
The good news is that there are signs this attitude and approach is changing. Many companies and public bodies are starting to see security within the broader context of risk, and this is having a positive impact on both attitudes to, and procurement of, security-related technologies. This sea change is for a variety of reasons: not least, the US-led compliance wave, which has grown into the broader “governance, risk and compliance” movement visible in all parts of the globe, particularly but not only in the Financial and Public sectors. As organisations become more savvy about business risk, they are recognising that IT is a two-pronged fork: not only can it be the cause of many risks that ultimately impact the business, but also, it is the source of a certain set of tools that can help organisations mitigate risks across the board.
The result, of course, is that IT security is being taken far more seriously by many companies. This does not mean that the wallet for security products has been prised firmly open, however. Traditionally, IT security has dwelt upon issues that make the best headlines: such incidents have generally come from the outside and, more likely than not, IT security companies and their PR representatives have leapt on them as examples of exactly what their products can protect against. Meanwhile however, we have seen from our research that the potential for some kind of misdemeanour coming from an insider is just as great. We know (largely from anecdotal evidence and extrapolation) that companies are loath to tell the world when they have suffered an internal security breach, which means that story-hungry journalists are not generally aware when such things happen (indeed, in certain US states, it is now illegal to keep mum – a fact that security companies have been quick to capitalise upon). While such incidents may not be discussed, this does not diminish their importance.
The insider threat does not have to be some kind of corrupt or malcontented employee determined to bring his organisation to its knees. While this can happen, far more likely is that something will go wrong due to an error of judgement, or a bunch of fresher staff members goading each other on, or an accidental eavesdropping that leads to a confidentiality breach. There are plenty of potential examples – but it is perhaps easiest to think back to one’s own experiences – of leaving a mobile phone on a park bench say, or accidentally deleting that important file. Such issues, writ large, demonstrate the real nature of the threat that the organisation’s own employees – that is, thee and me – pose to the business. Because it can boil down to mistakes that are made in the course of doing one’s normal work, clearly, it becomes difficult to defend against them. How can one easily guard against an employee deleting data, when said employee has every right and privilege to do so?
Equally clearly then, the expanded nature of risk to cover both illicit outsiders and unfortunate staffers means we can no longer think about solutions in technological terms alone. We are seeing success where organisations are taking the broader view, not only of the risks but also of the protections – setting workable policies for acceptable computer use, say, or ensuring that mobile workers are also given security awareness training when provided with corporate kit. Yes, absolutely, security technologies have a part to play, but only within this wider context, and not diminishing or replacing it. To many of us this means breaking the habits of a lifetime. We are all (particularly in IT) attracted by the latest gadget or software package, but we must resist the temptation to see such baubles as “the answer” in its entirety.
Even if we manage to adopt the broader view, we are not out of the woods. Security – or to use a better term, risk management – is not something that can be done only once and then left. This is for a number of reasons, not least that organisational behaviours are changing almost as fast as the technologies supporting them. Simple changes can completely blow a set of security policies out of the water. The lowly USB stick for example, is both a huge asset to individuals wanting to share files, but at the same time poses an equally large risk, having reached a size where they could hold a company’s entire customer database several times over. Or consider blogs, which are a thoroughly useful collaboration mechanism, but can equally provide a mechanism to thrust corporate secrets into the public domain. The same dichotomies can be found in the business at large, for example as organisations try to achieve the benefits of managing multiple outsourcing suppliers or allowing flexible home working. Such establishments may well be looking to open up their corporate boundaries, not lock them down – but all the while, they will need to manage the risks inherent in doing so.
So, what’s the answer? Ownership is key – and that stretches from the individual acknowledgement that people are both part of the problem and part of the solution, right up to ensuring there is somebody in an appropriately senior position to take responsibility for all risks, both business and IT. Some organisations are already achieving this, and many others are looking to see how they might develop such a role, with such an authority, in a way that works for their own businesses.
The bottom line then, is that we cannot rely on simple solutions, technological or otherwise – but then, this was never a simple problem. Many organisations still have a way to go, but a good first step is to recognise that technology can only ever be part of the answer, and for the rest we will need to take a long hard look at ourselves, and at our organisations. The goal of aligning IT security with business risk management may not be easy to attain, but at least we shall be facing in the right direction.
2007
Posts from 2007.
August 2007
08-09 – Thanks for the memory…
Thanks for the memory…
I’ve just bought some more RAM for my laptop, a 2 GB upgrade to be precise. Now, before I get into the ramifications (no pun intended), here’s some context. I’m of the generation that recalls trying to fit software into the smallest possible of orifices, and while I never really got on with assembler myself, I could understand the elegance of a well-put-together piece of code. Some of my then-colleagues continue the embedded programming challenge, these days trying to fit large-screen video decoding into the RAM equivalent of a matchbox, and managing to squeeze in a copy of space invaders to boot.
And so it is - you know what’s coming - that I find myself looking at my laptop screen and wondering exactly how Windows Vista could manage to fill an entire gigabyte of memory, almost by itself.
Perhaps that’s a bit of an exaggeration, but it’s worthy of a look. Right now, my wonderfully handy sidebar gadget is telling me that my computer is consuming 713MB of memory - which is on a fresh boot with only Notepad running so I can type this - Notepad takes up a meg, by the way. The other 712 is used up by processes that I have not invoked personally.
This is all a far cry from “640K ought to be enough for anybody,” as Bill Gates is reputed to have said (and later denied, but where’s the fun in that). Indeed, that wouldn’t even support Notepad! But what exactly is the rest doing? A cursory glance at the task manager (5 Meg) tells me that, as a user, I am taking up:
- 25 MB for Skype (“Take a deep breath,” it tells me)
- 20MB for that “wonderfully handy” Windows Sidebar
- 10MB for Windows Explorer
- 8MB for Bluetooth
- 8MB for CSRSS - an RSS service perhaps
- 6MB for MSN Messenger
- 3MB for Samsung battery and display tools
- 3MB for Groove
Together with the nearly 10MB of various bits and bobs, that’s 93MB by my reckoning. Meaning that the other 600MB-plus is taken up by non-user processes. I could strip out about 30MB for the AVG antivirus and shield I’m currently running, and there are probably some processes that are part of other apps I’ve installed, but the rest does look like it is part of the core OS.
I would say “phew” but I’m mostly sanguine about this. My processor usage is down at 3-5% as I type, meaning that whatever’s being stored, it’s not necessarily clogging up the system. There is the whole debate about poorly written, bloated OS code, which shouldn’t be ignored, but equally, I’m sure there are a number of things I could switch off and save a goodly percentage; also, I am using certain features that have an understandable overhead, such as the search indexer (26MB). Ultimately however, while RAM may be cheap, I do feel slightly frazzled that quite such a large quantity of it should be required just to keep the lights on, and I’m not absolutely sure why “tuning” should be seen as a geek pastime, and not a core capability.
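Viewed from today, the tallying above can be automated in a few lines – a minimal sketch, assuming the third-party psutil package (which, admittedly, post-dates the machine being grumbled about here):

    import psutil

    procs = []
    for p in psutil.process_iter(["name", "memory_info"]):
        try:
            procs.append((p.info["memory_info"].rss, p.info["name"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    # The ten biggest consumers of resident memory, largest first.
    for rss, name in sorted(procs, key=lambda t: t[0], reverse=True)[:10]:
        print(f"{rss / (1024 * 1024):8.1f} MB  {name}")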
What other options exist? “Get a Mac” of course, and I am seriously considering this, not just to satisfy my feelings that memory should be treated as a precious commodity but for a number of other reasons. “Get Linux” is another possibility, but that’s usually back to the geeky-tweaky thing and it raises a whole bunch of compatibility issues. “Get a life and stop worrying about it” is where my thinking is at currently, particularly now I have gone through the “Get more RAM” option. All the same, I do find myself wondering, or indeed hoping, whether Microsoft is reaching the point where it will run out of things to add. The only thing I can think of round the corner is virtualisation, but I do believe that a well-written virtual layer should exist as part of the operating system anyway, in which case it could itself be part of the solution. Wishful thinking, perhaps?
In the meantime, should I bite the bullet and re-enable all those sexy Aero features? I think I’ll leave it just a while before I do, as I want to enjoy the feeling of that glut of memory just a little bit longer. It’ll be nice while it lasts.
November 2007
11-03 – Secure USB – the threat and the opportunity
Secure USB – the threat and the opportunity
Introduction
When the USB standard was first launched, few would have imagined the profound effect it would have on how we use computers and share information. Today however, we see a plethora of new, different kinds of device that benefit from the USB specification – not least flash memory storage, which has made the floppy obsolete and all but replaced the CDROM as a way of transferring files, but also such esoteric uses as using the powered USB port to charge mobile phones and other handheld devices.
The simplicity of USB storage belies a number of quite serious security threats, however. Not least of course, that it is possible to shovel quite a load of corporate data onto such devices – which today can reach a size of 16GB plus. This may be done for malicious reasons, but more often it could be quite innocuous – an employee who doesn’t know which files he might need to work on over the weekend, may just dump the entire directory structure onto a USB stick. Woe betide, then, should he or she lose it on the way back home.
Risks such as these have led some organisations to take quite drastic measures to prevent the use of USB storage in particular, from disabling the device drivers, to (so the anecdotes go) super-gluing the USB ports on desktop computers. While such stories may be apocryphal, they reinforce the belief that USB is in some way a bad thing, and access should be prevented. This thinking leads to a dilemma in many IT shops – not least because USB storage has become an essential element of collaboration and file sharing, but also because USB has such a wide variety of other functions.
Reducing risks of USB storage
Despite being tarred with the brush of insecurity, the fact is that USB-connected devices offer opportunities to reduce a number of security risks. To catalyse acceptance of more advanced devices, we need to simultaneously quell the quite genuine concerns about the risks associated with USB storage. Let’s look at this first.
The ability to encrypt data on a USB device has been available for a number of years, pioneered by companies like M-Systems (now part of SanDisk) and licensed via the brand DiskOnKey to the likes of HP and Apple. Essentially, such devices can be partitioned into insecure and secure areas, and an encryption chip on the device ensures that data stored on the secure area can only be accessed via a valid username and password combination. Such devices are almost impossible to hack: indeed, the circuitry is designed to burn out should attempts be made to break the encryption codes. Meanwhile there are endpoint security companies such as DeviceWall whose technologies support the encryption of data onto any device. In both cases, the result is that employees can have a single, locked down storage device both for passing of non-sensitive data, and for transport and backup of sensitive files.
There are several security benefits to such an arrangement. Of course data is protected against theft or loss (unless the thief has the password, of course!); also, it provides the basis of secure offsite backups, for example if an employee spends a lot of time outside the office it is good policy to recommend backing up data at suitable intervals. If there are concerns about the security of the computer being used (for example if working on a client site or in Internet cafes), then data can be accessed directly from the device. For further protection, some device manufacturers such as MXI are incorporating fingerprint readers directly on the device. A swipe of a finger replaces the need for a username/password, which is not only more convenient, but reduces the risk of theft even further.
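Where a hardware-encrypted device isn’t available, much the same effect can be approximated in software before a file ever touches the stick. A minimal, modern sketch, assuming the third-party Python ‘cryptography’ package – the file names, drive letter and passphrase are all invented:

    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password: bytes, salt: bytes) -> bytes:
        """Derive an encryption key from a passphrase, so no secret sits on the stick."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000)
        return base64.urlsafe_b64encode(kdf.derive(password))

    def encrypt_to_stick(src: str, dest: str, password: bytes) -> None:
        salt = os.urandom(16)
        with open(src, "rb") as f:
            token = Fernet(key_from_password(password, salt)).encrypt(f.read())
        with open(dest, "wb") as out:
            out.write(salt + token)  # store the salt alongside the ciphertext

    encrypt_to_stick("customers.xls", "E:/customers.enc", b"correct horse battery staple")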
Some devices can store not just data, but applications. The U3 initiative for example provides an application framework for Microsoft Windows computers so that U3-enabled applications can be executed directly from the USB stick; furthermore, when the device is removed, so are any traces of the applications and data involved. A wide variety of software is available, from apps such as OpenOffice, the Firefox web browser and Thunderbird email client, to network diagnostics tools so (say) an engineer can arrive at a client site with his toolbox on a single thumbdrive which can be plugged directly – and securely – into a client computer.
USB devices as security tools
As well as features to help us work more securely with USB devices, there are also devices that exclusively provide security functionality. As a simple example, think of token-based USB plugs: these tend to be tight on storage capacity (running into the kilobytes rather than the megabytes), as their function is to manage encryption keys rather than data. There is the Aladdin eToken for example, or indeed, the RSA SecurID 6100 smart card in its USB form factor. While both come with handy tools to manage online usernames/passwords (or Web Sign On, in Aladdin’s parlance), the real strength of such devices is in the corporate environment, where they can be used to support two-factor authentication for logging on to the corporate IT environment.
Other vendors are loading different kinds of security functionality onto the USB device. Accario for example has combined application wrapping, strong authentication, virtual private network termination and a fingerprint reader onto its AccessStick product. As an interesting example of a use case, AccessStick is targeted at Citrix environments: put simply, you can go onto any computer anywhere in the world, and access your corporate IT environment, remotely and securely. A similar idea is behind the MobiKey product from Route1.
The logical next step is for the device itself to run security applications. One company, Yoggie, has launched a firewall appliance (the Pico) on a USB stick, which is in fact a Linux-based, Pentium-class computer running a range of endpoint protection applications. We expect to see a number of such devices appear over the next couple of years, potentially combining capabilities such as those sported by the AccessStick and the Pico, and making use of virtualisation to enable an entire, secured compute environment that can run on a single drive. While attractive, such ideas are yet to reach the mainstream as there remain technical issues, for example around the portability of the virtualisation platform.
What about centralised management?
There are clearly plenty of benefits to be had from security-enabled and security-enabling USB devices. We know, however, that one of the main challenges associated with security is its management. Many of the devices mentioned here can operate in a stand-alone mode, that is, they can be configured and secured by the user. However, certain features require a level of centralised management – not just to administer such things as encryption keys and the like, but to control the devices themselves.
This is an area in which we are currently seeing a great deal of activity. SanDisk’s Cruzer Enterprise device, for example, offers a range of management features, including Active Directory integration, remote password administration and centralised updates. Not only this but, if lost or stolen, the device can be configured to “ping” its presence to a central management console, from where it can be disabled remotely. A number of the companies mentioned above, including MXI and DeviceWall, offer similar capabilities. While beneficial, it would be fair to say that this is an evolving market and there are still some challenges to be overcome – for example, agreement on a common standard so that all manufacturers are singing from the same hymn sheet.
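As a thought experiment, here is a minimal, hypothetical sketch of how that “ping home” behaviour might work under the covers: an on-device agent periodically reports its serial number to a management console and locks its secure area if the console says it has been reported lost. The URL, the response format and the lock routine are all invented for illustration; they are not taken from any of the products mentioned above.

# Hypothetical check-in agent: report presence, obey a remote "disable" order.
import json
import time
import urllib.request

CONSOLE_URL = "https://mgmt.example.com/usb/checkin"   # invented endpoint
DEVICE_SERIAL = "SN-0001"

def lock_secure_partition() -> None:
    # Placeholder: a real device would cut off access to its encrypted area here.
    print("Secure partition locked pending administrator action.")

def check_in() -> None:
    payload = json.dumps({"serial": DEVICE_SERIAL}).encode()
    req = urllib.request.Request(CONSOLE_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        verdict = json.load(resp)                      # e.g. {"action": "disable"}
    if verdict.get("action") == "disable":
        lock_secure_partition()

if __name__ == "__main__":
    while True:
        try:
            check_in()
        except OSError:
            pass                                       # offline; try again later
        time.sleep(300)                                # report every five minutes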
Conclusion
In the future, USB devices will continue to add new features and find new applications (the recent release of the TrackStick USB-based GPS module offers a whole new set of possibilities, for example). With all their potential however, the one challenge USB devices can never overcome alone is down to their small size: we need to take it as inevitable that the little blighters will be lost, misplaced, dropped down the backs of sofas, accidentally crushed underfoot or otherwise rendered inaccessible. Any security policy needs to build in criteria not only for their appropriate use, but also for the consequences of such loss. Perhaps they will become like car keys: while we have learned to treat them carefully, we should equally know where we keep the spare.
2008
Posts from 2008.
March 2008
03-03 – Breaking the bounds of security
Breaking the bounds of security
I went to a fascinating panel session a couple of weeks ago, where I and a number of other analysts were largely witnesses to a debate around the evolving security needs of the Chief Information Security Officer. These were no small fry – around the table were security chiefs from a number of blue-chip companies, including leading pharmaceutical firms and some of the better known financial institutions. What was most interesting was how advanced the panellists’ thinking was – not least how their views were moving ahead of conventional, industry-led perspectives.
The pervading view was that the very basis upon which IT security is defined and procured needs to be reconsidered. Traditionally, the assumption has been that the organisation needs somehow to be protected from outside threats. While this perspective is being gently eroded over time by the industry as a whole, the panellists were of the opinion that the whole concept of an organisational boundary should be consigned to the past.
There were a number of ways that these opinions were expressed. One panellist, for example, described the policies surrounding use of remote laptops, and how they were extending the policies to computers inside the corporate environment. “If it’s good enough for computers connecting via the Internet, it’s good enough for computers connected via the LAN – why have two policies when one will do?” he said. Other panellists talked about their suppliers and partners: if the business requirement is to enable access to corporate systems by third parties, security measures are often more of a curse than a blessing, disabling rather than enabling productivity.
While we are seeing some of these themes reflected in our own research, there can be no doubt that the thinking in this area is moving very fast. Back in the mid-1990s, the UK Government cottoned on to the fact that good security was more about risk management than risk avoidance – a concept that has fed into such standards initiatives as ISO 17799. It is only quite recently that such thinking has broadened across the wider majority of sectors, aided and abetted by the compliance wave. In recent Freeform Dynamics research, for example, over 40% of the 324 respondents told us their organisations were adopting a Chief Risk Officer role – a figure that rose to over 60% in the financial services sector – which is quite a leap forward from a couple of years ago.
Meanwhile, IT companies with a focus on security – Symantec, CA and IBM for example – are responding to the risk management question. But who’s to say that, by the time the industry as a whole has caught up, the needs of end-user organisations won’t already have moved on?
There can be no doubt that it’s time to move forward not only the debate, but also the technologies available to support these evolving business requirements. If security is to be considered as threat prevention, solutions will invariably be in the form of threat removal. While such a model may have worked in the past, today’s organisations are looking for security to be more about business enablement and risk reduction – and this will require not only different technology combinations, but also different approaches to deployment and operation. There will always be a need to counter threats – but within this broader context, and not for their own sake.
These are not the views of some California-based marketing team, but the realities of today’s global business landscape, as told by organisations on the front line. From the industry perspective, it would be wise to sit up and listen.
03-03 – Dispelling the myths about SOA
Dispelling the myths about SOA
What exactly is Service-Oriented Architecture (SOA)? In this business which sometimes seems to take pleasure in making things more complex than they need to be, it can be difficult to tell. The IT industry has frequently been compared to the motor industry, with all of its innovation, commoditisation and general impact on the world at large. Just as valid perhaps is to compare IT to the earlier days of the pharmaceutical industry, before such obligations as clinical trials and, well, actually proving a drug could do what its manufacturer said it could do. For all the flashing lights and so on, our illustrious business has retained its fair share of snake oil salesmen (and nobody is immune to this – just remember Y2K).
Against this background, it is perhaps inevitable that such complex constructs as SOA should be castigated as over-hyped and under-achieving. The purpose of this article is to pick apart some of this criticism, dispelling some of the myths to help organisations take advantage of the benefits of SOA, without being distracted by the marketing hype. First however, it’s worth reviewing a bit of history. Without spending too much time wallowing in nostalgia (of course the mainframe guys got it all right, deal with it), let’s recall whence SOA has come – which of course is the same as working out why we are now in the position we’re in.
SOA is the conjunction, convergence or collision of several technological trends – to the extent that the forebears of each still believe they “own” it. Here are the major themes, though of course these brief categorisations do none of them justice:
• Object orientation and component-based development – from which we have the principles of cohesive units of service, delivered through code via a defined, externally visible interface (a minimal sketch of this idea follows the list).
• Middleware and Enterprise Application Integration (EAI) – giving us both the understanding and the wherewithal for applications to communicate, and dispensing with the idea of silo-ed apps that work in isolation.
• Internet-based applications and Web services – providing the globally accessible networking infrastructure and simplified, standardised protocols to enable units of software (apps, components, web scripts) to communicate with each other, wherever they are.
• Enterprise Architecture – bringing up to date and distilling the capabilities of business and systems analysis, process and data modelling, to understand business needs and map them onto IT capabilities in a way that treats IT as an interconnected framework, not as individual components.
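To make the first of those strands a little more concrete, here is a minimal sketch of a “unit of service” behind a defined, externally visible interface, exposed over a standardised protocol (plain HTTP and JSON) so that any consumer, written in any language, can call it without knowing how it is implemented. It uses only the Python standard library and is an illustration rather than a recommended SOA stack; the contract, the URL scheme and the customer record are all invented.

# Illustrative "service with a defined interface", standard library only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_customer(customer_id: str) -> dict:
    # The implementation is hidden behind the interface and free to change.
    return {"id": customer_id, "name": "ACME Ltd", "status": "active"}

class CustomerService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Contract: GET /customers/<id> returns a JSON customer record.
        if not self.path.startswith("/customers/"):
            self.send_error(404)
            return
        body = json.dumps(get_customer(self.path.rsplit("/", 1)[-1])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any consumer can now call e.g. http://localhost:8080/customers/42
    HTTPServer(("localhost", 8080), CustomerService).serve_forever()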
All of these combine to provide facets of what we call SOA. Historically however, while each of these categories has seen much progress, against the background of a marketing-led IT industry, we have also seen a number of myths come into being around SOA. Let’s look at them.
Myth 1: SOA is a product
Of course (goes the cry), SOA isn’t a product, it’s an architecture, surely that’s what the “A” in SOA is referring to? This may be so, but (as should be unsurprising in this marketing-led industry that we live in), vendors have done what they can to define SOA in terms of what they can package up – otherwise of course, they have nothing to sell. That wouldn’t be a problem in itself necessarily, but the result is that SOA becomes a competitive stance, with one vendor’s SOA being better than another’s, and so on. In this case vendors and other pundits have chosen the wrong three letter acronym to try to hype.
Rather than trying to pump up SOA as a product, an alternative strategy can be to pick a product category and sell it as the SOA silver bullet: unsurprisingly, the four trends listed above are the source of many such products. We’ve seen this, for example, with the Enterprise Service Bus (ESB), which is an evolution of the EAI capabilities of the past. Useful it may be to support an SOA deployment, but ESB is not SOA, any more than SOA can be simplified to something as tangible as an ESB.
Myth 2: SOA is different to the past
As we have already seen, SOA is a point on a continuous evolutionary path that builds on the key strengths of a number of areas of IT. While it does offer a logical next step, that doesn’t mean that SOA in some way needs organisations to throw everything away and start again – which of course would not be an option for most organisations, even if it were necessary.
SOA offers a way to think about how application elements can communicate, a framework to enable this, and an approach to help us achieve such intercommunication in a way that will be the most useful. None of these elements are new – what is perhaps different is how we are thinking about them all in one go, rather than piecemeal.
Myth 3: SOA is the same as the past
Paradoxically of course, Myth 2 doesn’t mean that we want to hang onto everything that we relied upon in the past. The co-ordination we wish to achieve through SOA requires a level of joined-up thinking that encompasses all dimensions of IT, not just development, integration, operations or whatever. From their historical standpoints, certain groups have sought to define SOA in terms that make sense only for them – developers, for example, who see SOA purely as a set of mechanisms or standards, or architects who see SOA as being more about alignment between IT and the business.
At its most fundamental, SOA starts and ends with the concept of “service” – which is the thread binding together all areas of IT and its delivery. For SOA to succeed, it is as much about the operational side of IT getting its house in order (to deliver services scalably and consistently, for example), as it is about developers and architects defining, building and integrating applications in a service-oriented way.
Myth 4: SOA has been and gone
This is a myth fabricated entirely by people who don’t see SOA as anything more than a marketing construct – which, for some peddlers of Myth 1 for example, it is. Overheard for example, was a conversation between IT marketing types: “What’s next? SOA is past its sell-by date, we need something new.” There have been others saying that SOA should in some way be superseded by other architectural constructs, such as (say) event-driven architecture, EDA.
Trouble is, SOA isn’t something that can go away, precisely because it is a logical, evolutionary step that brings together several strands of IT. If we were to move on from SOA, that would mean throwing away not just the term but all the best practices that have been learned over the past 40 years. And what of EDA? Absolutely it has validity – but it takes a product mindset (see Myth 1) to believe that architectural constructs should be mutually exclusive, or in some way competing.
Myth 5: SOA has to be enterprise-wide
There is a clear tendency to believe that SOA is appropriate only for the largest of strategic projects, or indeed, that to do SOA “properly”, it needs to be implemented as a broad framework across the organisation. This could partly be as a result of Myth 2, but equally Myth 1 has probably led to the belief that new, improved SOA should be the broom to sweep away all of the old, inferior architectural constructs that went before. SOA offers an alternative to silo-ed applications, but that doesn’t mean that all silos should be abolished.
This myth has been exacerbated by some IT architecture types, who would like to see enterprise-wide initiatives to define, model or otherwise capture the interface between business and IT. To counter this, a well-worn phrase in architecture circles these days is, “Don’t boil the ocean,” recognising both the impossibility and the futility of such a task. The alternative is to focus on achievable goals, using SOA as a framework to solve a specific set of challenges – the roll-out of a new application that needs to be integrated with a number of legacy platforms, for example. Setting tangible goals for SOA also sets an appropriate scope for what needs to be understood and modelled architecturally.
Myth 6: Business people don’t get SOA
IT’s habit of trying to solve its own problems perhaps leads to the biggest myth of all, namely that SOA should in some way be confined to the technologists, and is either not relevant to, or too complex for, the business. Business people do “get” service orientation, if it is explained to them without being dressed up in techie terminology. Indeed, in recent Freeform Dynamics research, the majority of senior business respondents were either already familiar with SOA or thought it made sense when service orientation was explained to them.
Indeed, for SOA to succeed as more than an integration mechanism, it is vital that the business does understand what IT is trying to achieve. As already discussed, an architecture oriented around services can draw together a number of threads from all areas of the IT organisation, including its interface with the business. It is good to know, then, that as long as SOA is presented without technical baggage, we are pushing against an open door when we want to talk to the business in this way.
So, what can we conclude? While SOA may still be cursed with a number of myths, a final reality is that SOA is inevitable, a consequence of how IT is evolving. In the future, in order to achieve a successful integration between such areas as those listed above, we need (a) to think in terms of architecture, and (b) to consider the primary purpose of IT – to deliver services to its users – as the central tenet. Does IT need SOA? That’s unclear. To move forward, do we need to think in terms of an IT architecture and delivery mechanism that is, in itself, oriented around services? Undoubtedly – but rather than promulgating myths, it will only be by adopting a clear vision of what SOA has to offer that we really stand a chance of achieving its promises.
03-03 – Experiences installing Linux on the desktop
Experiences installing Linux on the desktop
It’s funny how you can find yourself transported back, when faced with a set of stimuli. Pick up an old book, listen to a piece of music or put on a jacket, and sometimes a wealth of memories and feelings can come rushing back. It can be slightly disorienting and it’s not always pleasant, but for me at least, it never ceases to amaze.
And so it was, as I was testing out the installations of a few desktop Linux products a few days ago. Before I go on, please note that this isn’t a feeble attempt to ingratiate myself with the Linux community – I was never that good a Linux hacker. I was around, however, for the first release of Linux Journal, and as an indication of how sad I am, it is perhaps of note that I still have it, filed somewhere.
Where the “memory thing” is relevant is that I found myself transported from the relatively cushy pastures of Windows, back to the frontier lands of Linux. The software has clearly come a long way in terms of functionality, usability, ease of configuration and so on, and it is a different world to ten years ago (when beta really meant beta!). The cultural vestiges remain, though, and so does the fix-it mindset I found myself using.
By way of example, allow me to work through my own experiences – first with Gentoo, and then with PCLinuxOS. I confess the time spent with Gentoo didn’t last long – I kicked it off first as I knew it was the one Stephen O’Grady used the most. I was also swayed by its claims of “extreme performance and configurability.” What was there not to like?
Sadly, I stumbled almost as soon as the installation started, as it quickly began to kick up a fuss. The Gentoo installation disks provided with this particular magazine (Linux Format) offered a number of pre-built configurations, none of which wanted to install cleanly – that is, to go straight through to the graphically enhanced front end without a fuss. Like an untrained, flaccid muscle, a small part of my brain was quietly mumbling, “that’s OK, just install a cut-down kernel, look at the configuration tables and compile something that’s good enough for now, so you can build it up from there, benefiting from the extreme performance without compromising on features.” At least I think that’s what it was saying, but I had spent too long in end-user-land to listen. After a few attempts, I canned it and went for PCLinuxOS.
This was more successful, unsurprisingly as it is designed more for the novice (did I really once hack Xconfig files?), but it wasn’t without issue. There were a couple of problems that were fortunately within my “novice” reach – first, that the “wizard” that was used to set up disk partitions would quite jovially continue on its way, even if no disk had been selected. As would the entire installation in fact, and it was only following the final reboot at the end of the process, that the system confessed that no operating system had been installed. For any Linux newbie, this would be a complete showstopper.
Having worked past this hurdle (and feeling justly proud, dare I say), the entire installation was pretty seamless. I could set up a user, configure and run programs, and generally compute what needed to be computed. As a comment, from the novice perspective there were perhaps too many things that I could do – I was interested but a little surprised to find that the OS came with a complete set of developer tools, for example. To me, the conundrum is this: if I were a developer, the chances are I would already be in a position to install a more clever Linux distribution than this one; if I were not, such capabilities would only serve to distract, and make me wonder what purpose they served.
The only thing not present was an Internet connection. I rather foolishly (so I thought) assumed that, while I might try to get my Belkin PCI wireless card installed, the chances are I wouldn’t be able to find a Linux driver for it. I was both right, and wrong: a “wrapper” driver was supplied, within which I could install the Windows driver for the card. A bit of tweaking and I could see the network; a bit more time spent playing with routing tables, and I could access the Internet.
This last point is not a trivial one, and again it goes to the heart of the difference of approach between the two mindsets, Windows and Linux. In the Windows world, internet connectivity should just work – if it doesn’t, the user has every reason to feel a bit miffed and call in the heavy guns. In the Linux world however, there seems to be an assumption that whoever is in front of the computer will have both the desire and the wherewithal to open the bonnet, run up the command line and have a bit of a tinker. For me it was both a delight and a challenge – the former as I already knew the right commands, and the latter as I had absolutely no idea what the command line switches should be. However, all of this assumed that I even “got” certain principles like routing tables, default gateways and so on.
There is no right or wrong in all of this, but even as I browsed, impressed, through the available programs, I was left with the feeling that desktop Linux still displayed its heart just a little too clearly on its sleeve. I know, this is only one distribution among many – but this is also part of the problem, as a first-time user stands as much chance of being turned off from the whole desktop Linux concept, as being turned on to it. I’m not talking about wholesale adoption strategies by enterprise IT shops: as I understand it, the growth of Linux acceptance in particular, and open source software in general, is equally dependent on a viral adoption approach – in other words, through “novices” like me trying things out and making a decision, yay or nay.
I’m in no way downhearted – I remain impressed by the comprehensive set of facilities I now have on the computer sitting beside me. I shall continue to explore and test things out; for a start, I want to have a go with a couple of other distributions (I have a SuSE DVD on my desk, for example; Ubuntu is the obvious other), and see what additional facilities Linux offers over and above what my office worker mindset is used to. Just for now I shall stick with running Windows on my main PC, but never say never!
03-03 – First Look Blackberry April 2007
First Look Blackberry April 2007
I’ve been a reasonably happy user of the HTC Universal for some time now. OK, it’s built like a tank and weighs about the same, but it is the first experience I have had of a device that really does do everything I need. It also runs Microsoft Pocket PC, which is important yet not essential – I was a perfectly content Palm user for many years, and a Psion user before that. Still, I have grown rather attached to my clunky-yet-functional Universal.
Last week, I was given the opportunity to test one device I have not yet been exposed to – the Blackberry. It’s sitting next to me now, quietly confident, as any device would be if it had had the whirlwind romance with the business community that the Blackberry has experienced. Right now however, I’m feeling unconvinced that it will nuzzle its way into my working life, pushing the Universal out of the nest with a flick of its scroll wheel.
Why is this? I thought it would be best to write down what my first impressions of the Blackberry were, compared to the Universal, not least so I could review them in a few months’ time. If you’re interested in joining me on this journey, read on.
First off, the positives. The Blackberry is significantly lighter than the Universal, sitting comfortably in the shirt pocket where the U feels like it would tear the thread if I left it there too long. To state what is perhaps obvious to all Blackberry users, clearly a great deal of effort has gone into design and usability – lessons learned from the Zen of Palm perhaps. It sits comfortably in either hand, and does what it is supposed to do (in this one’s case, read/write email and make calls), in a straightforward manner.
There are some great, innovative design features as well. Put it in its holster and it switches itself off automatically, presumably responding to a sensor in the case. The oft-touted thumb wheel is clever and simple to get the hang of, and the screen is clear enough and bright enough. Put simply, it does what it says on the tin. The contacts database is similarly simple to use and speedy in response, etc, etc. That’s not to say I found the interface totally intuitive, however. There’s one click more than I expected to open an email (a small issue, but one that grates after a while), I find the keyboard a bit too small for my fat fingers, and there are other, similarly small issues – but nothing that would prevent me from using the device.
What lets the Blackberry down is its lack of versatility. While it is a brick, I was delighted with the U, to finally have a single device on my person. It’s not just the device but the paraphernalia – chargers, connection cables and so on: when it came out it replaced my phone, my MP3 player and my PDA, and was sufficiently functional to allow me to leave my PC at home (I can view PDFs and Powerpoint presentations, for example). If I were to adopt the Blackberry, would I then need to get an MP3 player? What would happen when I needed to be hands free and I received a call in the middle of listening to music, on my bike say?
Meanwhile, I have my trusty Universal. Okay, the battery life is poor (though better since I was told to take out my cheap, power hungry SD card), it is slow to open the address book and needs an occasional reboot for no reason whatsoever. The email facility sometimes fails to connect… and yet, I can access multiple accounts (keeping work and home separate, say), I have access to a much broader range of applications (including Skype over the built-in Wifi), it has a properly usable keyboard, the Today screen tells me a summary of what’s coming up, and I can use it as a 3G modem with my PC. It should also be mentioned that the Universal integrates more smoothly with Microsoft Exchange – what you see in the pocket email application is what you see on the desktop, whereas the Blackberry’s main email window is a hotchpotch of emails that are in reality stored in a variety of folders.
In conclusion, the Universal really is the Swiss army knife of pocket devices, compared to the slick, James Bond gadget of the Blackberry. One is generally functional, the other excels in a much smaller range of functions. Perhaps it boils down to the kind of person I am – or put another way, do I really need all the features of the Universal, and am I prepared to forgo the simplicity of the Blackberry? Or, will I find that I reach for the Blackberry first, whatever I believe my general needs are? It’ll take a few weeks to find out; for now, and having finished this article, I’ll just be glad I don’t have to carry both of them around.
03-03 – Information at your Fingertips?
Information at your Fingertips?
A long, long time ago, I can still remember… when, at University, we were taught about how computers were going to help people have all the information they needed, quite literally at the speed of electricity. Hum. It’s now twenty years later, and I don’t feel any more like information is at my beck and call than I did back then. Indeed, it’s the other way around – I feel beholden to information, rather than feeling it is beholden to me.
I don’t think I’m alone. In fact, I know I’m not – a research study we conducted at the beginning of the year showed that information access remains an area of weakness for many organisations. Quite ironic really, given that we supposedly work in “information technology”, that is, the technology of information. Someone, and I suppose we all need to put our hands up for this, isn’t doing a very good job.
But is it an impossible goal? Think: if you could tap into whatever information you needed right now, what would it look like and how would you access it? It’s not an easy question to answer, and indeed, it is difficult if not impossible to do so without considering what facilities are already available to us. Every now and then I have a deep insight into my own information needs, for example when I am in a strange town, there’s nobody around and I really, really could do with a curry. Weren’t mobile software vendors telling us years ago that such a problem had already been solved? Perhaps it’s just me – the rest of the world are enjoying fine curries, laughing into their Singhas at having managed to keep the secret – but I doubt it.
It’s the same for business information. Whatever the reasons, many (if not all) organisations still struggle when it comes to pulling together whatever information is necessary for day to day activities. Again, there have been many promises over the years of how (say) we would be able to access a single view of the customer, or manage product information over the lifecycle. But, let’s face it, if it is still a challenge to organise meeting room bookings – and indeed, in many places it is – what chance do we stand of achieving more esoteric goals? Even this is a simplistic view of the real requirement, as anybody who invites a potential customer for a meeting, only to be turfed out of the room by some irate jobsworth, will know.
So what’s the answer? To be frank, at this stage in the game, there isn’t one, not a simple answer anyway. Technology is changing too fast, and the complexities are just too, well, complex. With all the cool-yet-disposable gadgets, software packages and supporting chicanery we are the beta testers of the information revolution, in which the real battles are yet to happen.
This is not to be downhearted. One day, I believe, we will reach the tipping point of making technology work for us, rather than the other way around. First however, we must decide what we actually want.
03-03 – IT management gets personal
IT management gets personal
Here I am at the Symantec Vision user conference in Las Vegas. This morning we were treated to the keynote pitch “Ten Oxymorons of IT management”. It’s ironic that, shortly after the Veritas acquisition (which I wasn’t fully happy about, from the enterprise perspective), I believed that the very term “Symantec Vision” was itself an oxymoron. I’m pleased to report that this view is changing, however.
Symantec has been viewed as a company that talks like an enterprise IT company, but acts like a consumer IT company. When I put this point to the CTO, Mark Bregman, he made the rather good point that it was increasingly important for the company to balance the needs of both sides. It makes sense: for security, as with other areas of IT, our actions as individuals (i.e. “consumers”) can often have a profound impact on our IT capabilities as an organisation. It could be that we choose to complete a report on our home computer, or plug the iPod into the corporate desktop, or treat our personal mobile phone as a business tool – clearly, we need to marry consumer with corporate technologies if we are going to minimise the risks of such combinations.
It’s not just about avoiding the problems. During an animated lunchtime discussion, some of the potential benefits of marrying individual tools with corporate capabilities became clear. At the table were several Yankee Group analysts, Simon Robinson from The 451 Group, yours truly and Mark B. A conversation around data classification technologies (think: clever software that can work out what the data is for, and make decisions about it accordingly) moved onto the social networking technique of tagging (think: people labelling data according to what it really is for, avoiding the need to second guess) – would it be out of this world, for example, to imagine the backup software making backup decisions based on specific, user-assigned tags?
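Just to make the lunchtime idea concrete, here is a minimal, speculative sketch of what tag-driven backup decisions might look like; the tag names and policies are invented for illustration, not drawn from any Symantec product.

# Speculative sketch: backup behaviour chosen from user-assigned tags.
BACKUP_POLICY = {
    "client-confidential": {"frequency": "hourly", "retention_days": 2555},
    "work-in-progress":    {"frequency": "daily",  "retention_days": 90},
    "scratch":             {"frequency": "never",  "retention_days": 0},
}
DEFAULT_POLICY = {"frequency": "weekly", "retention_days": 365}

def backup_decision(path: str, user_tags: list) -> dict:
    # Pick the most demanding policy implied by the tags on a file;
    # "most demanding" here simply means the longest retention period.
    policies = [BACKUP_POLICY[t] for t in user_tags if t in BACKUP_POLICY]
    chosen = max(policies, key=lambda p: p["retention_days"]) if policies else DEFAULT_POLICY
    return {"path": path, **chosen}

print(backup_decision("Q3-forecast.xls", ["client-confidential", "scratch"]))
print(backup_decision("holiday-snaps.jpg", []))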
This raises several questions – not least, might the human factor introduce the potential for error, and might this increase the risks rather than reducing them? The answer, to be frank, is that I don’t know, but it would be a very useful conversation to define where people might be able to make individual decisions about their data, and how these decisions might be of benefit. Expecting humans to be perfect is not the answer, so introducing the human factor should not be treated lightly. However, the alternative – to expect a future, fully autonomic IT environment to be able to make all the decisions – is probably not the answer either. As Andrew Jaquith of The Yankee Group said, “humans are sloppy, but perhaps we should be embracing sloppiness.” With all the caveats of not leaping before we look, personally, I agree with him – and if Symantec can marry the best of humanity with the best of automation, good luck to them.
03-03 – IT security in a changing world
IT security in a changing world
Predicting the future is always a challenge, particularly when it comes to IT. The history of computing is littered with attempts to define how things might look: the only characteristic they share is an innate level of inaccuracy. If we want to define IT strategy however, we must take a stab at what we think will happen. There are some, more obvious trends that can help us here – use of mobile technologies, say, or use of virtualisation. The harder ones to judge are more dynamic areas such as social networking and Web 2.0, or trendier, issues-based areas such as going green. Trying to decide whether such things are just a flash in the pan is perplexing, to say the least.
For security professionals, every big, new thing brings with it a raft of big, new risks. Gone forever are the days when security was discussed in terms of air-gaps and offline storage: the brave new IT environments of today are always-on, and always connected. The result is a conundrum: oftentimes, organisations choose to risk the potential dangers rather than battening down the hatches and constraining the business. A simple example is the USB-attached storage device (a.k.a. the flash drive or iPod), which these days has the capacity to haul off a significant amount of corporate data, if not all of it. One CIO told us that when his department conducted a USB port scan, they found over 600 different types of USB storage device attached to corporate desktops. Blocking access to these, however, would restrict how people share information, as well as being an administrative nightmare to manage.
While organisations may be choosing to accept the inherent risks of many technologies, this doesn’t mean they are blasé. In many cases, companies are finding themselves restricted by a lack of security. Indeed, according to a recent Freeform Dynamics research study, they are limiting the adoption of such business-beneficial approaches as teleworking and workforce mobility because they are concerned about the risks. So, the absence of a warm feeling about how risks are being countered is actually holding organisations back. Not just this, but more and more these days security breaches are being treated as a failure of business governance, not just of IT.
What to do? If a lack of understanding is at the root of the problem, the solution is perhaps to mitigate this, rather than having sleepless nights about what might be. There are several things an organisation can do, none of which are specifically to do with technology: these include defining ownership of the problem, delivering workable policies that can change with time, and separating concerns between those that do and those that monitor. Most importantly, IT security needs to be treated seriously at board level, not for its own sake but because the lack of security is an inhibitor to business growth. A business-led, co-ordinated approach to IT security need not be difficult to implement, if it is pitched at the right level, and for the right reasons.
03-03 – Locking down to open up – taking control of the enemy within
Locking down to open up – taking control of the enemy within
IT security tools have traditionally focused on preventing what we could loosely call “the external threat” – hackers, viruses, worms and so on. From the perspective of customer organisations however, this is only one part of the story. It is just as likely, for example, that an attack would come from an insider, a lowly clerk or disgruntled programmer having a quick browse round the HR file share to see if there are any interesting files left visible. We know this is the case – when we spoke to 715 senior IT managers in October last year (“Enabling the trusted workforce”), they told us that inside jobs were almost as likely as indiscriminate pestering (viruses etc) and even more likely than targeted attacks from the outside.
While this may come as no surprise to the reader (who probably works for such an organisation, and who may well have experienced such incidents), it does beg the question of why this has received so little attention in the past. We discussed this here (link to prev article), and noted that IT vendors are waking up to the issues and indeed doing something about them.
Employee-related risk is a moving target, however. The fragmentation of corporate systems makes it harder to keep control of confidential data, an issue exacerbated by the availability of portable storage such as USB sticks and MP3 players. Of course, it is possible to prevent such devices from being connected to corporate equipment, but that creates problems of its own – USB sticks sometimes offer the only way to get a file from A to B (say, from one person’s desktop to their laptop, in the absence of network connectivity); furthermore, actively switching off USB ports is an operational nightmare and is difficult to do without blocking access to other, perfectly valid devices (printers, say).
As new generations of IT offer new ways of working, they also create new security headaches: consider mobile devices such as the Blackberry, which may be a powerful asset but equally, can be easy to leave lying around. It is unsurprising, perhaps, that security technologies seem to be forever playing catch-up. Equally clearly, the problem can never be solved by technology alone – even the most secure environment needs to be managed by somebody, who may or may not have their own fingers in the pie.
Put bluntly, few organisations are doing everything they could to ensure that the IT risks associated with their own staff are minimised. This is as much about procedure and policy as it is about technology. Only about a third of enterprise organisations actively screen their own staff as part of the recruitment process, for example, and this number drops still further for smaller organisations. We’re not advocating a police state – the goal is understanding and management of the real risks, rather than trying to create jobs or undermine the rights of employees. However, one wonders if technology is sometimes being used as an avoidance tactic, as it is easier to go through the motions of locking down systems than it is to ask hard questions of one’s peers and direct reports. This is reflected in the research, as nearly 70% of respondents commented that policy- and process-related challenges were holding them back.
This is an important point, as the upshot of all of this is less to do with ending up with a nicely secure organisation – security cannot be an end in itself. Rather, it is more about reducing risk to the extent that the organisation feels comfortable to push its own boundaries, into domains such as remote working, better use of mobile devices, closer relationships with suppliers and so on. It is perhaps overstating things to say that security is an enabler; however, few deny that concerns surrounding security are preventing organisations from moving in such directions. It is not ironic that those companies that seem to lock things down are actually those better placed to open themselves up to new opportunities. In a world in which business agility, the ability to move with the times, is becoming ever more important, that’s a fact worth remembering.
03-03 – Maid April 2007
Maid April 2007
A couple of years ago, I was hosting a session at Storage Expo. One of the presenters was from a major financial institution, and he was extolling the virtues of what was then a brand new storage technology, called Massive Array of Idle Disks, or MAID. Truth be told, there wasn’t all that much new about the technology – it still involved racks of hard drives, big boxes with flashing lights, and all that. Where it differed was in one simple but quite fundamental way: the hard drives, when not being used, could be turned off.
I had heard little about MAID between then and now, and indeed, had I given it any consideration I would probably have written it off as yet another technological also-ran (like RLX, the server blade manufacturer whose blades could also be switched off when idle). The premise seemed to be based around the word “massive” – i.e. you’d need lots of disks and a great deal of data, most of which you’d never use, before you’d start to see the benefits. And while companies might be able to identify specific applications to take advantage of such a configuration, the majority would probably be happy with what was more widely available.
Much can change in two years, however. For a start, the continued erosion of disk costs and higher capacities, together with the more general availability of Serial ATA drives (more bytes per buck than Fibre Channel) has caused many companies to think again about how they tier their storage – in particular, whether application owners can be convinced to migrate over to the cheaper disks. Second (and again due to wider availability of cheap disk), there has been the growing uptake of virtual tape – that is, disks that appear as tapes to backup and archiving applications. Third we have the wondrous power of the compliance wave (what a boost to the storage industry that was), requiring data items of various types to be stored for longer, and rendered more readily accessible.
Last but not least, and hot on the heels of Messrs Sarbanes and Oxley, we have the latest green fad. Don’t get me wrong, I’m all for anything that helps keep climate change at bay, and often more power-efficient solutions are good for company economics as well. In marketing terms however, there is a green-painted wagon rolling down the hill, and it’s worth jumping aboard right up to the point that the wheels come off and attention gets turned elsewhere.
From a MAID perspective, all of these factors conspire to turn around what was once a technology looking for a killer application. The need to maintain large pools of data – much of which will never be accessed but which, equally, does need to be kept reachable in case of demand – coupled with continued application consolidation and a need to keep power costs as low as possible, results in MAID becoming quite an attractive proposition. Virtual tape is a clear opportunity, but so is the more general requirement for longer term, accessible storage of content. There may still be application-specific uses, but there is plenty to suggest that MAID is more than a niche capability.
Indeed, there are other stated benefits, such as increased reliability (a counter to the unreliability of disks due to their moving parts is to ensure the parts move only when necessary), the ability to pack the disks more densely (less power means less cooling) and so on. While the mathematics for and against these benefits are still to be fully ironed out – for example, see the recent study (http://labs.google.com/papers/disk_failures.pdf) by Google – they should at least be investigated, particularly by storage managers who fear that turning disks off could increase the risk of disk failure. As it happens, it is power cycling that Google cites as having the potential to increase failures, which should be against the principles of a well-designed MAID system anyway.
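For what it’s worth, the power argument is easy to sketch on the back of an envelope. Every figure below is an assumption (per-drive wattage, how many drives are active at once, electricity price) and should be replaced with real vendor numbers, but the shape of the saving is clear enough:

# Back-of-the-envelope only: every figure here is an assumption.
DRIVES = 500                 # drives in the array
WATTS_SPINNING = 10.0        # assumed draw per spinning drive
WATTS_IDLE = 1.0             # assumed draw per spun-down drive
ACTIVE_FRACTION = 0.25       # assumed share of drives in use at any time
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10         # assumed electricity price per kWh

always_on_kwh = DRIVES * WATTS_SPINNING * HOURS_PER_YEAR / 1000
maid_kwh = (DRIVES * ACTIVE_FRACTION * WATTS_SPINNING
            + DRIVES * (1 - ACTIVE_FRACTION) * WATTS_IDLE) * HOURS_PER_YEAR / 1000

print(f"Always spinning: {always_on_kwh:,.0f} kWh/yr, cost {always_on_kwh * PRICE_PER_KWH:,.0f}")
print(f"MAID-style:      {maid_kwh:,.0f} kWh/yr, cost {maid_kwh * PRICE_PER_KWH:,.0f}")
# ...and that is before counting the corresponding reduction in cooling load.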
MAID shouldn’t be seen as the panacea to all storage ills, and it is still a work in progress, particularly when it comes to assuring that the disks are used as efficiently as possible. Companies that deliver MAID, such as COPAN Systems, may have the hardware and base software in place, but they still need to address areas such as virtualisation, policy-based data movement and intelligent indexing. Most likely, these areas will be covered by the larger vendors (EMC, HDS) and through collaboration with specific software suppliers (Datacore, Njini, FAST Search and so on). Nor are tape, or even optical disk, dead – both of these offer significantly better energy profiles than racks of constantly spinning disks, though with the management overheads of media handling. All the same, just as all of these technologies have their place in the storage architecture, so should MAID.
03-03 – Mapping my way to organisation
Mapping my way to organisation
I’m probably showing my age by recalling that classic Rowan Atkinson after dinner speech, which starts: “Where … are we going?” After a number of increasingly convoluted questions, he terminates with: “And have we got a map?” Given the fact he was playing a crusty old buffer it may be fair to say that the sketch was timeless; as, funnily enough, was the advice he was giving.
It will come as no surprise to those who know me to say that I’m not among the most organised of souls. In Belbin terms my preference tends more towards the creative plant than the completer-finisher – though truth be told, this could be as much down to the relative ease with which one can arrive, sprinkle a few ideas and depart quickly, before the hard work of actually achieving something really begins. When it comes to knuckling down, I’m no shirker – but let’s just say easily distracted. Like Rimmer in Red Dwarf, I have been known to put as much effort (if not more) into perfectly crafted, multicoloured revision timetables as into doing any actual revision.
Always on the look-out for labour-saving devices, it can come as some surprise when one of them actually works. And so it was that I stumbled across mind mapping a few years ago, my first, jackdaw-like tendency to seize the opportunity to draw some more pretty pictures overwhelming any thought that they might actually be of help. After a preliminary stab, it was only when I listened to a couple of tapes by Michael Gelb that I really grasped the potential – and discipline – of mind mapping. With his smooth American tones I have the feeling that Mr Gelb could probably explain the art of fish filleting to seals quite convincingly, but whatever. I was hooked.
When I first dabbled in mind mapping, there was no real software tool that cut the mustard – which was fine, I had my multicoloured pens. I did try out a couple of packages at the time: there was MindMan for Windows, which at the time was little more than a drawing tool; there were also packages that enabled outlining of ideas – not least Microsoft Word, but also BrainStorm from David Tebbutt and programs like BrainForest for Palm – a product that I found so useful, it could well have seen me relying on the Palm platform to this day. Unsurprising for a flighty mind like mine however, I have never stuck with any single product, preferring to try new capabilities as time passed.
Mind maps can be used for a whole variety of things, but where I have found them the most useful is in getting my own life organised. I have recently been playing with the latest version of Mind Manager, version 7 (which happens to be the successor to MindMan), and I am rediscovering the strength of the core concept – the mind map – as a highly scalable graphical device. If (perish the thought) I suddenly remember a bunch of things I am supposed to be doing, I can add them to a map with relative ease, and use this as the basis for prioritisation. The same principle has applied when I have used maps for structuring reports or defining problem solving approaches: the map is a very efficient way to grow a corpus of information.
In practical terms, right now I have a complete picture of everything I’m supposed to be doing. There are a couple of features of the new product that really help me with this – the first is a single key combination to add priorities to map elements, and the second is a very intuitive map filter. If I just want to see priority 1 items I can do so, avoiding the more general clutter. It’s not perfect – it lacks the ability to review priorities in the light of what I should really be getting on with, rather than what I find most interesting – but it would take more than a software tool to enable that!
While I may be back on the hook, the question is – will I wriggle off again? The main weakness I have found with such products in the past is that they were great at visualising ideas, and outlining, but lacked capabilities when I wanted to grow them in new directions – such as moving from individual to even more complex maps, from personal to team organisation, or integrating better with the other tools I use to do my job. I understand these are the issues Mind Manager 7 is seeking to address, so the proof of the pudding will be how far I progress with the tool before I find it becoming a constraint.
To my mind, the “killer app” for mind mapping remains that it is a personal productivity device – I would advise against trying to roll it out (as a capability or as a tool) for anything broader, in the first instance. While I do believe that initially, individuals need to discover the potential of mind mapping for themselves, I can see the benefits of broader application, across the team or even the organisation – information can be presented in a map succinctly and readably even to the non-initiated, for example. Will companies become suddenly more profitable as a result of mind mapping? I doubt it, but then, in this increasingly socially networked world we live in, perhaps mind mapping techniques could offer at least part of the answer.
It’s always fun to speculate about greater things, but for myself, right now, there is only one question. Will I stick with it? To be honest I don’t know – but for the time being, it is exactly what I need.
03-03 – ODF vs OOXML – views from the coal face
ODF vs OOXML – views from the coal face
Standards Wars?
Long, long ago, in an industry far, far away, a number of large companies struggled to decide whether to adopt a document interchange format as a global standard. Debate was heated, not least because the format, OOXML, had been proposed by a single vendor, which already occupied an incumbent position. Some others claimed OOXML was bloated and buggy, proffering the ODF specification instead. Proponents of OOXML countered the claims of the ODF lobby by saying it was incomplete and underspecified.
And so it went on, until resolution was reached. The major camps reached agreement over key issues, the standard was ratified and finally, the IT world could return to normal.
Now, call me a cynical old hack who has probably spent a year too many in the industry, but I do believe we’ve seen it all before. Rather than use this rather privileged soapbox position to rant about commercial and other vested interests however, I’d like to take a different tack. Instead of asking, “Who cares?” in an offhand way, I thought it might be worth collating a bit of gen about how standards debates in general, and the ODF/OOXML debacle in particular, might influence the average corporate entity.
So, is it all just a fuss and kerfuffle created by IT vendors or is there some genuine requirement driving this? Based on your experiences in the past, what impact do you expect this latest locking of horns to have? We’d love to hear your views, and of course, we’ll feed what we discover straight back to you.
-
First, a general question about the importance of standards initiatives. How would you respond to each of the following statements, in your experience (1-5, 5=strongly agree):
a. Standards initiatives are an essential part of an evolving industry
b. Standards debates come and go, but have little impact on how IT is evolving
c. Most standards follow the vested interests of specific vendors
d. Standards efforts should take place outside the commercial realm
e. Standards are a harmless distraction, just let them get on with it
-
Turning specifically to the issue of document interchange formats, is there an initiative to adopt a specific standard within your organisation?
a. Yes, we have already adopted a standard
b. We are looking to adopt as soon as the appropriate standardisation bodies have agreed a standard
c. We are considering adoption, but there are no firm plans as yet
d. We may consider adopting a standard at some point in the future
e. We have no plans to formally adopt a specific standard for document interchange
f. Don’t know/not applicable
-
What do you see as the business drivers for adopting such standards (check all that apply)?
a. Our regulatory framework demands we adopt such a standard
b. We require this for accessibility reasons, e.g. for staff with disabilities
c. Standardisation can make us more productive as an organisation
d. We require this to enable interchange with our suppliers and customers
-
Specifically, do you have any plans to adopt any of the following interchange formats:
a. Open Document Format, ODF
b. Office Open XML, OOXML
c. Other (please state)
-
Finally, have you already adopted a corporate standard (de facto or otherwise) for office automation applications?
a. MS-Office 2007
b. MS-Office (older versions)
c. OpenOffice
d. Other (please state)
-
Have you any intention or plans to move away from your current office platform?
a. Yes, to the most recent version of MS-Office (2007)
b. Yes, to a non-Microsoft product such as OpenOffice
c. No
d. Don’t know
-
Usual geog, ind, size
03-03 – PPC vs Blackberry – second thoughts
PPC vs Blackberry – second thoughts
Certain events can cause the need for a quick decision: it is strange how a single incident can completely upturn one’s perspective on things. It all started quite innocuously, as I was looking into changing mobile service providers. On offer was a Blackberry as part of a very attractive deal – even more attractive given that I could still use the new SIM in my existing, unlocked PPC device. Furthermore the Blackberry on offer was the 6800 – that’s the one which comes with all the bells and whistles such as an MP3 player – this appealed greatly to my inner magpie. As I saw it, this scenario offered the best of both worlds, enabling me to continue my “evaluation” indefinitely and get a sexy new device in the bargain.
But then, disaster struck. As I opened the cupboard in my office, the door caught the cable on my Pocket PC and caused it to drop to the floor, landing awkwardly and, to my utter horror, wrenching the charging socket off the device. It was the gadget equivalent of stepping off the pavement and breaking an ankle – and it immediately forced a complete reassessment of what had, until then, been a largely academic exercise. OK, this isn’t quite the assassination of Archduke Ferdinand, but you get the picture. What, I asked myself, were the deciding characteristics between the two devices for the business user? What would I miss, or couldn’t I live without – and what would I be prepared to compromise on?
The factor very much in the favour of the Blackberry is its usability when it comes to mobile email. I receive – and this is more of a curse than a boast – about a hundred emails a day, many of which require some kind of action. Having worked my way around the BB’s shortcut keys, I have reached the point where I can read, respond, file and delete emails quite speedily. The result is that, by the end of the day, I have managed to stay on top of the pile.
There’s a second “positive” to come out of this, which I’ve only just started to appreciate: the interactive nature of push email. I’ve long been impressed, if not slightly confounded, by colleagues who seem to be able to respond to emails as soon as I’ve sent them, leaving me floundering when I’d hoped for a window of time before they got back to me. It’s a bit like playing multiple games of tennis at once – but some opponents are just too damn efficient at hitting the balls back. I now understand that there are a number of factors at play – not only are such people impressively organised, but they’ve also got into the “push email groove” which is an inevitable consequence of getting onto the front foot with their incoming messages.
The downside, of course, is the reason why I had never enabled push email on my Pocket PC: it is always, inescapably there. The Blackberry has a clever feature whereby it can be set to vibrate when it is in the belt holster: the end result could be likened to a combination of electric shock therapy and Chinese water torture. Every now and then – counted in minutes – when a piece of email comes in, it is acknowledged by a discomfiting buzz to the hip. When sitting in a meeting, one is tempted to get the device out and take a look, which can be slightly off-putting for other participants – a bit like answering the phone halfway through talking to someone else. If left, the device will bide its time before the next email causes it to buzz again. Try doing that for more than a half-hour stretch and see if you don’t go mad.
There is a second issue with the Blackberry’s email functionality, namely that what happens on the Blackberry doesn’t always synchronize with what is on the server. The issue comes when making changes in the Outlook client on the desktop – if, say, an email has already appeared on the Blackberry and then it is moved to a different folder, it doesn’t always follow suit on the device. This is frustrating – the workaround is to copy all emails into a temporary offline folder in Outlook, delete them from the Blackberry and then re-instate them – but that’s hardly a long-term solution.
All this being said, when it comes to simple, effective management of incoming emails, the Blackberry wins hands down. Even with “push” enabled, the Pocket PC device I’ve been using has a clunkier user interface, with more key presses required and less predictive functionality (for example, the Blackberry will suggest a folder to file an email, based on previous choices). It is bizarre but true that I have had more problems downloading emails to the Pocket PC from Exchange than synchronising the Blackberry – indeed, to my mind this is unconscionable. I don’t care how clever the protocol between device and server is – if it doesn’t do the download, any clever features are pointless.
The other vital area where the Blackberry is unquestionably stronger is in making phone calls. The best way I can think of putting this is that the Pocket PC acts like a computer, whereas the Blackberry acts like a device – the former requires me to keep tabs on running applications and ensure there is enough RAM free, whereas the latter just works. Dial a number, make a call, it’s not that hard – but the Pocket PC seems to make it so. These two things, put together, have swung the pendulum largely in the direction of the Blackberry: even with the synch issue and death-by-vibration, I have had to face the fact that, for business use at least, the Blackberry provides the two essential functions better than I could expect on the Pocket PC. That’s not to say that I won’t miss the additional capabilities of the latter, but on this occasion at least, productivity will have to win over power.
03-03 – Security as an inhibitor?
Security as an inhibitor?
Security as an inhibitor?
In the past there have been moves by the IT industry to promote security as a business enabler. Looking round many organisations, this is clearly poppycock – but what of security as an inhibitor?
The lack of security is inhibiting business growth
What to do? It’s all very well protecting against threats, be they internal or external. But, if there are fears about weaknesses in the existing infrastructure, it is unlikely that these can be solved with some product or another.
If we want a joined up approach to security, this needs to have a centre, a locus of control. Even if it is not centralised, it still needs co-ordination.
We need to move the debate forward.
03-03 – The Days Must Be Getting Shorter
The Days Must Be Getting Shorter
The days must be getting shorter – or perhaps we are getting older – but it doesn’t seem so long since email was still considered to be just an option, certainly in the business environment. Like many technologies that have grown by individual demand rather than organisational imposition, email has crept up on corporations by stealth, becoming an essential tool before anyone cared to notice.
Which is a shame, because it’s pretty rubbish really, in security terms at least. Email is the IT equivalent of the ferret – useful in its place but never quite to be trusted, capable of the unexpected even when treated with respect. It (email, not the ferret) was invented in a time when computers existed in closed environments, and it has thus far proved itself ill-prepared to cope with the brave new, yet dangerous, interconnected world.
But still, we use email. We have no choice, for so does everybody else, even to pass messages around the office rather than standing up and asking questions over the tops of the partitions. Of course email enables a great many things, but equally, it needs to be handled with care. It’s not just that it can act as a carrier for viruses, Trojans and other malicious content (malware): we have the issues of unsolicited messages (that can be connected to phishing attacks), unacceptable use (think: sending dodgy images to one’s mates) and fraudulent communications, as it is all too easy to set up an email account in someone else’s name.
While there are technologies to help protect email from the worst of the threats, we need to co-ordinate such deployments with our own efforts – things like awareness training, and the simplest of protective measures (setting a PIN code on your mobile device, for example), go a long way towards reducing the risks. If you want to know more, take a look at our email security primer, which collates the best of our experience from the field. If you have any stories to share, do let us know.
03-03 – What characterises a service?
What characterises a service?
What characterises a service?
A couple of weeks ago I was party to another event, this time hosted by IBM. At short notice I was asked to facilitate a session on defining services, which was interesting in the extreme as it very quickly became clear in the earlier sessions that there was no clear definition of what a service was – particularly as there were two types of people in the room – business analysts and technical architects.
So, I decided to take a different tack. Rather than trying to fix a definition of service, I thought we would go round the room and ask what characterised a “good” service. Here’s what we came up with:
-
Value – the benefits of accessing a service should outweigh its costs
-
Reusability – it should be possible to access the service repeatedly, with the same level of interaction and service quality
-
Meaningfulness – it should be possible to describe the service in clear, relevant terms
-
Autonomy – the service should be cohesive, i.e. clearly bounded
-
Independence – the service should also minimise dependencies with other services
-
Contract – the service should offer its own guarantees in terms of what it delivers, and how: such terms should be subject to prior acceptance by the service user.
-
Uniqueness – the service should minimise overlaps with other services
With hindsight, there is possibly some honing that could be done with the above list – the difference between “Autonomy” and “Independence” for example, is not all that clear. What was interesting however, was that even as the debate raged around what a service should look like, there was relatively little controversy about what separated the wheat from the chaff. For organisations looking to develop their own service strategies, this would appear to be a good place to start.
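By way of illustration only (this wasn’t discussed at the event), here’s how a few of those characteristics might be written down as a simple service description. The service name, fields and figures below are entirely hypothetical – a quick sketch in Python rather than any recommended schema:

# Hypothetical sketch: a service description capturing some of the characteristics
# above (Meaningfulness, Contract, Value, Independence). Names and values are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceContract:
    name: str                    # Meaningfulness: clear, relevant description
    description: str
    inputs: List[str]            # what the consumer must supply
    outputs: List[str]           # what the service guarantees to return
    max_response_ms: int         # Contract: quality-of-service terms agreed up front
    availability_pct: float
    cost_per_call: float         # Value: the benefits should outweigh this
    dependencies: List[str] = field(default_factory=list)  # Independence: keep this list short

customer_lookup = ServiceContract(
    name="CustomerLookup",
    description="Returns core customer details for a given customer ID",
    inputs=["customer_id"],
    outputs=["name", "address", "status"],
    max_response_ms=500,
    availability_pct=99.5,
    cost_per_call=0.01,
    dependencies=[],             # Autonomy: clearly bounded, no hidden couplings
)

The point is less the code than the discipline: if a service cannot be described in terms like these, it probably fails the “meaningfulness” test before anything else.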
03-03 – What on earth has IT got to do with gardening?
What on earth has IT got to do with gardening?
What on earth has IT got to do with gardening?
When Nicholas Carr posited “IT doesn’t matter” in 2003, he was making the point that current technologies are commoditising and therefore available to all. While this might be true, it is dependent on everything working correctly. There is another side to IT however; in many organisations, IT is more a bottleneck than a strategic tool, as layer upon layer of complex legacy has resulted in environments that restrict, rather than enable business activities. This leads to the question - if IT isn’t actually helping the business, what exactly is it there for?
While the ‘what’ of IT may be a commodity, the ‘how’ of it, specifically how it supports and sustains business activities and functions, absolutely is not. The killer is that business value can only come through the orchestration of whole ecosystems of technologies and service providers. As these become more and more complex, the focus needs to move away from the “what” and towards the “how”.
It is this emphasis, towards a sustainable model for technology delivery that considers the whole ecosystem of business and IT, and which works across the value chain incorporating technology suppliers, systems integrators, outsourcing and service provision, that led us to the central themes for our new book. Traditionally, technology funding separates projects from maintenance by an iron curtain: once a project is complete, there is little consideration of how to fund the enhancements needed to ensure it continues to deliver value. The assumption is that, once complete, deployed technologies will need only minor tinkering.
This assumption is fundamentally flawed. There has been much talk in technology circles about considering large scale technology projects in the same way as construction projects – building bridges or cathedrals, for example. While there is plenty to be gained from such parallels, the two paradigms diverge quite fundamentally once the project is complete. Apart from the ongoing checks that all is in order and the occasional coat of paint, a bridge is fixed in time as a monument to its builders, whereas for IT projects, the journey is only just beginning.
Like, for example, a garden. Gardens of all sizes benefit from being given due consideration at the outset, then being planned, architected and laid out as though for all time. You don’t have to be a gardener however, to know that things will start to change from day one – individual plants will grow and evolve of their own accord, even as they thrive and fall back with the seasons. Gardening is about so much more than just digging the ground and planting the seeds: it’s an ongoing, sustainable process of nurturing and pruning, allowing things to grow and cutting them, all the while ensuring that they continue to meet their central purpose – to carry fruit, provide a dash of colour or offer an attachment point for the hammock.
If the goal is sustainability, we believe there is as much to be gained by considering technology delivery as a garden: in doing so, we are quite deliberately setting our stall apart from the builders of cathedrals and bridges. Our theme is sustainable alignment between business and IT, and this needs more than just the project management skills or architectural focus that we can learn from construction projects. In addition we need to maintain excellent relationships with strategic suppliers, to focus constantly on the relationship between IT and business (not just at project kick-off) and to consider the whole ecosystem as a portfolio of value-adding capabilities, not just a set of discrete systems.
So, what do we actually mean by alignment, or by adding value to the business? In practice, we see three key focuses, namely improving operational business efficiency and effectiveness, managing risk and compliance, and supporting innovation. Each brings the same challenges - not least to balance investments to deliver the right mix of short-term and longer-term paybacks. Sustainability is indeed the key: in our research for the book “The Technology Garden,” we learned a great deal talking to business executives, CIOs and senior IT management and front line technologists, about how organisations can successfully achieve and sustain alignment between IT and the business.
What we ended up with could colloquially be considered “the six habits of highly effective IT.” Some will come as no surprise - the need for good IT governance, for example, or the effective practice of enterprise architecture. Others, like those mentioned earlier, can perhaps be seen as common sense – but only in hindsight. The difficulties come in knowing where to start, so we incorporate a roadmap for organisations that want to start down the track, together with an assessment model to help organisations understand where they are starting.
IT is not going to get any simpler: indeed it will become anything but, as applications become ever more distributed through use of service oriented architecture, as infrastructure benefits from virtualisation technologies and as access mechanisms become ever more mobile. However, we believe that the principles we espouse are timeless. IT matters a great deal to many companies, and as complexity grows, the need for principles such as those we espouse in The Technology Garden will only grow in importance.
April 2008
04-04 – Microsoft Bloat, Green and the Vista opportunity
Microsoft Bloat, Green and the Vista opportunity
Microsoft Bloat, Green and the Vista opportunity
Microsoft’s always going to have a hard time presenting a convincing green story for desktop computing. It’s not that the story itself is unsound: power-saving features are useful as far as they go, and Microsoft as a company is keen to be a good corporate citizen. The elephant in the room however may be summed up in a single, horrible word – bloat.
Microsoft’s story has been a fascinating one, one of the great success stories of the IT industry. There have been several key bets made along the way, which Messrs Gates and Ballmer have stuck to doggedly. This is not the place for a full précis of the Microsoft story, but it’s worth highlighting one of the bets: Moore’s Law, the principle (to paraphrase) that processor capabilities would continue to double ad infinitum.
In practice, this has been characterised by the long-standing truth well known by anyone who has spent the past couple of decades in the industry: that if you want to take advantage of the latest Microsoft software, you’ll have to upgrade your machine. The conversation has repeated with the same regularity as Moore’s Law itself – the bemoaning of how slow everything is running, and the wry nod from those who have seen it before.
Of course, this self-fulfilling prophecy has been of huge benefit to both Microsoft and its hardware partners – companies such as Intel. I very much doubt whether the Wintel alliance was deliberately stuffing software into the operating system just in order to shift more processor units, but one thing’s for sure – neither side was calling ‘stop’. We have even lived through the office bloatware wars, where Microsoft, Lotus and WordPerfect duked it out to see who could out-bloat the competition. (Microsoft won, as we all know)
The attitude throughout from Microsoft – and I know this very well, having asked them on various occasions – has been, “If you want to take advantage of the latest innovations, you’ll need to use the latest technology.” I remember a very public debate I had with Martin Taylor, Microsoft’s ill-fated “Get the Facts” General Manager where he told me that most desktop users wanted far more than just email and word processing. It wasn’t true then, and it isn’t true now.
And so, to Green. While Microsoft might not have been underhand in promoting the “new and improved” – it’s a technology company, after all – neither can the company claim to be particularly green. Fundamental to this is the fact that the power consumption of a device is only a small percentage of its overall carbon footprint. Bottom line: replacing or upgrading a machine undermines any benefits that can be had from ‘new’ power saving features.
What can Microsoft do about it? Well, perhaps that operating system that has been derided as the most bloated of the lot – Windows Vista – could hold the key. At the heart of Windows Vista lies a perfectly sound operating system. There are two issues however – the first is the disk space taken up by installed, never-to-be-used apps; and the second is the memory requirements of unnecessary run-time services. Surely it should not be beyond the ken of the bright sparks in Redmond to bring out their own tools to monitor what’s really necessary, and strip out anything that isn’t?
Sounds simple, doesn’t it? Trouble is, it goes right to the heart of Microsoft’s core philosophy, and fear – that people might stop buying its software if there is insufficient “new and improved” about it. That’s a fair worry – but it’s happening anyway, as we see Microsoft having to extend support (yet again, with hastily invented acronyms no less) for Windows XP. The same principles could be applied to Microsoft Office – which has already seen a usability overhaul with 2007 – so now, how about a performance boost? What additional benefits could be achieved by offloading tasks to Windows Live services? Etc, etc, the list goes on.
It’s a changing world we are in. While Moore’s Law may continue to apply, many organisations are finding they have more than enough processor power on their desktops to do their day to day work. If Microsoft is really serious about greening the desktop, it has an opportunity to use its position to drive some fundamental changes. The question is, does it have the strength of character to do so? The alternative may be business as usual for Microsoft, but it certainly won’t be green.
04-09 – Securing SOA
Securing SOA
Securing SOA
Given that I’m currently at IBM’s SOA Impact conference in Las Vegas, I thought it would be appropriate to post a blog about the security aspects of SOA (Service Oriented Architecture). Judging by the amount of coverage given to security in the keynotes and analyst sessions, one might be led to think that security was not an issue; meanwhile however, one only needs a cursory understanding of Murphy’s Law to know that security should not be taken for granted, in SOA or elsewhere.
So, what are the risks? The most obvious of these lie in areas such as confidentiality breach and service compromise. At its heart, SOA is about developing and delivering distributed computer systems - that is, software applications and components that communicate across a network. The potential for a confidentiality or service breach is therefore quite high, particularly if communications take place on the wrong side of the corporate firewall.
Risks such as these are well known and frequently documented, so I shall say no more here. Meanwhile, however, there are the risks involved in the development process itself. There has been talk about the “insider threat” in terms of the users of computer systems – people who, through stupidity as much as malice, can cause a security breach. Less consideration has been paid thus far to the developers of said systems. In SOA development environments there are a number of risky areas – not least, of course, the code itself, which needs to be constructed in such a way as to minimise the potential for compromise.
There is also the data involved in the development process. SOA requires interface definitions to be published, but such interface data may well contain information that should be kept confidential – for example, specific business rules or even the need for them. And finally we have the security of the development process itself. A number of drivers, including the continued reliance on subcontracted development resources, require the vetting and monitoring of developers, such that the resulting code has not been written to be compromised.
What’s all this got to do with SOA? Truth be told, some of these aspects are true for all kinds of development, not just SOA projects. What SOA brings is the increased risk due to the distributed nature of the resulting software. In traditional development, the resulting application silos would likely exist behind the firewall, but with SOA there is no such guarantee. As many organisations start to move down the SOA track, they will need to ensure that they have the security bases covered in advance of application delivery. Should security be treated as an afterthought, as it has so often been treated in the past, it may already be too late.
04-09 – The Future of IT security
The Future of IT security
“What a nice, easy topic,” I thought to myself as I read through the email from those good people at Computing. “The Future of IT security” – how simple it would be to write something focused, straight and to the point, given that remit.
Nothing could be further from the truth, of course. There’s a quite straightforward way to write such a piece, along the lines of, “There will be threats, and they will be big threats. Organisations need to get their act together, understand the risks and implement mitigating strategies if they want to keep ahead of the bad guys.” Trouble is, such articles preach to the converted. There are three categories of organization: those who get security and who have ongoing risk management activities in place; those who get it but struggle to prioritise and implement appropriate measures; and those who think that if they keep their heads down, the bad stuff will pass them by.
For most, the future of IT security will be much like the present. We know for example that the bad guys are now following the money rather than the kudos: sure, there will always be people out there who prefer to call themselves ‘Cobra’ or ‘Hummerkazi’ who spend their waking hours decoding encryption algorithms and looking for back doors into telephone networks. Meanwhile, there is an evolving economy building around the market value of credit card details and the ability to launch denial of service bots from unsuspecting (and generally poorly configured) home computers. While this needs to be taken seriously, to be honest it doesn’t look awfully different to last year, and neither will it change much in the next.
Meanwhile, we have the risks caused by our own employees, be they through malice or stupidity. Strangely enough, internal staff have always posed the biggest threat to computer systems, even before product categories such as “data leakage prevention” were posited (ain’t it funny how the pundits only get round to recognizing a problem on the back of IT companies happening to have developed a solution, but that’s not important right now). I would suggest that, unless we all become RFID-tagged and have probes inserted into our brains to read our darkest thoughts, we shall continue to find it difficult to counter what some term “the insider threat.” And, thankfully, my vision of the future of IT security does not include any such mechanisms.
So, what does it include? As a starting point to think about the future of IT security, it’s worth reflecting on the future of IT as a whole. There are a number of trends that are driving how organizations are developing, deploying and operating their IT systems that will have a direct impact on security, including:
-
Outsourcing and offshoring. The offshore resourcing market continues to mature, with Indian companies such as Wipro setting up in the UK and traditionally local companies continuing to expand their offshore operations. Risks range from difficulties in vetting offshore staff to the challenges of maintaining business information at offshore locations.
-
Hosting and Software as a Service (SaaS). We are not yet seeing wholesale adoption of the SaaS model, with reason, as it is still maturing in areas such as data integration with other corporate systems. Risks are similar to those of outsourcing which, from the data’s perspective, is what it is.
-
Service Oriented Architectures and Web 2.0. Both of these topic areas share the risks of being distributed systems architectures which may extend beyond the corporate firewall. As well as being open to confidentiality breach and denial of service, there are also questions around the publishing of interfaces onto corporate systems. In some instances, the interface itself may be company confidential.
-
Virtualisation and data centre automation. Virtualisation offers a quick win for many organizations, which can consolidate applications down onto a reduced set of physical servers (reductions of up to 80% are touted; we have seen 60% reductions in physical servers). While the centralized management of preconfigured virtual servers can reduce security risks, equally there is the risk of virtual server proliferation, and indeed the potential for mismanagement which could leave virtual servers more open to breach.
-
Mobility and unified communications (UC). While UC may currently be an oxymoron, vendors are working hard to deliver on the idea of enabling us to communicate with each other as simply and seamlessly as possible. As with any technology, UC is a two-edged sword and it is not hard to think up scenarios for its exploitation, for example UC-spam calls.
-
Social networking. We are already seeing some of the security challenges that social networking can pose, in terms of privacy and identity issues for example. There are other risks too: nobody has yet (to our knowledge) pulled together composite identities of individuals across social networking sites, but this will no doubt come. While these are personal security issues on the surface, they have corporate implications, for example in terms of duty of care.
What this list demonstrates is that “continued vigilance” is only part of the answer. So too is risk management: a good process for which, starting with business objectives and considering IT security in that context, should be a fundamental part of any organisation’s security strategy. But neither is risk management going to be enough, by itself. If there’s one thing that all of the above trends share, it is that they affect all parts of the IT architecture. These are not risks that can be mitigated by tactically acquiring some appliance, and plonking it into the server room.
So, what to do? If IT security is to be characterized by having a far-reaching impact, then we need to ensure the roles responsible for IT security have a similarly far-reaching remit. We are already seeing this in some organisations: HSBC, for example, has combined its IT security function with its business fraud function, enabling it to deal with both the business and IT issues from the same point.
Where does this leave the IT security industry? I have often characterized this as a fire extinguisher industry, which makes sense, if all people are doing is fighting fires. Challenges such as those above will require us to move to more of a prevention-based approach rather than a series of poorly funded coping strategies. And frankly, given that the trends are happening whether organizations want them to or not, the sooner we can get there the better.
July 2008
07-11 – New Wine, Old Skins?
New Wine, Old Skins?
New Wine, Old Skins?
Managing the risks associated with the new wave of technology innovation
What an exciting period of technology we are in. Following the millennium bug debacle and the dot-com bust, it almost seemed that the technology wave was slowing down, leading some pundits to suggest that IT was becoming a pure commodity, adding little business value. From this standing start, IT has become exciting again: as we discuss below, there are a number of developments that are significantly impacting organisations large and small. However, technology is a sword that can cut both ways. The purpose of this article is to consider what impact new technologies are having on our business lives, and what we might do to address the inherent risks.
To start with, we need to revisit what we mean by risk. There are many types of risk, and while IT may have some specific risks of its own (technical risks associated with incompatible or poorly patched software, for example), these often translate into risks that are directly felt by the business, such as:
-
Financial risk – where business is lost, or unnecessary costs are incurred as business users are unable to get on with the job.
-
Compliance risk – where the organisation becomes liable with respect to regulation or corporate standards.
-
Reputational and brand risk – where an organisation faces bad press, with potential loss of business as a result.
The risk landscape is more of a jungle than a green field, however. Several examples of high-impact failures have not had the effect that might have been expected – the Wifi blunder that caused the leak (or to be frank, the outright flood – 45 million records were taken) of credit card data from TK-Maxx may have resulted in $128M of losses to the company, but the retail organisation has not seen any significant impact on its trade. Meanwhile, the lessons from the HMRC incident, in which two CDs of child benefit records (25 million of them) went missing, are still to be learned – to quote UK Information Commissioner Richard Thomas in April 2008, “It is particularly disappointing that the HMRC breach has not prevented other unacceptable security breaches from occurring.”
It is worth mentioning both of these examples to illustrate that the discipline known as ‘risk management’ is far more complex than that simple term might imply. Such complexities include understanding what exactly is meant by risk, where the real likelihood of business damage is to be felt, and perhaps most importantly, the fact that no organisation, large or small, is operating in isolation from what is a much larger, rather shaky ecosystem. It is against this background that the smaller business needs to decide its strategy for how it uses IT.
Despite the somewhat gloomy picture painted by examples such as these, there are still a great many potential benefits to be had from how technology is evolving. Right now a number of trends are driving considerable change across many businesses, such as:
-
Broadband mobile technologies such as 3G wireless are becoming more affordable and therefore prevalent; these run alongside higher bandwidth fixed broadband to homes and offices, enabling a wider variety of home working and remote working practices.
-
The Internet is increasingly becoming a platform for collaboration, with consumer-oriented social networking sites such as Facebook and Myspace sitting alongside more business-oriented sites like LinkedIn and Ecademy.
-
We are also seeing a resurgence in Web-based applications and online hosting facilities, driven by such organisations as Google, Microsoft and Amazon.
Behind all of this, no doubt catalysed by such advances but equally driving them, is a broader demographic change across business, towards increased collaboration and openness. In general terms, we are seeing organisations of all sizes increasingly looking to move away from the traditional employer-employee hierarchical model towards more of a network-based approach where organisations collaborate on the creation and delivery of business services.
In retail this may be a more familiar story, where business to business (B2B) and supply chain activities are seeing more of an evolution than a revolution. In other sectors, such as the research side of pharmaceuticals, the collaborative approach is far newer. While smaller organisations are more likely to be brought into a collaboration rather than acting as the hub, there are still a number of new business opportunities to be had, aided and abetted by the use of IT. New opportunity yields new risks, not least because existing systems and processes were unlikely to have been designed with the future in mind.
Keeping up with technology change, and integrating the new with the old, is a massive challenge across the organisations we research. At a recent security event in London, a number of Chief Information Security Officers (CISOs) from different verticals said that one of the biggest problems they faced was how the new risks caused by new applications and infrastructure now had to be treated in the context of increased business collaboration.
We therefore need to consider risk both in terms of the changes we are driving, and the changes in which we are mere participants. In smaller organisations, the challenge is further exacerbated by the fact that the skills or the bandwidth are less likely to exist in-house. So, what to do? Given that there are no easy answers, how should such businesses put their best foot forward and reap the rewards of the business opportunities while still addressing the risks, so they don’t come unstuck in the process? From our research and conversations with a wide variety of organisations we have distilled a number of guidelines, as follows.
Make IT work anywhere. There are two technology-related consequences of the open, collaborative world we are moving towards. One is that much activity is taking place outside the organisation’s IT boundary, and a two-tier system which treats connected devices and applications differently depending on which side of the firewall they sit is becoming increasingly untenable. The second concerns the ‘who’ rather than the ‘what’ – business partners, subcontractors, consultants and so on are requiring access to corporate systems and data. This may bring up a number of questions around data protection, vetting and so on, but it is better to treat these up front, than sleep walk into later problems.
Manage access by identity. The increased fragmentation of the corporate boundary makes it necessary to implement other control mechanisms than just, “he’s inside the building, so he must be OK.” Increasing attention is turning towards the management of individual identities, and provisioning access and facilities based on role. This does not translate into buying an identity management system, as realistically, this is one area where technology is lagging behind the need. A good start however is to start documenting who has access to which systems and data, whether they are employees or partners. If you find this a challenge, then you may already be at risk.
Classify information by risk. The traditional approach to risk management involves gaining an understanding of risks, then treating them accordingly. Given what we have already seen however, this is akin to mapping the jungle: it is still important but it can only be one part of what is done. At the same time, activities around reviewing what information exists, how important it is to the organisation and how well it is protected, provides a different viewpoint which enables the subset of high-risk information to be treated as a priority. Some organisations are adopting a traffic light system as a starting point: what information do you have that would cause the business to fail if it were compromised?
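To make the traffic light idea concrete, a sketch along the following lines may help; the asset names and handling rules below are invented for illustration, not a recommendation:

# Hypothetical starting point for a "traffic light" information register:
# each asset is classified by the damage its compromise would cause, with
# a handling rule attached. All names and rules are illustrative only.
HANDLING_RULES = {
    "red":   "Encrypt at rest and in transit; access on a named-individual basis; review quarterly",
    "amber": "Encrypt on portable media; access by role; review annually",
    "green": "Standard access controls apply",
}

information_assets = {
    "customer_card_details": "red",     # the business could fail if this were compromised
    "payroll_records":       "red",
    "supplier_contracts":    "amber",
    "marketing_collateral":  "green",
}

def handling_for(asset: str) -> str:
    """Look up the handling rule for a given information asset."""
    classification = information_assets.get(asset, "red")  # default to the most cautious option
    return f"{asset} [{classification}]: {HANDLING_RULES[classification]}"

for asset in information_assets:
    print(handling_for(asset))

Even a register as crude as this forces the right question to be asked: which information, if lost or leaked, would actually hurt the business?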
Bring in the experts. The brave new world of online collaboration can also be applied to how we understand and reduce risk. It will always be necessary to run and maintain some security capabilities in house, for example, but it can equally make sense to look to how security can be delivered ‘in the cloud’. Equally, there is no honour in ignoring security risks, nor shame in bringing in third parties who have a better grasp of how IT is moving forward. From our research we understand that the key to successful use of third party services is in the decision making process: adopting a business perspective enables more light to be shed on what skills add significant business value and should be kept in-house, compared to what should be sourced from third parties – the same applies to risk management.
More best practice, less policy. While policy is important, it often focuses on the ‘what’ rather than the ‘how’. Best practice is more about the latter – and in this context, we’re thinking about how risk management best practice applies to all aspects of IT management and delivery. This often boils down to review checklists – has a given application been security tested before it is deployed, for example, or has the code been peer reviewed? Has the consultant who has been brought in for a day, been cleared to access the information he or she requires? It is all too easy for what are often very simple checks to be forgotten; far simpler is to ensure that the default behaviour involves running through the appropriate checklist.
Guidelines such as these are not exhaustive, but they offer a good starting point for thinking about some of the challenges. At the same time, they should not distract from one of the key principles of risk management: that is, to be vigilant. An uncomfortable truth about IT-related risk is that whatever new innovations the technology world may come up with, each will in some way be exploitable by a spectrum of ‘bad guys’ – at one end of the scale there are the relatively innocuous practices of building a picture of a prospective client by browsing around the Internet, while at the other, seasoned hackers are using leading-edge technologies to launch focused attacks on high-profile individuals. Perhaps this yields the biggest lesson of them all: that, even if it ever was, doing nothing and hoping the risks will go away is simply no longer an option.
September 2008
09-25 – THERE IS MORE TO STORAGE THAN TECHNOLOGY ALONE
THERE IS MORE TO STORAGE THAN TECHNOLOGY ALONE
THERE IS MORE TO STORAGE THAN TECHNOLOGY ALONE
In principle, things are clear enough about storage efficiency. Cut through a lot of the rhetoric about Green IT in general, and Green Storage in particular, and we see a picture of reducing power, cutting costs and removing operational overheads. In some ways it was ever thus, but from San Diego to London’s Docklands, data centres are coming up against some hard stops. The problem has been building up for a while: “There’s no point in such-and-such vendor coming and telling me about their latest blade servers,” said one CIO to me a year or so ago, “when I can’t even get enough power into the room to run the things. I’ve tried to tell them but they won’t listen.”
The message is loud and clear. Having in many cases run out of capacity, IT needs efficient servers and efficient storage, and vendors are queuing up to demonstrate their own credentials. For a start, we are seeing a most welcome level of competition between vendors in terms of the cost-effectiveness and power efficiency of their base kit. In addition, there are a number of specific technologies that are enabling better use of storage hardware – we expect spin-down of disks to become a more general feature of the larger arrays for example, and solid state disks will likely also play a part in the storage architecture.
But there is far more to efficiency than a power-optimised hardware platform. Making the best, most efficient use of storage assets needs software that can manage both the hardware platform and the data that resides there. There is a plethora of options – data classification, migration, archiving, indexing and de-duplication – all of which play their part. We also have storage virtualisation – this has seen limited success thus far, but looks set to achieve wider adoption over the next couple of years; we also expect more widespread use of provisioning technologies, such that storage can be allocated as it is required from a wider pool of capacity.
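As an aside, de-duplication is perhaps the easiest of these to picture: store each unique chunk of data once, keyed by a hash of its contents, and keep references rather than copies. The following sketch is illustrative only, and ignores everything a real array has to worry about (chunk boundaries, metadata, integrity checking):

# Illustrative only: content-addressed de-duplication in a few lines.
# Each unique block is stored once, keyed by a hash of its contents;
# files are just lists of references to those blocks.
import hashlib

CHUNK_SIZE = 4096
store = {}   # hash -> chunk bytes (each unique chunk stored exactly once)

def write_file(data: bytes) -> list:
    """Split data into fixed-size chunks and return a list of chunk hashes."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # a duplicate chunk costs nothing extra
        refs.append(digest)
    return refs

def read_file(refs: list) -> bytes:
    """Reassemble a file from its chunk references."""
    return b"".join(store[digest] for digest in refs)

# Two near-identical files end up sharing most of their chunks in the store.
file_a = write_file(b"A" * 10000)
file_b = write_file(b"A" * 8192 + b"B" * 1808)
print(len(store), "unique chunks stored")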
Such technologies are all well and good – but what’s preventing organisations from adopting them? This is very much a carrot and stick question. When a business case for a new technology is being put together, either one must present a compelling explanation of business gains (the carrot), or an equally compelling reason why the procurement is unavoidable (the stick). Despite best intentions and long-term benefits, efficiency-oriented technologies must fight for priority with all the other possible procurements an organisation might wish to consider at any given moment. And even if an efficiency-related technology may look attractive in principle, it then needs to be deployed in what is unlikely to be a green field (i.e. brand new) site.
On this last point, it has repeatedly been shown that without the right people and processes in place, even the best technology in the world can go to waste. This is not just about getting it right at the top: the rank and file have a major role to play when it comes to efficiency improvements (and furthermore, there is generally the will from front line staff to embrace such changes, particularly if there is a perceived Green benefit). Where organisations sometimes fall down is in ensuring that employees are fully apprised of what is planned, at which point front line staff can feel ignored, and may offer up avoidable, unnecessary resistance as a result.
All in all, of course it is worth looking into what technical innovations exist that may make for a more efficient storage architecture. But there is more to storage than technology alone. By considering efficient storage within the wider context of more efficient IT service delivery, organisations are more likely to reap its benefits.
Jon Collins, Service Director, Freeform Dynamics leads a panel on Architecting for Effectiveness and Efficiency at Storage Expo 2008 www.storage-expo.com
December 2008
12-11 – Does Agile Development have a place in your organisation?
Does Agile Development have a place in your organisation?
‘Agile’ is such an uplifting, positive word, it’s no wonder everybody wants to use it. Manufacturers want to be agile, businesses want to be agile, and indeed, IT wants to be agile. In this latter context the term has been adopted to define a group of software development approaches that are, of course, agile. But beyond a general warm feeling about building more flexible systems faster, what exactly does agile development bring to the party?
We should start by being a bit more precise about Agile Development – but not too precise. Historically speaking, software development approaches have tended to fall into one of two camps – ‘structured’ and ‘the rest’. Those familiar with the developer side of the house will recognise the more structured approaches, which generally involve:
-
Starting off with a requirements document and/or functional specification
-
Doing some kind of software design
-
Programming against the design
-
Integrating and testing, first in parts then as a whole
Fair enough, perhaps. But, the detractors say, such approaches are far too slow and unwieldy: the rationale is that by the time the resulting two-year development cycles are complete, the world (and indeed, the system requirements) will likely have moved on. And so has evolved a counter-culture of alternative, ‘agile’ approaches, which tend to share a number of characteristics – proactive liaison with users, and frequent delivery of working software, being among them.
The value of agile approaches is clear in principle. When we undertook some research (http://www.freeformdynamics.com/fullarticle.asp?aid=460) into agile development earlier this year, we found there were certain types of project that would benefit from Agile approaches over and above traditional, structured approaches – notably those which have faster changing requirements, and for which rapid delivery is a priority.
All the same, there is no massive perceived difference in the quality of results from Agile projects, nor their timeliness, when compared to structured development projects. Rather, benefits are articulated in terms of increased collaboration within development teams, better awareness of timescales and indeed, more highly motivated developers.
What of the downsides? For a start, agile approaches are not just something you can pick up and run with. There is little room for error: timescales are divided into variously named short periods (e.g. ‘sprints’ or ‘timeboxes’), which are kicked off with a prioritisation exercise as to what is to be delivered, and finished off with a delivery. If something goes wrong, there can be a domino effect on other parts of the project. To all intents and purposes, agile approaches can be intense and rewarding, but the one thing they are reliant on is a level of structure and co-ordination.
Perhaps it is for this reason that one of the main criticisms of agile approaches is that they can’t scale. No doubt the advocates of agile approaches are reading this and shaking their heads – but given what we have just said, it is unsurprising that the challenges of keeping things going can become greater as projects get bigger. According to our research, having good communications is absolutely the key success factor for scaling agile projects beyond tens of developers.
To close then, here are some things to keep in mind as a CIO. While neither type of approach can claim to ‘win’, they both hold their own – and they certainly wipe the floor with less formal approaches. So, we have a situation of ‘horses for courses’. There are undoubtedly places where agile approaches fit better than more traditional structured approaches. However, agile development is as much about a culture change as anything, and should not be undertaken without due care; indeed, training and mentoring can help a great deal here. Agile is more a series of short, carefully linked steps than a journey, and it is worth putting one’s best foot forward from the start.
2009
Posts from 2009.
May 2009
05-11 – Moving IT security up the governance agenda
Moving IT security up the governance agenda
It has sometimes been said that in Information Technology, the emphasis has been too much on the technology and not enough on the information. This has been particularly apparent in IT security, that part of the IT industry seemingly constructed around telling us about all the things that could go horribly wrong, before rather handily proffering a portfolio of products to resolve the issues.
It could all be so different, we might like to think… but could it? Back in the day, when I had an IT department of my own, I learned just how hard it was to win budget for security related purposes. The question the finance director (and my boss) asked was very simple – “Do we really need it?” And if I didn’t have a cast-iron rationale, then the cheque book remained firmly closed. That was then, but the business case for IT security remains as difficult to articulate as it ever was.
Coupled with the fact that there really are bad guys out there – and yes, their attacks really are becoming more targeted – it comes as no surprise that the IT security industry exists as it does. From our research we know that the products that are more likely to be deployed, happen to also be the ones that are simpler to explain – either because the ‘threat’ really is obvious, or because certain products are generally accepted as being a good thing – anti-virus software is one example, firewalls are another.
All of this does distract from the fundamental point however, that IT security exists to mitigate business risks, not just counter specific external threats. For example, over the years we have repeatedly determined that the biggest threat comes from inside the organisation, either through ‘malice or stupidity’ of individuals. It does seem strange therefore that so much of security vendors’ attention is paid to what happens on the outside, even despite ‘technologies’ such as data leakage protection being rushed out in response to dropped balls such as the HMRC breach.
While the IT security industry’s priorities may be skewed towards what will sell, there are some distinct signs that the industry itself is moving away from the point product model. Arguably, the questions that IT security vendors originally set out to answer, such as what-needs-to-be-secured-and-how, are largely answered today: new technology such as virtualisation and software-as-a-service may bring new threats, but not significant new categories of threat. The result is that such vendors are spending more time working out how to make things work better, rather than what problems need to be solved.
Meanwhile, the ongoing consolidation of security vendors has resulted in integration challenges – both in terms of how security products integrate, and how they work with other systems and applications. Take IBM, for example. Until a year ago the company did not actively market its security technologies as an integrated portfolio, but the picture is very different today. We’re seeing similar stories from EMC with RSA, from Google with Postini, from Microsoft, Symantec and McAfee, and from smaller vendors such as Mimecast.
What we can draw from all of their stories is an acceptance that IT security is not an end in itself. Rather, it offers a means, whereas security of business information is the ‘end’. So, what are the outcomes we should be aiming for? It’s worth dusting off an old acronym which will be familiar to IT security professionals – Confidentiality, Integrity and Availability. Old it may be, but equally clear is that there is more to C-I-A than can be delivered by IT security alone.
Consider integrity. A couple of weeks ago, I hosted a panel on data integrity at Infosecurity Europe, and what became very quickly clear was that it was almost impossible to separate integrity, from a security perspective, from more general concerns about data quality. Similarly, we can look at availability. While it may be academically possible to distinguish between service downtime from a denial of service or from a systems failure perspective, the distinction is moot for those poor people who can’t get to their data.
Clearly, we need to think beyond IT security if we are to consider – and mitigate – all the risks around business information. But where to start? The answer may well lie in the keyword ‘governance’ – that word which refuses steadfastly to be defined. Despite its elusiveness, governance appears in discussions around a number of IT topics, not least IT service management and information management, as well (of course) as information security. Governance also pops up in numerous conversations around business management, of course.
It is early days for integrating general principles of governance into business in general, and into IT in particular. However, it is clear that IT security stands a better chance of succeeding if it is treated as one element of an IT governance framework, which in turn needs tight alignment with business governance if it is to succeed. This may sound like a glib statement but it really isn’t. Having been involved in plenty of conversations through the years about how to raise IT security’s position on the agenda, the conclusion reached is that as long as it is seen as an end in itself, it will be doomed to fail.
Don’t get me wrong: there will always be a place for technologies to limit security threats, just as there will always be a place for door locks, seat belts and car immobilisers. However, in isolation, such things do not make us better drivers, nor prevent the occasional vindictive attack. It is only by seeing IT security within the overall context of IT and business governance, that it can succeed.
June 2009
06-24 – How secure are your applications?
How secure are your applications?
Locking the stable door before the horse bolts
Let’s be blunt. The fine heritage of application development has not traditionally incorporated the pre-emptive creation of secure code, i.e. programs that are built from the ground up to be secure. There are a number of potential reasons for this – not least that in the old days, before every system was connected (either directly or indirectly) to some kind of network, a certain code of conduct was assumed between developers, operations staff and users, that nobody would try to break anything. This ‘club rules’ spirit continues even now, despite repeated proof that such mindsets are, with the best will in the world, outdated.
Of course there are plenty of examples to the contrary. Military systems have long had to take security into account, and Commercial Licensed Evaluation Facilities (CLEFs) existed over a decade ago, whose task it was to try to break into bespoke applications using a variety of penetration testing techniques. These days, in the UK we have such certification schemes as CHECK and CREST (link: http://www.cesg.gov.uk/products_services/iacs/check/index.shtml), which are very much a continuation of this theme.
However, we are still far from the situation where such a thing as a ‘secure application’ is seen as the norm, rather than the exception. For a recent example of inadequate protections being baked in from the outset, we only need to look at Spotify (link: http://www.theregister.co.uk/2009/03/04/spotify_breach/) but to be sure, there will be plenty of internal examples that are quietly swept under the carpet.
It would be too easy to have an alarmist rant at this point about the scale of the threat, the naivety of the people involved, the absolute need to respond to the issues right now – but that’s not really the point as change is in the air anyway. There are a number of reasons for this, which (as usual) boil down to a combination of legal changes (e.g. PCI DSS) drawing attention to the risks, vendors getting their acts together in terms of tooling, and the community at large warming to the idea of addressing the problem.
In a recent conversation with Tim Orchard at security consulting firm Activity, in answer to an open question, I was told that “We are definitely seeing a rise in demand for services around secure application delivery.” While the will may be there, the knowledge levels are patchy – “Some organisations are better informed than others,” said Tim. This lack of understanding translates to a lack of will to build security in, at the outset of a development project. Of course, it’s not just security that gets short shrift – we saw a similar factor at play when we looked (link: http://www.freeformdynamics.com/fullarticle.asp?aid=373) at availability requirements (or lack of them).
It would be great to think that all security problems could in some way be magic-ed away through the use of security tools from the likes of IBM, HP, Fortify, Secerno or Qualys. Some of these tools help developers spot security weaknesses in code, whereas others look for run-time vulnerabilities. While there is undoubtedly a place for tools, they can only go so far – a common complaint is the generation of false positives, which then mask real issues when they arise.
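To give a feel for the kind of weakness such tools look for, consider a deliberately simple, hypothetical example: a database query built by string concatenation, set against its parameterised equivalent. Most code-scanning tools would flag the former:

# A deliberately simple illustration (not from any real codebase) of the sort
# of flaw static analysis tools flag: SQL built by string concatenation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # Vulnerable: a crafted name such as "' OR '1'='1" changes the query's logic.
    return conn.execute("SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name: str):
    # Parameterised query: the input is treated as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row in the table
print(find_user_safe("' OR '1'='1"))    # returns nothing

The fix costs nothing once the habit is there, which is rather the point: most of what gets flagged is avoidable at the time of writing, not just detectable afterwards.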
Perhaps there really is no substitute for human intervention. “Tools are never as good as a manual pen tester,” Tim Orchard told me, “particularly when it comes to application logic flaws.” While he clearly had a vested interest to say so, I know from my own experience that he probably had a point. The issue is one of money – of course, we’d all love to get some top notch experts in, but in many cases the funding just won’t be there.
So, what to do? The answer probably lies in facing up to security as early as possible in the application lifecycle. I’ve said before that ultimately, security is a business issue – combining reputational risk and financial risk – and by considering applications in this context, it becomes more straightforward to identify what might go wrong and what should be dealt with as a priority.
Funding will always be an issue but engendering a more security-conscious mindset doesn’t have to be that expensive: it’s worth noting too that there are many free security tools out there, either built into development suites (e.g. Microsoft Team System) or downloadable from the Web. Security tools vendors would of course say that free tools are no substitute for what they offer, but they are certainly better than doing nothing at all.
July 2009
07-14 – Delivering on identity and access management
Delivering on identity and access management
Does authorisation provide the link?
At a recent panel event involving senior security managers and chief information security officers from a number of industries, Identity and Access Management came up as one of the top priorities. Participants gave a number of reasons, which can be distilled into three categories: more efficient and less fragmented delivery of IT services; facilitation of better collaboration with customers and partners; and the necessity to demonstrate compliance with data protection legislation.
All of which would be great - if only identity and access management hadn't proved impossible to implement. As one participant stressed, "We're five years down the line, and it still hasn't delivered." Note that's not "delivered on its promises", mind – that's "delivered at all." So, what is it that makes identity and access management so hard to do?
We can consider this question from both the IT and the business perspective. Considering IT in general, and security-related topics (such as identity and access management) in particular, the challenges faced in deploying such capabilities cannot be helped by our innate desire to concentrate on the 'how' rather than the 'what'.
Identity and access management requires two complex areas to be understood at the same time - first, the (rapidly changing) roles and requirements of a large number of individuals; and second, the diverse hotchpotch of new and legacy systems that may be in place. Wouldn't it be great if there could be a system to just resolve all of that? The answer has to be 'yes' - but while the requirements for such a system may be obvious in principle, the practicalities of implementation have defeated all but a minority of organisations.
It is understandable, and indeed desirable, that the IT industry has looked hard at how to resolve some of the technical and philosophical issues around delivering identity and access management. Initiatives such as the claims-based architectures espoused by the likes of Microsoft, together with cross-industry efforts such as OpenID, are to be welcomed. However, the one thing they cannot do is circumvent the complexities at the heart of deploying identity and access management on an industrial scale.
So what are the options? One can turn perhaps to the business, and insist that it gets its own house in order before IT automates what is required. After all, surely identity and access management systems exist to support such manual processes as role definition, access provisioning and the like?
It's here that we reach the nub of the matter from the business perspective: in a word, churn. As soon as any understanding is reached of who has access to what and why, such information almost immediately becomes out of date. Organisations large and small are dependent on pools of contractors and temporary staff; management roles change; hirings, firings, maternity provision and job shares are part of the fabric of modern industry.
Meanwhile, the boundaries of organisations public and private are today more like semi-permeable membranes than fortress walls. We see this in pharmaceutical companies creating innovation ecosystems, just as in borough councils and local health authorities looking to provide services and "deliver individual outcomes" to what is often a highly transient population.
From this standpoint, it is not absolutely clear what the answer is. However, we can perhaps identify where it may be found - not in the complexity of systems and applications that need to be accessed, nor in the constantly changing pool of people and roles that need to be identified and trusted. Rather, we could look at the interface between the two - or more importantly, the specific event during which authorisation is made between a specific individual and a specific service.
This is not proposing any magic bullet. Rather, by recognising the importance of this irreducible event, we can gain a level of clarity about what is at the heart of the problem – and the potential solution. Either it is possible to manage the event taking place at the moment of service provisioning, or it is not - but if the latter is true, no amount of clever technology is going to help.
Managing authorisations may hold the key, and for added impetus, we only have to look to legislation. Both industry-applicable regulations such as Sarbanes-Oxley and national laws such as the Data Protection Act require that authorisation of access to a given data set is in some way controlled. In other words, organisations have no choice but to make this happen: it's the law.
In order for identity and access management to succeed, then, we need to look at both the technologies available to facilitate provisioning of access, and the authority we vest in individuals to provision. For identity and access management to work, such capabilities need to operate as close to the point of authorisation as possible, such that the event itself can be captured.
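By way of illustration only, here is a minimal Python sketch of what capturing that irreducible event might look like: every grant of access to a service is approved by a named individual and recorded at the moment it happens. The names, fields and structures are invented assumptions, not a reference to any particular product.

```python
from datetime import datetime, timezone

# A minimal sketch of capturing the authorisation event itself: each grant of
# access is approved by a named individual and logged at the point it happens.
audit_log = []

def authorise_access(user, service, authorised_by, approved):
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "service": service,
        "authorised_by": authorised_by,
        "granted": approved,
    }
    audit_log.append(event)          # the irreducible event, captured
    return approved

# Example: a line manager approves access to the payroll system
if authorise_access("bill.smith", "payroll", "jane.doe", approved=True):
    print("Access provisioned")

for event in audit_log:
    print(event)
```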
This may be no easy task, but at least it provides a starting point - and let's remember, it isn't necessary to deliver everything, all at once. In conclusion, it is perhaps time to dispense with ill-fated attempts to provide blanket policies and approaches, however attractive they may seem at the outset. Instead, we can focus first on how to enable responsible individuals to provision access to higher priority systems and higher risk data. If and when this has been enabled, then other, lower priority systems can be brought into the fold.
From our research we know that there are no easy answers to complex issues such as this. But we would welcome any feedback, particularly concerning how to successfully deliver identity and access management. If you have any experiences to share, good or bad, we'd love to hear them.
October 2009
10-12 – Breaking out of the virtual pilot
Breaking out of the virtual pilot
Starting Gates or Flood Gates?*
The IT industry is full of bad habits, none greater than treating new technology areas as faits accomplis – which of course makes things a tad complicated eighteen months down the line, when organisations are just about deciding to adopt, but marketeers have already moved onto the next ‘big thing’. I remember overhearing a PR executive, a good few years ago now, saying how SOA needed a new name largely to overcome this issue of allowing people to actually do something with it without making vendors appear in some way backward. It was ever, and no doubt it always will be, thus.
Case in point: virtualisation. If the hype were to be believed, most organisations would be done by now – servers would be consolidated into neat arrays of racks and blades, drawing on a similarly neat configuration of fit-for-purpose storage operating as a single, flexible pool. What’s not to like? Nothing in principle, but the practice might take a little longer (probably a good thing for VMware, to be fair – if we were all finished, they’d have nothing left to sell).
In reality, most organisations are still to leave the starting blocks when it comes to virtualisation. Thinking specifically about server virtualisation, for example, while over 50% of larger (250-plus employee) organisations may have adopted it in some shape or form, the majority of deployments are non-critical workloads or pilots.
So, we need to be careful. Virtualisation in general, and server virtualisation in particular, is not a done deal. We think we can speak with confidence when we say that it will be accepted as a mainstream technology, as (for a change) adoption is being driven as much by organisational ‘pull’ as by industry ‘push’ – not only because of the relatively immediate cash savings in terms of hardware, power and licensing that virtualisation can bring, but also for the operational flexibility and provisioning benefits (“Want a new physical server? No. Want a virtual server? Yes.”).
But let’s not run away with ourselves. Just because over half of the world’s organisations have some form of virtualisation in place, doesn’t mean it is already a mainstream technology. A number of hurdles still need to be overcome, some of which will be down to what happens when organisations do start to do more with it, and some to do with what vendors then provide as a response.
In a nutshell, much of this comes down to good, old fashioned configuration management. For example, lessons from early adopters suggest that the problems of physical server sprawl (which server virtualisation is reputed to resolve) can very quickly be superseded by virtual server sprawl – we can imagine similar scenarios for virtual storage, thin provisioning or no thin provisioning.
Thinking about management in the broader sense, there remains a way to go as well: there are management tools for virtual environments, and management tools for physical environments, but the twain don’t yet meet. Feedback welcome on this, but common sense suggests that for an operationally quiet life, it will be necessary to manage physical and virtual machines as a single pool using the same tools. This is not yet the case, which makes me feel just slightly jumpy – I know, they took away my screwdriver a long time ago, but stories about how virtualisation is being rolled out without due consideration of security, patching, asset management, licensing, business continuity and so on can’t help but have that effect.
This isn’t meant to be a stake in the ground so I can say, “I told you so,” in 18 months’ time. Rather I’m looking forward to when virtualisation moves from what it enables now – simplifying the existing environment – to what it can enable, namely offering a stepping stone towards more dynamic use of IT resources. In principle, a virtual server, and its associated virtual storage, need only exist for the time required to run a given workload – this could be years, or equally, days or hours.
Achieving this vision requires first that the platform of virtualisation and management tools is in place, working together seamlessly. What’s the missing piece? When I spoke to David Greschler, Microsoft’s director of virtualisation strategy, recently, he summed it up in a single word: ‘orchestration’ – that is, software running above the management layer, which can make policy-based decisions about what should be running where.
Before I get castigated in the comments let me say yes, indeed, this is precisely what is known in the mainframe world as ‘resource management’. And indeed, there are plenty of good reasons why a mainframe could operate as a server virtualisation platform, just as there are plenty of good reasons why an x86 platform could do the same. The difference now is the ubiquity of virtualisation – which will require orchestration to operate across computer systems, and indeed across data centres if the cloud vision (currently very much ‘push’) is to be believed.
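To give a flavour of what ‘orchestration’ means in practice, here is a deliberately simplified Python sketch of policy-based placement – deciding which host should run a workload based on capacity, location and load. The hosts, policies and figures are made up for illustration; real orchestration tools are considerably more sophisticated.

```python
# A simple sketch of policy-based placement, sitting above the management layer.
hosts = [
    {"name": "host-a", "free_cpu": 8, "free_gb": 32, "site": "london"},
    {"name": "host-b", "free_cpu": 2, "free_gb": 8,  "site": "london"},
    {"name": "host-c", "free_cpu": 16, "free_gb": 64, "site": "dublin"},
]

def place_workload(workload, hosts):
    # Policy 1: the host must have enough spare capacity
    candidates = [h for h in hosts
                  if h["free_cpu"] >= workload["cpu"] and h["free_gb"] >= workload["gb"]]
    # Policy 2: prefer the site named in the workload's policy, if any
    preferred = [h for h in candidates if h["site"] == workload.get("site")]
    pool = preferred or candidates
    # Policy 3: pick the least loaded candidate to spread the load
    return max(pool, key=lambda h: h["free_cpu"])["name"] if pool else None

print(place_workload({"cpu": 4, "gb": 16, "site": "london"}, hosts))  # -> host-a
```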
So, virtualisation does indeed hold much potential. Hopefully it should be just like riding a bike, as hardware and operating systems evolve in such a way that it becomes part of the fabric (indeed, perhaps virtualisation will really have succeeded when we stop talking about it). But it is important we don’t get panicked into thinking we should already be racing along, before we reach the point at which it is safe to take the trainer wheels off.
2010
Posts from 2010.
January 2010
01-12 – Delivering on IT services
Delivering on IT services
ATFQ and other truisms
How easy is it to say that ‘IT should start with the business’? It’s one of those statements, like ‘the customer is king’ or ‘always wash up as you go along’, that may indeed save time in the long run but which can be a lot harder in practice than theorists would like.
Like many truisms, this doesn’t make it any less important. But when faced with the hustle and bustle of keeping IT systems up and running day to day, it can be difficult to keep in mind that IT is a means to an end. Business users don’t necessarily make things easier – while tough to please, they can also find it hard to say what they want. I paraphrase, but I can recall being told on more than one occasion, “Requirements? Isn’t that your job to work them out?”
To be fair, medium-sized organisations do have less of a challenge in trying to understand business needs than enterprise companies, simply because the former are less complex than the latter. That doesn’t mean IT and the business are always in for an easy ride, though.
I can remember one of my formative first jobs as an IT consultant, when I was brought in to help write a Service Level Agreement (SLA) for a public body. What I didn’t know before I arrived, was that the designated IT department representatives weren’t actually talking to their business counterparts anymore, in the IT-business equivalent of a marital breakdown. I wish I could say I saved the day, but sadly all I did manage to coax out of the situation were a few response time targets.
Even in the best of cases, SLAs are only one element of service delivery. We all know whether or not we are getting what we would perceive as ‘good service’, be it in a restaurant, a shop or indeed from IT. The challenge (as I’m sure anyone who has worked in hospitality or retail would profess) is knowing what to put in place to support the delivery of services, whatever their shape or form.
In IT terms this boils down to the infrastructure itself, and how it is managed and operated. In a previous article [link] we considered the architectural aspects of IT and how to deploy an infrastructure platform that supports the service requirements. What we haven’t yet talked about is how to decide what services are necessary, and how to ensure they are maintained.
While all organisations are different, business activities tend to fall into one of two categories. The first is activities that are, or can benefit from being, treated in a structured manner; the second is those that cannot.
Considering more structured activities first, these have been called various names at various points over the past few decades – including workflows, business processes and so on. Methodologies abound to identify and restructure such activities, with the goal of making things more efficient. And they also serve as the basis for understanding what can be automated – each task may or may not benefit from use of technology.
Forgive me if this is egg-sucking stuff, but in reality things don’t always work out that way. How many times have we seen applications that are used only in part, if at all, because they don’t actually fit with the way the business works? Of course, the business can (and sometimes should) change to fit with new capabilities provided by technology. But still, business activities should be in the driving seat.
For IT to support structured activities, it needs to operate efficiently – that is, as cheaply as possible. Interactions are largely transactional – that is, responding to specific service requests or updating specific information items. In the structured world, flexibility is secondary to service levels – if IT is unavailable or inaccessible for whatever reason, the cost impact on the business can be very high.
The second activity type – “everything that cannot be structured” – is no less important when it comes to deciding IT service requirements. We talk about collaboration tools, portals and information sharing facilities in this context, as well as mobile devices and other access capabilities.
For less structured activities, accessibility, flexibility and responsiveness are key drivers. There is no point at all, for example, in having a world class customer management system if it isn’t able to cough up a phone number to the service engineer who is working on-site.
From a service management perspective as well, the type of management required depends largely on the type of service. Transaction workers in structured environments need similar structure when it comes to fault reporting and resolution, whereas more collaborative environments will probably build in several different ways of getting to an answer (“Can’t access the file? OK, I’ll email it to you.”)
At the heart of it all, however, lies a service mindset, which is difficult to teach. I can think fondly back to a number of colleagues who just knew that the quickest way to working out the answer was first to ask the right question. “Hang on,” said one, “Exactly what problem are we trying to solve here?” Even if we think we know what the business priorities are, it doesn’t take much to check in with someone who is well placed enough to say.
Assuming, of course, that the two sides are still talking. If not, it’s going to take more than an SLA to resolve that one.
April 2010
04-14 – Taking an architectural view of storage
Taking an architectural view of storage
Data storage may not be the most sexy area of IT, but it’s certainly one of the most critical. In theory, and from the outsider’s perspective, storage should just work without needing too much intervention, to respond to the following needs:
- Firstly, its role is to deliver data to applications and users consistently and efficiently: that is, as and when needed, at the required levels of performance and an appropriate cost.
- Second, storage should also be able to recover from failure situations. It is one thing when things are going right; quite another if things go wrong. Here we can think about backup and recovery as well as the ability to replicate between storage arrays, and indeed across sites.
- And finally, storage needs to be manageable in a way that suits the people trying to manage it. This is not just about having visibility of what storage exists, but also about being able to respond to changing conditions and changing requirements, preferably as automatically as possible.
Best practice suggests that such needs can be met by building a well-architected, well-managed storage environment. However, few organisations have ever had the luxury of implementing storage infrastructure from scratch. Indeed, most struggle with a number of generations of storage. While each might have been procured with the best of intentions, it has led to storage environments being far more complex than they need to be.
It’s not just about complexity: the reliability of storage must also be held to account. Storage is one of the few areas of IT that relies on mechanical devices, and as such disk failure is a common theme in most IT environments. Indeed, some common storage technologies (RAID, for example) exist largely to counter the fact that disks can, and will, fail without warning.
Meanwhile, of course, we are seeing continuing high levels of data growth. Video and other forms of unstructured content, the democratisation of business intelligence, the increased demand for collaborative working and remote access – each is driving increased demand for raw storage. Data growth is exacerbated by many organisations having a ‘keep everything’ policy when it comes to electronic information. As well as being most likely illegal, this puts additional burdens on the storage infrastructure, not to mention the people and processes to support it.
Putting the complexity and reliability of storage together with the very real challenges of data growth does lead to a bit of a doomsday scenario. The question is, are any of the latest developments in storage and elsewhere, going to have an impact?
To cover reliability first, it may be that we are seeing the last generation of spinning platters, as solid state disks (SSDs) arrive at a point where they become cost effective enough to become the norm. While we are not quite there, the mood resembles that of the TV and monitor market just a few years ago, when flat screen displays tipped from being a luxury to an expectation.
Next we have storage virtualisation – which enables storage resources to be treated as a single pool, and then provisioned as appropriate. The phrase in vogue at the moment is ‘thin provisioning’, in which a server or application may think it has been allocated a certain disk volume, but in fact the storage array only allocates the physical storage required up to the specified maximum (which may never be reached). This makes for a lot more efficient use of storage.
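As a rough illustration of the principle (and not of any vendor’s implementation), the Python sketch below models a thinly provisioned volume: the logical size promised to the server is large, but physical blocks are only allocated when they are first written. The class and figures are invented for the example.

```python
# A minimal sketch of thin provisioning: the volume reports its full logical
# size to the host, but physical blocks are only allocated on first write.
class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks   # what the server thinks it has
        self.allocated = {}                    # physical blocks actually in use

    def write(self, block_no, data):
        if block_no >= self.logical_blocks:
            raise IndexError("beyond logical volume size")
        self.allocated[block_no] = data        # allocate only when written

    def physical_usage(self):
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1_000_000)     # a million blocks promised
vol.write(0, b"boot record")
vol.write(42, b"some data")
print(vol.logical_blocks, "blocks promised,", vol.physical_usage(), "allocated")
```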
Speaking of efficiency, another trendy term is de-duplication, in which only variations in files or disk blocks are retained, transferred, backed up or whatever. De-duplication can get quite complicated, of course: for example, an index needs to be maintained of everything that is being stored or backed up, so that files can be ‘reconstructed’ as necessary. But it can have a considerable impact on the amount of disk being used – which also has a positive impact on reliability (fewer disks, less risk).
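Again purely as a sketch, the Python fragment below shows the basic mechanics of block-level de-duplication: blocks are hashed, each unique block is stored once, and an index of references allows files to be reconstructed later. Block size and data are invented for the example.

```python
import hashlib

# A rough sketch of block-level de-duplication: each block is hashed, stored
# once, and files are kept as ordered lists of block references.
BLOCK_SIZE = 4

store = {}    # hash -> block data (each unique block stored once)
index = {}    # file name -> ordered list of block hashes

def dedup_store(name, data):
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)    # only new blocks consume space
        refs.append(h)
    index[name] = refs

def reconstruct(name):
    return b"".join(store[h] for h in index[name])

dedup_store("file1", b"AAAABBBBAAAACCCC")
dedup_store("file2", b"AAAABBBB")            # nothing new to store
print(len(store), "unique blocks held")       # 3 unique blocks, not the 6 written
assert reconstruct("file1") == b"AAAABBBBAAAACCCC"
```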
Other storage-related developments worthy of note are on the networking side with iSCSI (bringing block storage onto the same IP network used for file storage – a.k.a. storage for databases and unstructured content respectively), and at the higher end, the merging of data and storage networking using 10 Gigabit Ethernet.
While all such developments hold their own promise, they cannot solve the problems of complexity and data growth by themselves. To do so requires an architectural view of storage, but we know from various research studies that funding can be hard to come by, particularly when it comes to the business case for higher-availability features and management tools. In other words, the things that enable better-architected storage infrastructure to exist.
What to do? Perhaps the point is that it is never too late to get your organisation to think about storage in the right way. It may seem oxymoronic but even cost-saving initiatives can be used as a stepping stone. For example, while it still isn’t cheap (though it’s getting cheaper), implementing de-duplication might provide immediate savings in terms of bandwidth and latency reduction. However, by considering its impact a little more broadly, the bandwidth savings may now allow (for example) data replication to another site, making disaster recovery possible whereas in the past it was not.
Certain initiatives may not appear to be about storage initially, but quickly prove otherwise. For example, adopters of virtualisation on both server and desktop have discovered just how important it is to get the storage right up front. Virtual servers can quickly become the cause of storage bottlenecks, for example, and meanwhile, early adopters of virtual desktop infrastructure have learned how important it is to co-locate the storage used for user data with that for the virtual desktops.
With such things in mind, it is difficult to see how the storage architecture will become any less important over the coming few years. The question is whether storage will remain the thing that gets thought about last, when it is already too late to do anything about getting it right.
May 2010
05-20 – Focus on business challenges, not technology innovations
Focus on business challenges, not technology innovations
The IT industry really does bring out the best and worst traits of human nature. Were we always quite so excitable about the latest big thing? It is difficult to tell, as historical records don’t tend to preserve the glee reserved for all things new and improved, whether or not they have any long-term advantage.
This is particularly relevant given that we are perhaps in one of the most inventive periods since the dawn of humanity – at least, that is how things look. While the jury is out as to the usefulness and ultimate value of many of our creations, the speed of ‘innovation’ has been accelerating consistently over the past 500 years, such that we have now reached a point where it is impossible to keep up with everything that is going on.
It is a very human thing, however, to want to keep up appearances. In the arts it is important to have an opinion on the latest show, film, or book, and our high streets are full of the latest must-have items. You can see where I’m going with this, can’t you – yes indeed, our magpie tendencies to accumulate shiny things also spill over into how we view, and indeed select and procure, IT systems.
We all know this, but many go with it anyway. Marketing departments in IT vendor companies spend their time working out how to make even the most humdrum of technologies look like the best thing since, well, the last best thing. Phrases like ‘paradigm shift’ and ‘game changer’ are used over and again, even though both speaker and listener know that if the paradigm had shifted as often as predicted, we would have run out of games to change long ago.
Business leaders are subject to the same pressures – after all, in the words of the Matrix agent, they are ‘only human’. It was only a matter of time before a CIO would tell me that his boss had asked him when he could get some of that cloud computing. The fact that the question doesn’t make any sense (link: http://www.openreasoning.com/2010/05/but-is-that-really-cloud-check-out-this.html) is neither here nor there: businesses want to demonstrate they are up with the corporate Joneses, just as CIOs themselves want to have a few leading edge projects on their CVs. Analyst firms can be no better – after all, if things weren’t quite as exciting as everyone was making out, would we really need analysts to make sense of it all?
Don’t get me wrong, there are lots of new and exciting things made possible through the use of IT. It has brought the world closer together, opening up whole new ways of communicating and collaborating, and so on and so on. The danger, however, is that we are so busy looking beyond where we are for the next big thing, we don’t give ourselves the time to make the most of what we already have. Organisations don’t always need the latest and greatest technology to thrive, and there is a big difference between being flexible as a business and simply changing because that is what everyone else is doing. Far more important is that their requirements are clearly understood, and that the right tool is selected for the job in hand.
As my old boss Steve, a seasoned programme manager, used to say, “What’s the problem we’re trying to solve here?” Okay, his language was a bit more colourful than that but the point still stands. As we look at the waves of so-called innovation and try to decide whether they have any relevance to our businesses, let’s first and foremost focus on the challenges we face, and how best to deal with them. In a couple of years’ time, when the dust has settled on the latest hyped-up bandwagon, if the challenges still remain then we won’t have been doing our jobs, even if the agenda item of “keeping up with innovation” has been achieved.
September 2010
09-06 – Storage management in three years time
Storage management in three years time
For today’s IT decision makers, bombarded with advertising and marketing from all sides about how everything is going to change, it is hard not to get caught up in the hype. At a recent event I attended, a senior IT guy told a number of his peers how his boss – the CEO – had been quizzing him about some of the latest developments. “When can we get some of this cloud?” he was asked – a question which could never hope to have a straight answer.
Storage managers are not immune from the pressures caused by marketing, while being faced with much the same realities as their peers elsewhere in the IT department. Storage virtualisation and thin provisioning, iSCSI, deduplication, data centre convergence through 10-Gigabit Ethernet, storage as a service, and so on are all (we are told) set to rip up the rule book when it comes to how we design, deploy and operate our storage environments. We are heading for a brave, exciting new world, where everything is going to be so much better managed, easier and above all cheaper than in the past.
If you ask exactly when this wondrous vision is going to present itself, the answers become more vague however. I’ve been asking this question for the past decade, and the answer has not really changed much – the general thesis is that we are looking at 5-7 years or beyond. In other words, just long enough ahead not to make a difference to what is happening today.
The reality, for the majority of organisations that we speak to, is that IT is not going to change that fast, or that much. Last week I was speaking to two storage managers representing two large financial institutions, and both concurred that life isn’t going to look particularly different in three years’ time, compared to what we see now, for a number of reasons. First, storage demands will continue to grow – as reflected in our research, data growth is currently the biggest factor affecting how we architect and procure IT systems in general, and storage in particular.
Another factor is that in organisations of all sizes, IT is just too complex to be replaced wholesale. As a consequence vendor case studies tend to dwell on the few exceptional organisations that really did decide to make sweeping changes to their IT – these frequently appear to include healthcare institutes somewhere in middle America, eastern European telcos and banks with names nobody has ever heard of. Who would not wish such organisations well – but such examples do not always translate for more mainstream IT organisations.
This is, of course, assuming it would make financial sense to make such sweeping changes in the first place, which leads to the third and perhaps most important factor – the cost of any change can frequently exceed the shorter-term benefit of making it. We know of a number of new technology areas at the moment – desktop virtualisation is one – where it is impossible to make a business case purely on the basis of shorter-term financial savings, particularly if the costs of storage are taken into account.
It is unsurprising, then, that we should not really expect the world of IT to look that different in three years’ time. Certain technologies will undoubtedly become more prevalent – server and storage virtualisation, for example, though we are still looking for proof points for whether there is life for virtualisation beyond consolidating existing servers and storage onto a smaller pool of hardware. Meanwhile, no doubt deduplication will gain a footprint, ultimately becoming more of an expected function of enterprise storage. Other technologies, such as 10-Gigabit Ethernet, will be more of a slow burn, implemented following traditional data centre refresh cycles rather than sweeping through like a forest fire.
Is IT life really just going to be more of the same, however? Given how data quantity continues to grow, many of the above capabilities are more about keeping up than getting ahead, so will we just be treading water? Thinking of storage in particular, one area that may yet have its day is data classification and categorisation. The storage managers I spoke to mentioned the continuing pressures of compliance, and their desire to get smarter about what information they were storing, and how it was managed. Classification technologies have been available for some time, but they remain fragmented – tools available to classify information for records management purposes currently sit separate from tools for data leakage prevention, for example.
While it is perfectly valid to pay attention to new technology developments, then, it is equally important to consider them against the reality that corporate infrastructure is not going to look that different in a few years time, compared to how it looks today. There will always exist opportunities to make things better, to save money, to consolidate, integrate and rationalise. However, nobody should be duped into thinking that the latest raft of technologies are anything other than that – new capabilities which can be integrated with what has gone before, so as to keep up with the increasing quantities of data we have to deal with. While this may be disappointing to the evangelists, it should come as quite a relief for the majority of front line IT decision makers.
09-09 – Why should we care about UC?
Why should we care about UC?
Unified comms is a broad term, which has come to mean many things depending on who you ask. In practical terms it encompasses a number of different components, including presence awareness, directory management, single number, unified messaging and so on. What you find when you talk to large businesses, however, is that UC can be a turn-off, as they see it as a broad-brush solution that is difficult to buy into. In essence, UC is about this set of components and making them work together better, and the term is increasingly understood to incorporate all of these things.
What businesses are increasingly faced with is a fragmented communications environment, which has arisen through things being brought in piecemeal – voice, email, IM, audio conferencing, web conferencing and so on. When you try to cross from one mechanism to another, things don’t work particularly well, which makes the way we work very inefficient, not just in operational terms but in terms of individual efficiency.
Overlaying unified comms makes things work together so much better. A classic example is in a call centre: a call comes in with a query, the agent says “I’ll speak to the relevant people and come back to you”, and then literally has to go around and piece the information together, potentially over several iterations. Overlaying UC means:
- You can take the call
- Identify the relevant people to help
- Contact them directly
- Potentially hand the call over straight away
The whole thing saves a lot of time and resource, and increases customer satisfaction in that scenario.
Another valid scenario is how businesses are evolving with home, remote and road-based workers – how to get them together for regular meetings, communicate and co-ordinate actions. Want a company meeting? Without UC, you send out invites, check calendars and so on to get everyone together. In a UC environment, you use videoconferencing, hook into directories and dial people in very simply. From a business perspective this makes a lot of sense, particularly given existing pressures such as travel budgets being driven down, employee lifestyle choices and so on.
For the record, unified comms underpins collaboration, but it doesn’t just deal with collaboration, it also supports better business processes.
Fundamentally, the problem UC solves is to bring together communications tools. However, this problem is not seen as sufficiently blocking to make businesses see it as a priority. That said, it’s not destined to forever be a nice-to-have – a key element is that it needs an underlying infrastructure, which in UC’s case is IP telephony. Organisations that replace their PBXs at the end of their natural life are most likely to switch to IPT, following which the benefits of UC are easier to reap. So, while only a minority of organisations may see UC as an end in itself, a much larger number are likely to move into a UC model by way of IPT.
Meanwhile, other angles include videoconferencing players like Polycom, who are partnering with UC vendors (Avaya, Cisco). A compelling situation is dealing with travel budget issues through the use of videoconferencing – at which point companies like Polycom will recommend that it is viewed as part of a broader solution set, and we’re seeing that start to happen. So, rather than just buying tactical videoconferencing, organisations can think about comms more strategically.
We also have the mobility angle – in particular, fixed/mobile convergence (FMC). For example, take a company where people are working on the shop floor and in the office; they have a mobile and a desk phone, with cost implications. You can bring those together on a single number which works both on- and off-site. FMC becomes an initiative which may be down to a number of drivers, including cost – but the components involved in UC, such as directory and unified messaging, are key to the success of FMC.
In other words, UC is fundamentally the set of technology components which enable communications to be undertaken in a far more co-ordinated manner than previously. Its use will largely be driven by responses to compelling events – other examples could be an Exchange upgrade, changing comms provider, or changing working policies internally.
What about UC from a business perspective? Businesses don’t tend to look at UC and see it as relevant. A number of vendors are getting more specific about where UC fits: Avaya is looking at business scenarios, and Cisco is doing the same – for example, looking at hospital scenarios and getting doctors together. Examples such as these make UC much more relevant to specific businesses, and make it far easier to demonstrate value – by looking at the problem businesses are trying to solve.
All well and good, but once in place, if you carry on working in the same way as before, UC can start to look like a waste of time and money. UC is a new thing which also demands that things are done differently – if you don’t do that, it won’t work, so there has got to be a real push to make best use of it (e.g. travel budget reduction competitions). Business processes might not be videoconferencing-friendly, for example, discouraging use of the technology.
More successful companies have gone broad, embracing more of the components and involving a greater number of stakeholders in the discussions. The deployment doesn’t have to be company-wide, but it has to be broad-reaching to make sense. Change management needs to be built in from the outset, moving from pilot to broader roll-out, and the whole thing should be treated as a programme rather than a project, with the appropriate level of authority. Once in place you’re stuck with it, so it needs to be right first time.
October 2010
10-20 – Training in a Cold Climate (v2)
Training in a Cold Climate (v2)
“Our people are our greatest asset” is one of the most frequently employed oxymoronic phrases of business. Of course it is true, as everyone knows how hard it can be to get anything done if ‘people’ are not working to their potential. Equally frequent however, are examples of businesses trying to get away without investing sufficiently in the skills and capabilities of their staff.
Cue training, which is often seen as the first thing to go when budgets get squeezed. In IT terms training can appear in a variety of guises, from technical skills for IT staff, through software training to enable users to get the most from their tools, to more general education around topics such as security, project management, compliance policy and so on. To play devil’s advocate for a second, is any of it actually necessary?
It’s worth stripping things back a little and considering what exactly we mean by training in the first place. Where there are procedures, policies and best practices to be shared it makes sense that relevant people know what they are. Fundamentally (and there is room for a football analogy here) ‘training’ becomes quite simply telling people the laws of the game, otherwise they might take shortcuts that raise costs and effort or create avoidable risks elsewhere in the business.
This principle also holds true when it comes to technology training. From a number of research studies and discussions through the years, we have seen how the load on the help desk is unnecessarily high, largely due to ‘GUI errors’ – that is, gross user incompetence. The minimum-necessary principle applies, in that it is worth investing in the smallest possible amount of training with users up front – for example in terms of password resets and backup procedures – to avoid time-wasting support calls later.
The value of training can further be illustrated when it comes to areas such as security and compliance. If all users acted sensibly of course, many risks would be reduced. It’s not that people are stupid – more that the problems are complex. A few years ago, I ran a series of security awareness seminars for a government department. “Hands up if you back up your laptop,” I said: lots of hands shot up. “Now, keep your hand up if you keep your backup disk in the laptop case where it will get stolen,” I said. Only a minority of hands went down.
One problem is that the value of training can be perceived as difficult to assess, particularly when it is considered in awkward-to-define terms such as ‘productivity’. When we’ve researched areas such as security education with respect to mobile devices, however, we’ve found that the number of breaches drops by an order of magnitude for organisations that train their staff how to use the systems. Of course, you might hope that a lost phone, laptop or USB stick happens not to have any confidential information on it, or that it is simply wiped and sold on in the pub – but is this something on which you are prepared to bet your business?
The recession may not quite be over, though the fact a good friend just managed to find a job after what he terms “155 days in the wilderness” is perhaps a harbinger of better times ahead. We know from recent research that training is once again trailing at the bottom of investment priorities when it comes to IT. Perhaps, as organisations start to re-evaluate their budgets in order to grow their businesses, they will look once again to investing in training as a basis of running an efficient, cost-effective organisation that really does put its people first and one that is altogether more productive.
Gigaom
Posts published in Gigaom.
2013
Posts from 2013.
February 2013
02-22 – The Problem With Identity
The Problem With Identity
The problem with identity
Identity represents not only our relationship with technology but also how we choose to present ourselves to others, and how data about us is managed. With complexity coming from all sides, how will we cope?
Introduction
At its heart, identity is a simple enough idea. It should be possible to create a way of representing someone that is sufficiently unique and hard to replicate that it can be used for the purpose needed – linkage to data, access to services, proof of affiliation or whatever.
From this relatively straightforward starting point, things very quickly get complex however. In this short paper we look at the background and drivers to identity today, and what can be done to keep on top of it.
Identity used to be so simple… didn’t it?
The identity challenge has largely emerged from the information age. Prior to the arrival of computers, managing the relationships between people, data and services was manageable enough – passports, medical records, census forms, birth, marriage and death certificates, accounts and company records filled the space allocated to them.
While the joint notions of ‘paperwork’ and ‘bureaucracy’ (literally, desk-based processes) suggest that the quantities of material could occasionally become overwhelming, the ability to link people with records was not seen as a particular challenge. Trust was more easily granted, for example with a letter of introduction, a seal or a signature, and despite the occasional abuse the system functioned well enough. As far as we know.
The information age spawned three challenges. First, it became necessary to authorise access to computer resources, for example through the allocation of username/password combinations. Which in itself, doesn’t sound like an insurmountable challenge: here’s an application, here’s your details, off you go.
Indeed, in the old days, people were often allocated roles which dictated what they could and couldn’t do, and given interfaces to suit – such as ‘Data Entry Clerk’. As mere, subservient slaves to the machine, such roles left people little scope for anything other than being part of the data production line.1
As computers and applications increased in scope and size, it became necessary to give people access to multiple systems – which created a raft of new problems. Single Sign On (SSO) was the term used to describe the desired goal – the ability to type one set of user credentials, and auto-magically gain access to all applications and data required.
Terminologically, SSO begat Identity Management, reflecting a recognition that both credentials and roles needed to be configured, updated and maintained. Bill Smith the policeman needed to be treated very differently from Bill Smith the social club treasurer. But still, as long as the challenge was about centrally managed computer systems, it remained surmountable.
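For illustration only, here is a bare-bones Python sketch of the Single Sign On idea: an identity provider issues a signed token once, and any participating application verifies the signature rather than holding its own set of passwords. Real deployments use standards such as SAML or OpenID; the key and claims below are invented for the example.

```python
import hashlib, hmac, json, base64

# A bare-bones sketch of SSO: sign claims once, verify them everywhere.
SHARED_KEY = b"demo-only-secret"

def issue_token(username, role):
    claims = json.dumps({"sub": username, "role": role}).encode()
    sig = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest()
    return base64.b64encode(claims).decode() + "." + sig

def verify_token(token):
    payload, sig = token.split(".")
    claims = base64.b64decode(payload)
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest()
    return json.loads(claims) if hmac.compare_digest(sig, expected) else None

token = issue_token("bill.smith", "policeman")
# Any participating application can now check the same token
print(verify_token(token))
```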
The Internet threw a spanner in the works
Of course, we all know where things have ended up. Access to online systems (originally along the lines of bulletin boards) required people to keep tabs on their own usernames and passwords, a situation which grew exponentially when the wave of e-commerce, then the interactive web hit.
Today, people will have tens, if not hundreds, of usernames and passwords. We’re on a continuing path towards ever more complexity, as we try to manage multiple logins for work applications, online software and services, social sites, banking and phone PINs… we have created for ourselves an impossible goal, as nobody apart from Derren Brown could hope to remember every single one.
Governments are also reviewing notions of identity
The second dimension of identity is about linking individuals with data.
The ability to link individuals to data is also a factor
Second, increasingly complex electronic records needed to be linked to the right people with the minimum of error.
And third, more recently, we have seen the emergence of identity footprints.
We have each created for ourselves a drawer full of keys, without any real knowledge of what opens what door.
To resolve this we can talk about identity management… but the other side of the coin is that identity is being created for each and every one of us whether we like it or not.
But identity is also emerging from how people use technology
That part still remains. But since Web 2.0, we have seen people use technology as a direct interaction tool, not just an asynchronous transport mechanism.
While such behaviours continue, an upsurge of individual behaviours is also taking place. The read-write Web and its successor, social networking, have both enabled people to be far more interactive.
The relationship between people and technology is changing.
Which brings new questions about identity.
What about privacy?
And the two are overlapping.
As illustrated by the ability to log in with Facebook.
This is more than an innocuous facility, as it provides Facebook with access to a broader set of interactions.
But it is also indicative of how we can take a certain persona and start to associate ourselves with it.
There exist challenges of fragmentation – it has become almost impossible to remember who said what to whom, where. But also of personality: it is harder and harder to present multiple images depending on where you are.
Every new person you add to Twitter has the ability to change what you might say.
When the Pope tweets, are we seeing the real person? Unlikely.
Technology threatens to have a better grasp of who we are than we do ourselves.
And yet, the tools we have remain no more helpful than a transport, mere pipes. Even Facebook.
The idea of Sentiment analysis remains a tool for the advertisers and data miners, and not for the people who are using the tools.
What started as a simple mechanism to ensure only pre-defined people could access software applications and services has become something far bigger.
The tools are better able to extract and reflect personality.
And then to make decisions about it.
We are ending up with a sentient cloud – which can make decisions based on an understanding of people.
How to counter? If at all?
The right to anonymity. As captured by The Onion and illustrated by Facebook’s recent ‘win’ in German courts.
The right to be forgotten.
Perhaps it is not a problem but the discrepancy remains. For many these tools represent an identity crisis of the first order.
Where the various attempts to ‘do’ identity have failed, then, is in thinking that the key in some way represents the house.
Identity pervades everything we do and say online, just as offline.
The problem with identity, then, may ultimately be in deciding who we are.
-
In a way this reflects the ill-fated Stanford Prison Experiment in 1971. http://en.wikipedia.org/wiki/Stanford_prison_experiment. ↩
2016
Posts from 2016.
December 2016
12-02 – Ltdirect
Ltdirect
Case Study: London Theatre Direct, Tibco Mashery and the power of the API
A recent meeting I had with the theatre ticketing company London Theatre Direct (LTD) was a timely reminder that not all organisations are operating at the bleeding, or even the leading edge of technology. That’s not LTD itself, a customer of Tibco’s Mashery API management solution and therefore one already walking the walk. The theatres are a different story, however. Most still operate turnkey ticketing solutions of various flavours, making LTD’s main challenge one of creating customised connectors for each.
That work is now done, at least for London theatres, with the most obvious beneficiary being the theatre-going punter. “Customers could never find the tickets they wanted — they didn’t have much choice and there was limited flexibility on price; they could never get either cheaper tickets or premium tickets,” explains LTD’s eCommerce head, Mark Bower. “With APIs in place, we can access millions of tickets. Every ticket is available, right up until show time.” As a result, more tickets are being sold, to the equal delight of producers and venues. Jersey Boys saw a 600% uplift in sales when LTD was plugged in, for example.
LTD haven’t just created a more straightforward booking facility, however. This is the API economy, in which everything is a platform — so third parties, such as hotels and transport companies, can also plug into LTD’s service. These are early days but such tie-ups are inevitable. “30% of people coming to London will want to go to the theatre,” says Mark. “We can plug our service directly into in-room systems, avoiding the dark art of the concierge booking on a customer’s behalf.” And indeed, charging a premium to do so.
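As a purely hypothetical sketch of what ‘plugging in’ might look like for a partner, the Python fragment below shows an in-room system querying a ticketing API for availability. The URL, parameters and response fields are invented for illustration; this is not LTD’s or Mashery’s actual interface.

```python
import requests

# A hypothetical partner integration: an in-room system asking a ticketing
# platform what is available tonight. Endpoint and fields are invented.
API_BASE = "https://api.example-ticketing.com/v1"

def available_tickets(show_id, date, api_key):
    response = requests.get(
        f"{API_BASE}/shows/{show_id}/availability",
        params={"date": date},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # e.g. a list of seats with prices

# A partner application could then present seats, prices and booking options
# directly to the guest, no concierge required.
```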
So far so good, but LTD believe that something that could be seen as simply online ticketing is actually far more profound. A theatre production is at its core a creative act, with no guarantees of success at the outset. “Theatre is not a one size fits all,” says Anne, marketing director at LTD. “You can’t walk into the industry and say, ‘I want this show to do this,’ that’s not how it works.” Rather, there needs to be a balance between the aspirations of the producer and the hard-nosed realities of getting punters in through the door and taking their money in return for their entertainment.
The world of theatre is not very forgiving. “Venue owners want bar sales and rent, and the minute the rent falls below a certain level, or the owner sees bar sales dropping below a certain level, they will give two weeks’ notice to a show and they are out,” says Mark. Such was the case for the feted, but short-run, Made In Dagenham. The ability, therefore, to generate higher demand for tickets is of huge importance, as is reaching out to previously untapped demographics such as younger audiences, who would tend to purchase the less accessible, cheaper tickets.
Better ticketing doesn’t just mean an uplift in sales, therefore; it also means that producers and venues are able to put on shows that might previously have been seen as higher-risk. This is all before even thinking about the nuggets of insight that will lie inside the ticketing data itself — who is going to what kind of show, when, using what form of transport and so on. As we discussed this, I was reminded of how farmers are taking soil samples so they know how to target fertilisers more accurately — I couldn’t help wondering if the same principle applied to the incentivisation of theatre goers, to ensure all seats could be filled.
Perhaps the takeaway is that the ticket itself is a consequence of past models, which worked as well as they could in the analogue world. Even as our interactions become more digital, we have an opportunity to make them more about the very human relationship between producer, customer and venue, all of whom are looking to gain from the deal. The opportunity exists to move beyond the blunt instrument of the paper ticket and towards a deepened relationship, manifested for example as event-led packages, loyalty programmes or even patronage models.
In the world of theatre, and in many other sectors, technology enables us to move above and beyond the dark arts. Of course, the opportunity for abusing such tools also exists — there we face an ancient choice. But the stage is set (oh, yes) for more direct, transparent relationships between participants. Cue applause.
—-
With APIs we can access millions of tickets.
Customers could never find the tickets they wanted — they didn’t have much choice, and there was limited flexibility on price, they could never get the cheaper tickets and they could never get the premium tickets.
[With API connectivity all that has changed.]
Every ticket is available, right up until show time.
Basket value up - people are spending more
Getting new, younger audiences that would purchase cheaper tickets
And tickets available until last minute - Anne - Marketing Director
Looking for partner opportunity - could be a train or an air company. 30% of people coming to London will want to go to the theatre.
Concierge is all a bit of a dark art - phones an agent, who phones the box office
In-room iPad - GLH - great little hotel
Can of course add it to their profile
Anne - Theatre is not a one size fits all - you can’t walk into the industry and say “I want this show to do this”, that’s not how it works.
[So above all it brings flexibility. ]
Create an event-led package, talk to producer - hotels’ getting the room etc.
Technical challenges?
With the venues - a bit control freaky
Three relationships - the producer - the vision of how it’s going to look,… they are expecting to sell every ticket! They think they have the best idea…
Venue owners want bar sales and rent - the minute the rent falls below a certain level, bar sales dropping, they give two weeks notice and they are out
Owners sell venue to producer like they can sell tickets
Jersey Boys - uplift of 600% when plugged into system.
[Get uplift once start collating data - cf Farming]
Every theatre ticket sold by Ticketmaster will be going through LTD - we couldn’t have done it without Tibco Mashery.
We have to work with multiple venues - with multiple versions. We had to do the integration with each.
[Data brokerage?]
12-06 – Future Platform Based
Future Platform Based
AWS Re:Invent parting thoughts: The post-hybrid technology landscape will be multiplatform
As I flew away from Amazon Web Services’ Re:Invent developer conference, my first thought was how there is far, far more going on than anyone can keep up to date with. This issue has dogged me for some time, indeed every time I tried to make headway writing the now-complete book Smart Shift, I was repeatedly beset by the world changing for a thousand reasons and in a thousand ways. (As a result, incidentally, the book has morphed into a history of technology that starts 200,000 years in the past — at least that isn’t going to change! But I digress.)
Even though making sense of the rapidly changing digital landscape feels like drinking from several fire hydrants at once, such immersion does reveal some pointers about where technology is going. Across conversations I had at Re:Invent, not just with the host but also with Intel and Splunk, Treasure Data and several partners and customers, a number of repeated themes started to make themselves known.
Let’s start with hybrid and get it out of the way. Actually, let’s start with the fact that AWS are pretty darned impressive in what they are achieving and how they are achieving it, with a strong focus on the customer and a business model that really does save organisations a small fortune in running costs. This being said, one aspect of the organisation’s overall pitch stuck out as incongruous. “We were misunderstood. Of course we always believed hybrid models were valid,” said Andy Jassy at the keynote. I paraphrase but that’s about it: unfortunately I, and many of the people I spoke to, have memories too good to take this revisiting of history with anything other than a pinch of salt.
A second topic of conversation, notably with Kiyoto Tamura of data management platform Treasure Data but reinforced by several customers, was how multi-cloud models would become pervasive — again, despite AWS’ opinion to the contrary. While it may be attractive to have a “single throat to choke”, a clutch of reasons make two or more cloud providers better than one: many government organisations have a requirement to work with more than one supplier, for example; meanwhile past decisions, cost models, and use of specific SaaS that drives deeper PaaS all make for a multi-cloud situation, alongside the hybrid consequence of using existing IT.
However, even as AWS toes the hybrid line (to its credit as this is a significant pillar of its alignment to the enterprise, a point I will expand upon in a future blog), I think the world is already moving on from the history-driven realities of hybrid and the current inevitability of multi-cloud. The history of this technological age has been marked by some underlying tendencies, one of which is commoditisation through supply and demand (which directly leads to @jonno’s first law of data growth) and the second, a corollary, is the nature of providers to expand into less commoditised areas.
Case in point: AWS, which started in storage and virtual servers, but which is placing increasing attention on increasingly complex services — c.f. the machine learning-driven Alexa, Lex and Polly. To stay in the game and not be commoditised out of existence, all cloud providers inevitably need to become platform providers, purveyors of PaaS. As another corollary, this does send a warning shot across the bows of platform-enabled facade companies, such as those over-valued digital darlings AirBnB, Uber and the like, who are quite rightly diversifying before they are, also inevitably, subsumed back into the platform.
The future is platform-based rather than cloud-based, for sure. As Andy Jassy also said, and again I paraphrase, “We don’t have to waste time having those conversations about whether cloud is a good idea any more. We can just get on with delivering it.” The conversations are already moving on from cloud and towards what it enables, and it will be enabling far more in the future than in the past. As yet another aside, it may be that AWS should be thinking about changing its mantra from “journey to the cloud” to “journey to what is enabled”… but I digress again.
The platform perspective also deals with how we should think about all that pesky in-house, legacy stuff. There’s barrow-loads of it, and it is fantastically complex — we’ve all seen those technology architecture overviews that look something like a Peter Jackson-directed fly-through of an Orc-riddled mine. For many enterprise organisations, such complex, arcane and inefficient environments represent their business. But, fantastically complex as they are, existing IT systems can also be thought of as delivering a platform of services.
Many years ago I helped a public organisation re-consider its existing IT as a valid set of services in what was then called a service-oriented architecture, and the platform principle is not so different. Start with working out the new services you need, build interfaces to legacy systems based on the facade pattern, and the rest is gravy. So, if an organisation is going to be using platforms of services from multiple providers, and if existing systems can be ring-fenced and considered as service platforms in their own right, what do we have but a multiplatform technology landscape?
And why does it matter? Because this perspective takes us beyond notions of hybrid, which essentially refer to how cloud stuff needs to integrate with legacy stuff, and towards a principle that organisations should have a congruent set of services to build upon and innovate against. This is not simply thinking out loud, but has tangible consequences as organisations can think about how skills in platform engineering, architecture, delivery and orchestration will become future differentiators, and can start to plan for them now. For sure we will see service catalogues, marketplaces and the like, as there is nothing new under the sun. Most important however is for organisations to deliver the processes and mindsets that will enable them to make the most of such enablers in the future.
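As a footnote to the facade point above, here is a minimal sketch in Python, with entirely hypothetical class and method names, of how a legacy system can be wrapped so that it presents itself as just one more platform service alongside the cloud-based ones:

class LegacyBillingSystem:
    """Stand-in for an existing, arcane in-house system (hypothetical)."""
    def RUN_BATCH_JOB(self, account_ref, op_code):
        # Imagine screen-scraping, stored procedures or a message queue here.
        return {"ACCT": account_ref, "OP": op_code, "STATUS": "OK"}

class BillingService:
    """Facade: presents the legacy system as a clean, platform-style service."""
    def __init__(self, legacy):
        self._legacy = legacy

    def get_invoice_status(self, account_id: str) -> str:
        # Callers see a tidy service interface; the legacy oddities stay hidden.
        result = self._legacy.RUN_BATCH_JOB(account_ref=account_id, op_code="INV_STATUS")
        return result["STATUS"]

service = BillingService(LegacyBillingSystem())
print(service.get_invoice_status("ACC-123"))   # -> OK

New applications code against the facade rather than the legacy interface, which is what later allows the back end to be replaced or re-platformed without touching the callers.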
12-21 – Why Will Data Grow Faster Than Processing
Why Will Data Grow Faster Than Processing
Five reasons why data growth will outstrip processing for the foreseeable future.
A while back, I documented what I called, with no small amount of hubris, “Jonno’s first Law” – namely that data will always be created at a greater rate than it can be processed. This principle, I believe, is fundamental to why we will fail to see the ultimate vision of artificial intelligence (which I first studied at university 30 years ago) become reality, perhaps for some decades.
So, what is driving the ‘law’? Most simply that Moore’s Law, which states that the number of transistors on a chip will double periodically, is not the only principle at play. Other principles are economic, contextual and consequences of the way we choose to create data. While I haven’t done the maths, here are some of the reasons why Jonno’s first law will continue to apply:
- Data creation requires less processing than data interpretation. Data is easy to generate from even the least smart of sensors. It is also easy to duplicate with minimal processing and/or power, as illustrated by passive RFID tags and ‘smart’ paving slabs.
Corollary: A small modification in a complex data set can be difficult to represent as the differences from the original data set, meaning it is more likely to result in two complex data sets.
- There is always more data to be captured. Current business approaches are based on gaining an advantage based on accumulating more data. Equally, human desire to progress implies higher quality images and frame rates, larger screens, more detailed information from manufacturing systems and so on.
Corollary: The universe cannot be measured molecule by molecule – it is too vast, the data set too big to capture without a similarly vast set of measures. Heisenberg’s uncertainty principle comes into play both at the lowest level and in how captured data influences human behaviour.
- The number of data generators is increasing faster than processors. For example, digital cameras and indeed mobile phones are used in the main for content generation; roughly 100 million servers exist in the world, compared to 10 billion phones.
Corollary: While processing continues to commoditise, data generation continues to fragment. Cloud computing manifests the former, and consumers, the latter.
- Current models charge less (or zero) for data generation, more for processing. Consumer-oriented data generation is paid for by advertising, which covers its costs but leaves little for large-scale processing of the information.
Corollary: Consumer-based data generation is fragmented due to competitive interests, as multiple organisations (Facebook, Amazon) are growing their businesses primarily through data accumulation and only secondarily through interpretation.
- Many algorithms are only recently becoming feasible. Much of the mathematics around current AI, machine learning and so on is well established, but processing was too expensive to handle it until recently. The potential of such algorithms is therefore still being worked through.
Corollary: The algorithms we use are based on human understanding, not computer understanding. This means that our ability to process information is bottlenecked not only by processing power, but also by our ability to create suitable algorithms – which themselves depend on the outputs of such processing.
We still lack an understanding of how to automate the interpretation of data in an intelligent way, and will do so for the foreseeable future. As Mark Zuckerberg notes, “In a way, AI is both closer and farther off than we imagine… we’re still figuring out what real intelligence is.” A final hypothesis is that such a leap will be required before processing capability can actually ‘leapfrog’ data growth.
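As a back-of-envelope illustration of the first point in the list above (a toy sketch in Python with made-up numbers, not a proof): generating readings costs a unit of work each, while even the simplest interpretation step, comparing readings against one another, grows with the square of their number.

import random

def generate(n):
    # Cheap: one unit of work per reading (think passive sensors or RFID tags)
    return [random.random() for _ in range(n)]

def interpret(readings):
    # Expensive: a naive analysis compares every reading with every other one
    comparisons = 0
    for i in range(len(readings)):
        for j in range(i + 1, len(readings)):
            _ = abs(readings[i] - readings[j])
            comparisons += 1
    return comparisons

for n in (100, 1000, 3000):
    data = generate(n)        # n units of "creation" work
    work = interpret(data)    # roughly n*n/2 units of "interpretation" work
    print(n, "creation work:", n, "interpretation work:", work)

Run it and the gap is obvious: creation work grows linearly while interpretation work runs away, which is the 'law' in miniature.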
2017
Posts from 2017.
April 2017
04-05 – Aws Sentinel
Aws Sentinel
AWS meets the enterprise head on
Amazon Web Services must have been a very interesting company to work for over the past five years. My conversations with AWS senior executives have sometimes been fraught — not because of any conflict or contention, but rather due to the pervading feeling that any discussion was getting in the way of activity. The organisation has been so busy doing what it is doing (and making a pretty reasonable fist of it) that it has barely had time to stop to talk.
Any thoughts or feedback about how AWS might do things differently, about how the needs of the enterprise could be better achieved, were met with flummoxed consternation. It’s all completely understandable: in a company which measures success by the number of new features achieved or services shipped, questioning whether it is doing enough can feel beside the point. But still, the question has needed to be asked.
Against this background, watching the feet is a far better option than watching the mouth. The company has come a long way since its early stance of offering an out-and-out alternative to in-house enterprise IT processing and storage and indeed, continues to work on delivering ‘the’ technology platform for digital-first organisations that need, and indeed desire, little in the way of infrastructure.
From an enterprise perspective however, and despite some big wins, many decision makers still treat the organisation as the exception rather than the norm. In part this is through no fault of AWS; more that you couldn’t just rip and replace decades’ worth of IT investments, even if you wanted to. In many cases, the cheaper (in both money and effort) option is to make the most of what you have — the blessing and the curse of legacy systems.
In addition, as IT staffers from CIOs to tape operatives are only too aware, the technology is only one part of the challenge. Over the years, Enterprise IT best practice has evolved to encompass a wide variety of areas, not least how to develop applications and services in a sustainable manner, how to maintain service delivery levels, how to pre-empt security risks and assure compliance, how to co-ordinate a thousand pools of data.
And, above all, how to do so in what sometimes feels like a horseless cart careering down a hill, even as the hill itself is going through convulsions of change, just one slope in a wide technology landscape that shimmers and twists to adapt to what is being called the ‘digital wave’ of user-led technology adoption. Within which AWS itself is both driving the cause and feeling the effect of constant change. Is it any wonder that such conversations may sometimes falter?
So what? Well, my perception is that as AWS matures, the underlying philosophy of the organisation is becoming more aligned to these, very real enterprise needs. This can only be a perception: if you asked AWS execs whether they cared about security, they would look at you askance, because of course the organisation would not have got to where it was without pretty strong security built in. Similarly, the AWS platform is built with the needs of developers front and centre. And so on.
What’s changing is how these areas are being positioned, to incorporate a more integrationist, change-aware foundation. For example, development tools are evolving to support the broader needs of integrated configuration and delivery management, DevOps automation and so on. Security teams are not only delivering on security features, but are broadening into areas such as policy-based management and, for example, how to reduce the time to resolution should a breach occur.
The seal on the deal is AWS’ recently announced Managed Services (previously codenamed Sentinel) offering, which brings ITIL-type features — change management, performance management, incident management and so on — into the AWS portfolio. The toolset originally appeared on the radar back in June last year but wasn’t launched until December, perhaps in recognition of the fact that it had to be right. It’s also available both to end-user organisations and service providers or outsourcing organisations.
AWS’ incorporation of ITIL best practice kicks into touch any idea that AWS doesn’t ‘get’ the enterprise. And meanwhile, many other areas of AWS’ evolving catalogue of capabilities, and indeed the organisation’s rhetoric, reinforce a firming up of direction to take into account the fact that enterprise IT is really, really hard and therefore requires a change-first mindset. From the organisation’s confirmation that the world will be hybrid for some time yet, to the expansion of Snowmobile to a 100 Petabyte shipping container, to simple remarks like “many customers don’t know what they have,” all serve to illustrate this point.
Such efforts continue to be a work in progress, as IT is never ‘done’. Plenty remains for AWS to deliver internally, in terms of how products integrate, how features are provided and to whom: this will always be the case in a rapidly changing world. Nonetheless the organisation is a quick learner which is moving beyond seeing cloud-based services as something ‘out there’ that need to be ‘moved to’, and towards an understanding that it can provide a foundation the enterprise can build upon, offering not only the right capabilities but also the right approach.
With this understanding, AWS can engage with enterprise organisations in a way that the enterprise understands, even as enterprises look to make the kinds of transformations AWS and other technology providers have to offer. Finally the organisation can gain the right to be a partner with traditional enterprises alongside the cloud-first organisations it has preferred to highlight thus far.
04-21 – Retarus
Retarus
Cybersecurity is an incurable disease, so it’s time we thought of it that way
A couple of weeks ago I met with the email management and security vendor Retarus. While I was unfamiliar with the company (and it appeared to have a reasonably standard portfolio), my interest was piqued because it was German, and the country is very particular about such questions as personal data, privacy and so on.
As our conversation tended towards what the organisation is calling patient zero detection — referring to how the vendor’s software looks to improve how it reacts to security attacks that have already taken place — I found myself on fundamentally more interesting and potentially valuable ground. It’s difficult to explain why I think this is so important, so please bear with me and I shall try.
IT security has had a chequered history. Back in the day when I used to manage UNIX systems, workstations and servers tended to be delivered with all technological ‘doors’ left open (front and back), so that any person with a reasonable grasp of the operating system could gain access to whatever they wanted.
Some systems were better than others — indeed, good old mainframes had an almost-militaristic attitude to their own protection, the principles of which were adopted over time by the open systems movement and then the PC wave of computing (c/f Microsoft’s late-to-party-but-still-laudable Trusted Computing Initiative, kicked off in 1999).
(As an aside, pretty much any time I have pushed back against such efforts being promoted by IT vendors, my discomfort has been driven by the presentation of well-established, accepted and required truth as something new.)
As computing moved into the mainstream, security best practices came into alignment with the broader world of contextual risk, itself well known in military and safety-critical circles. This world had taken a different path, moving from risk (and litigation) avoidance to the (more affordable) risk management philosophy we still follow today.
Having bottomed out the best practices required for managing and mitigating the probability and impact of risk, attention has turned to resolving any issues when they arise. The parallel fields of business continuity and disaster recovery are testament to these efforts, their principles later applied to IT security not least in terms of how to deal with zero-day exploits.
Here’s the ‘however’: while this philosophical path from risk mitigation to breach resolution remains a constant, it is based on assumptions that are difficult to maintain where IT is involved. Not only that decisions, once made, can be stuck with; but also the idea that by dealing with the tangible assets (be they physical, electronic or software-based), the stuff they deal with is also protected.
In the case of IT, the ‘stuff’ is called data. When the Jericho Forum got together to discuss the changing nature of IT security, they did so because they saw the protect-the-border approach to security as being a recipe for disaster. Their focus moved to identity management as a result, proposing models now espoused by Google’s zero-trust, end-to-end encrypted BeyondCorp initiative.
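To make that shift of mindset concrete, here is a toy sketch, assuming a simple shared-secret token rather than the richer identity and device signals a real zero-trust scheme such as BeyondCorp would use: the authorisation decision is made per request, based on who is asking and with what credentials, never on which network the request arrives from.

import hmac, hashlib

SHARED_SECRET = b"demo-only-secret"   # hypothetical; a real system would use a proper identity provider

def issue_token(user, resource):
    # Stand-in for an identity provider binding a user to a resource
    return hmac.new(SHARED_SECRET, f"{user}:{resource}".encode(), hashlib.sha256).hexdigest()

def authorise(user, resource, token, source_ip):
    # The decision rests on the verified identity; source_ip is deliberately ignored
    expected = issue_token(user, resource)
    return hmac.compare_digest(expected, token)

# An off-network request with a valid token is allowed; an on-premise one with a forged token is not.
print(authorise("alice", "payroll", issue_token("alice", "payroll"), source_ip="203.0.113.9"))  # True
print(authorise("mallory", "payroll", "forged-token", source_ip="10.0.0.5"))                    # False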
And so, in this day and age, traditional asset-based security runs uneasily alongside the school of thought that says data, not devices, needs to be protected. I was faced with this dichotomy myself when I released my (more asset-centric) book on Security Architecture to pointed criticism from luminaries of the latter camp.
The truth, however, is that neither perspective is completely right — and indeed, both start from the wrong point. Specifically, neither considers what to do when things don’t work out, when (as all too often) a breach or data leak takes place. The pervading view from security professionals is “well, you didn’t listen” which is not the most helpful in a time of crisis, however accurate.
The mindset of all parties, that we are trying to prevent things from going wrong to the best of our abilities, is fundamentally flawed. The core notion (which goes back to the origins of both IT security and broader risk management) is that if we did everything right, we would pretty much ensure bad things didn’t happen.
This notion is false. It is not simply that bad things are going to happen anyway, in the same way that a vehicle crash might happen even with all the right protections in place. That may be true, but if we do have a car accident we are typically distraught, in the knowledge that we were, figuratively and statistically, one of the unlucky ones.
A far better framing of the nature of IT security is similar to that of disease. Of course we can look to avoid illness but when we succumb, we recognise it is part of the tapestry of life. Prevention will never fully work; recovery is a necessary and well-understood set of steps.
Indeed, so it is with human weakness, in that sometimes we succumb to our less positive traits. Far from being just another analogy, this is a fundamental input to our understanding of how digital technology is as likely to be misused as used for positive reasons.
The overall consequence is that we should be accepting the consequences of such weakness as the norm, not treating any incidents as exceptions. We should also be thinking about risk management and mitigation in the same way as we think of hygiene when dealing with germs — as necessary as it is imperfect and, sometimes, counterproductive.
Thinking more broadly, even as I write this we are getting far better at using data to understand the spread of disease. It makes absolute sense that we should be investing in tools that look at how computer attacks spread virally, and how they can potentially be contained and their wider impact minimised.
It’s not for me to say whether Retarus’ product is any better or worse than any other (as I haven’t tested it), but the company’s philosophy was decidedly refreshing. Thinking in terms of ‘patient zero’ outbreak detection and mitigation is a good start for any security vendor and, more importantly, it should be part of the mindset adopted by any organisation wanting to define its attitude to IT security in this increasingly digital world.
December 2017
12-08 – 2018 Predictions
2018 Predictions
2018 predictions — the two-edged sword of technology
Predictions are like buses, none for ages and then several come along at once. Also like buses, they are slower than you would like and only take you part of the way. Also like buses, they are brightly coloured and full of chatter that you would rather not have in your morning commute. They are sometimes cold, and may have the remains of somebody else’s take-out happy meal in the corner of the seat. Also like buses, they are an analogy that should not be taken too far, lest they lose the point. Like buses.
With this in mind, here are my technology predictions for 2018. I’ve been very lucky to work across a number of verticals over the past couple of years, including public and private transport, retail, finance, government and healthcare — while I can’t name check every project, I’m nonetheless grateful for the experience and knowledge this has brought, which I feed into the below. I’d also like to thank my podcast co-host Simon Townsend for allowing me to test many of these ideas.
Finally, one prediction I can’t make is whether this list will cause any feedback or debate — nonetheless, I would welcome any comments you might have, and I will endeavour to address them.
1. GDPR will be a costly, inadequate mess
Don’t get me wrong, GDPR is a really good idea. As a lawyer said to me a couple of weeks ago, it is a combination of the UK Data Protection Act, plus the best practices that have evolved around it, now put into law at a European level with a large fine associated. The regulations are also likely to become the basis for other countries — if you are going to trade with Europe, you might as well set it as the baseline, goes the thinking. All well and good so far.
Meanwhile, it’s an incredible, expensive (and necessary, if you’re a consumer that cares about your data rights) mountain to climb for any organisation that processes or stores your data. The deadline for compliance is May 25th, which is about as likely to be hit as I am to finally get myself the six-pack I wanted when I was 25.
No doubt GDPR will one day be achieved, but the fact is that it is already out of date. Notions of data aggregation and potentially toxic combinations (for example, combining credit and social records to show whether or not someone is eligible for insurance) are not just likely, but unavoidable: ‘compliant’ organisations will still be in no better place to protect the interests of their customers than currently.
The challenges, risks and sheer inadequacy of GDPR can be summed up by a single tweet sent by an otherwise unknown traveller — “If anyone has a boyfriend called Ben on the Bournemouth - Manchester train right now, he’s just told his friends he’s cheating on you. Dump his ass x.” Whoever the sender “@emilyshepss” or indeed, “Ben” might be, the consequences to the privacy of either cannot be handled by any data legislation currently in force.
2. Artificial Intelligence will create silos of smartness
Artificial Intelligence (AI) is a logical consequence of how we apply algorithms to data. It’s as inevitable as maths, as the ability our own brains have to evaluate and draw conclusions. It’s also subject to a great deal of hype and speculation, much of which tends to follow that old, flawed futurist assumption: that a current trend maps a linear course leading to an inevitable conclusion. But the future is not linear. Technological matters are subject to the laws of unintended consequences and of unexpected complexity: every time we create something new, it causes new situations which are beyond its ability to deal with.
So, yes, what we call AI will change (and already is changing) the world. Moore’s and associated laws are making previously impossible computations possible and, indeed, they will become the expectation. Machine learning systems are fundamental to the idea of self-driving cars, for example; meanwhile voice, image recognition and so on are having their day. However these are still a long way from any notion of intelligence, artificial or otherwise.
So, yes, absolutely look at how algorithms can deliver real-time analysis, self-learning rules and so on. But look beyond the AI label, at what a product or service can actually do. You can read Gigaom’s research report on where AI can make a difference to the enterprise, here.
In most cases, there will be a question of scope: a system that can save you money on heating by ‘learning’ the nature of your home or data centre has got to be a good thing, for example. Over time we shall see these create new types of complexity, as we look to integrate individual silos of smartness (and their massive data sets) — my prediction is that such integration work will keep us busy for the next year or so, even as learning systems continue to evolve.
3. 5G will become just another expectation
Strip away the techno-babble around 5G and we have a very fast wireless networking protocol designed to handle many more devices than currently — it does this, in principle, by operating at higher frequencies, across shorter distances than current mobile masts (so we’ll need more of them, albeit in smaller boxes). Nobody quite knows how the global roll-out of 5G will take place — questions like who should pay for it will pervade, even though things are clearer than they were. And so on and so on.
But when all’s said and done, it will set the baseline for whatever people use it for, i.e. everything they possibly can. Think 4K video calls, in fact 4K everything, and it’s already not hard to see how anything less than 5G will come as a disappointment. Meanwhile every device under the sun will be looking to connect to every other, exchanging as much data as it possibly can. The technology world is a strange one, with massive expectations being imposed on each layer of the stack without any real sense of needing to take responsibility.
We’ve seen it before. The inefficient software practices of 1990s Microsoft drove the need for processor upgrades and led Intel to a healthy profit, illustrating the vested interests of the industry to make the networking and hardware platforms faster and better. We all gain as a result, if ‘gain’ can be measured in terms of being able to see your gran in high definition on a wall screen from the other side of the world. But after the hype, 5G will become just another standard release, a way marker on the road to techno-utopia.
On the upside, it may lead to a simpler networking infrastructure. More of a hope than a prediction would be the general adoption of some kind of mesh integration between Wifi and 5G, taking away the handoff pain for both people, and devices, that move around. There will always be a place for multiple standards (such as the energy-efficient Zigbee for IoT) but 5G’s physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.
4. Attitudes to autonomous vehicles will normalize
The good news is, car manufacturers saw this coming. They are already planning for that inevitable moment, when public perception goes from, “Who’d want robot cars?” to “Why would I want to own a car?” It’s a familiar phenomenon, an almost 1984-level of doublethink where people go from one mindset to another seemingly overnight, without noticing and in some cases, seemingly disparaging the characters they once were. We saw it with personal computers, with mobile phones, with flat screen TVs — in the latter case, the world went from “nah, that’s never going to happen” to recycling sites being inundated with perfectly usable screens (and a wave of people getting huge cast-off tellies).
And so, we will see, over the next year or so, self-driving vehicles hit our roads. What drives this phenomenon is simple: we know, deep down, that robot cars are safer — not because they are inevitably, inherently safe, but because human drivers are inevitably, inherently dangerous. And autonomous vehicles will get safer still. And they are able to pick us up at 3 in the morning and take us home.
The consequences will be fascinating to watch. First that attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right? We can also expect to see a far more varied range of wheeled transport (and otherwise — what’s wrong with the notion of flying unicorn deliveries?) — indeed, with hybrid forms, the very notion of roads is called into question.
There will be data, privacy, security and safety ramifications that need to be dealt with — consider the current ethical debate between leaving young people without taxis late at night, versus the possible consequences of sharing a robot Uber with a potential molester. And I recall a very interesting conversation with my son, about who would get third or fourth dibs at the autonomous vehicle ferrying drunken revellers (who are not always the cleanest of souls) to their beds.
Above all, business models will move from physical to virtual, from products to services. The industry knows this, variously calling vehicles ‘tin boxes on wheels’ while investing in car sharing, delivery and other service-based models. Of course (as Apple and others have shown), good engineering continues to command a premium even in the service-based economy: competition will come from Tesla as much as Uber, or whatever replaces its self-sabotaging approach to world domination.
Such changes will take time but in the short term, we can fully expect a mindset shift from the general populace.
5. When Bitcoins collapse, blockchains will pervade
The concept that “money doesn’t actually exist” can be difficult to get across, particularly as it makes such a difference to the lives of, well, everybody. Money can buy health, comfort and a good meal; it can also deliver representations of wealth, from high street bling to mediterranean gin palaces. Of course money exists, I’m holding some in my hand, says anyone who wants to argue against the point.
Yet, still, it doesn’t. It is a mathematical construct originally conceived to simplify the exchange of value, to offer persistence to an otherwise transitory notion. From a situation where you’d have to prove whether you gave the chap some fish before he’d give you that wood he offered, you can just take the cash and buy wood wherever you choose. It’s not an accident of speech that pound notes still say, “I promise to pay the bearer on demand…”
While original currencies may have been teeth or shells (happy days if you happened to live near a beach), they moved to metals in order to bring some stability in a rather dodgy market. Forgery remains an enormous problem in part because we maintain a belief that money exists, even though it doesn’t. That dodgy-looking coin still spends, once it is part of the system.
And so to the inexorable rise of Bitcoin, which has emerged from nowhere to become a global currency — in much the same way as the dodgy coin, it is accepted simply because people agree to use it in a transaction. Bitcoin has a chequered reputation, probably unfairly given that our traditional dollars and cents are just as likely to be used for gun-running or drug dealing as any virtual dosh. It’s also a bubble that looks highly likely to burst, and soon — no doubt some pundits will take that as a proof point of the demise of cryptocurrency.
Their certainty may be premature. Not only will Bitcoin itself pervade (albeit at a lower valuation), but the genie is already out of the bottle as banks and others experiment with the economic models made possible by “distributed ledger” architectures such as The Blockchain, i.e. the one supporting Bitcoin. Such models are a work in progress: the idea that a single such ledger can manage all the transactions in the world (financial and otherwise) is clearly flawed.
But blockchains, in general, hold a key as they deal with that single most important reason why currency existed in the first place — to prove a promise. This principle holds in areas way beyond money, or indeed, value exchange — food and pharmaceutical, art and music can all benefit from knowing what was agreed or planned, and how it took place. Architectures will evolve (for example with sidechains) but the blockchain principle can apply wherever the risk of fraud could also exist, which is just about everywhere.
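As a minimal sketch of the ‘prove a promise’ idea (illustrative only; a real distributed ledger adds distribution, consensus and digital signatures on top of this): each record carries the hash of the record before it, so any later tampering with an agreed promise is detectable.

import hashlib, json

def block_hash(block):
    # Deterministic hash of a record
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, promise):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"promise": promise, "prev": prev})

def verify(chain):
    # Every record must still point at the hash of the record before it
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
append(ledger, "Alice delivers fish to Bob")
append(ledger, "Bob delivers wood to Alice")
print(verify(ledger))    # True: the recorded promises are intact

ledger[0]["promise"] = "Alice delivers nothing"
print(verify(ledger))    # False: rewriting history breaks the chain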
6. The world will keep on turning
There we have it. I could have added other things — for example, there’s a high chance that we will see another major security breach and/or leak; I’d also love to see a return to data and facts on the world’s political stage, rather than the current tub-thumping and playing fast and loose with the truth. I’m keen to see breakthroughs in healthcare from IoT, and I also expect some major use of technology that hadn’t been considered to arrive, enter the mainstream and become the norm — if I knew what it was, I’d be a very rich man. Even if money doesn’t exist.
Truth is, and despite the daily dose of disappointment that comes with reading the news, these are exciting times to be alive. 2018 promises to be a year as full of innovation as previous years, with all the blessings and curses that it brings. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.”
On that, and with all it brings, it only remains to wish the best of the season, and of 2018 to you and yours. All the best!
12-18 – Do Organisations Think They Are Secure
Do Organisations Think They Are Secure
Unanswered question for the day: Why do organizations think they are secure?
Recently I’ve been asking friends, colleagues and clients what they think are the most important unanswered questions in tech. I thank Ian Murphy, who works in the security industry, for the following conundrum:
“Why do companies with little or no real security experience think they know their environment better than anyone else? That is, because it’s ‘their’ network, they feel best placed to identify attackers (even those with advanced techniques who hide in the normal traffic noise)?”
It’s a good one. I’ve been working in IT for decades and I remain baffled how we lock up our houses, secure our vehicles, seal away our valuables and yet, in the corporate environment, senior executives still question the need for security expertise. Ignorance, it would appear, is bliss.
While the problem may be technological, I suspect the answer is inherently human. Back in the day, when I was an IT director for a subsidiary of Alcatel, it took a major security incident on my watch to trigger any release of monies from my superiors.
Now, I recognise that I am already looking guilty of transference — wasn’t I the person responsible for securing the network and servers? While this is true, anyone who has worked in this environment knows just how complicated it can be to ask for security budget. I know I tried.
And indeed, I remember the feeling of “I told you so” even as I worked with my team to rebuild the previous day’s data sources from (offline - phew) optical backup drives after what turned out to be an internal breach. Suddenly the cheque book was open and we could self-authorise training courses and enforce stricter policies.
So, I’m not sure organizations do think they are inherently secure, or that it’s nobody else’s business. I think, a bit like that feeling as we head down a dirt track on a mountain bike, we simply hope that the bad things won’t happen. That might have worked back in the early 1990s, at least some of the time.
The difference now however, is that bad things are happening, all the time. We have moved from a state of security by exception (where probability was relatively low, even if impact was high) to a situation where all organisations are under constant attack.
This isn’t the latest missive from the industry, keen to sell you some security solution, it’s a fact. The probability is very high that, right now, an automated software package will be trying to infiltrate your corporate boundary. The impact is as high as it ever was, so overall risk has increased.
Somehow however, we still retain the attitude that ignoring the problem will get us through. Denial has been a fantastically useful tool in our evolution, without which we may not have survived as a race.
Like the shell on a tortoise, however, it wasn’t designed to deal with the threats of the technological age. Indeed, the smarter cybercriminals are basing their strategies on our hope against hope that the bad things will not happen to us.
So, the answer to the question is potentially not that companies think they know their environment better. Rather, that they don’t want some third party coming in and rubbing their noses in their own ignorance.
Indeed, I’ve heard of cases (perhaps we all have) where organizations have decided against an audit, lest it turn up things that will have to be dealt with. Which is quite staggering, if you think about it.
What’s the answer? Sometimes it takes a major breach to shake board-level execs out of their reverie. However, relying on this approach is possibly the highest-risk strategy of all.
2018
Posts from 2018.
January 2018
01-02 – Innovation And Governance
Innovation And Governance
What do innovation and governance mean?
How is positive value being delivered by tech and tech firms?
- Feeding the innovation flywheel
This covers leading-edge technological developments, not least the SMAIC of social, mobile, analytics/AI, IoT, Cloud. But without going in-depth on any, I see these as manifestations of where we are right now. More interesting is what they make possible in terms of (say) algorithmic retail, better online customer experiences etc. Still to come are VR/AR, NFV, 5G etc which will keep the flywheel of innovation spinning — but it is the flywheel that matters most.
- How innovation happens (and how it doesn’t)
This covers startups and enterprises alike, and how opportunities for growth can be achieved. I’ve been doing some really interesting work on business model evolution for enterprises, the platform and network economy and so on, as well as work across verticals (banking, government, retail), all of which feeds this. This incorporates management theory and economics, as well as ecosystems and collaborative models.
- Innovation in practice
Building on the collaborative theme, this covers agility and open approaches, including DevOps and other methodologies. Having worked as an agile development consultant and an operations manager, I’m no stranger to any of this; it’s still fascinating how the field tries, but doesn’t quite succeed, to balance the principles with the practicalities. This also covers manifestations such as Robotic Process Automation (RPA), reflecting the analyst and consulting need to bottle and sell best practice and software.
- Governance by design
Technology is a two-edged sword — by accident or aforethought, it can undermine its own value. Cybersecurity, GDPR and so on are a reflection of the need to manage risk, direct and indirect, of the technological age. It’s worth being quite forthright on the weaknesses of GDPR. Equally, areas like data aggregation and ‘toxic’ combinations can be covered — it is up to every organisation to take responsibility for their actions, and no doubt some interesting tech solutions will emerge to help.
In terms of research programmes and so on, I think some areas drop out of this quite naturally. It might be worth doing something on cybersecurity — e.g. SecOps and/or applying machine learning to threat detection — some interesting vendors in those spaces. I can/should also pick up on the DevOps theme and build on the report. I’m also curious about CSR, green IT etc and latest efforts there — the overall take should be,
01-02 – To Anyone Expecting Techno Utopia
To Anyone Expecting Techno Utopia
To anyone expecting techno-utopia: don’t hold your breath
Last week, as I took some time out of all things tech, I bookmarked an article from Rick Webb about the failures of the internet, and the culpability of those driving its adoption. It’s a harsh self-indictment of all those driving the digital revolution, specifically the faith placed in the information-rich utopia that would undoubtedly emerge. “I believed that the world would be a better place if everyone had a voice. I believed that the world would be a better place if we all had no secrets. But so far, the evidence points to an inescapable conclusion: we were all wrong,” he wrote.
Well, Rick, perhaps I can put your mind at ease. The good news is, you still are: wrong, that is. Even as a smaller number of suddenly powerful people in rapid-growth startups continued to present a utopian vision of the future, the rest of us were treating it as it was, and is — a set of tools that can be used for good, ill and everything else in between. There is no “all wrong” just as there never was an “all right”. Even the update to the article (“‘We’ is a poor word choice here. Of course there were people — many people — who saw this coming”) is starting from the wrong place, as it still assumes that the options are binary.
Now, I didn’t start this article to give a kicking to some random stranger I have never talked to directly (Hi, Rick). However a pervading notion colours the thinking coming out of Silicon Valley, a re-worked (or should I say re-imagined) version of the fact that history belongs to the winners. The only voices that have merit, goes the logic, are those of the more successful leaders, or the thinkers that guide them, at any moment in time. Any questioning of these voices simply reinforces the point.
It may be true according to one set of metrics: would Steve Jobs have succeeded if he had followed anything other than his counter-culture-based narrative? Probably not, and nor would any other of the series of accidental leaders (hat-tip to Bob Cringely). Such single-minded perspectives drive the innovation we see around us, but they have a sell-by date, which means that people falling in line with them may start to feel duped when they start to creak at the seams. Yes, you were probably wrong to think that any absolute vision could be wholly true, but when in history has that ever been the case?
So, no, the Internet has not created utopia, nor could it ever have done. In order to progress, we need people who fix single-mindedly on a vision — indeed, some (like Steve Silberman) might suggest it is a built-in element of our psyche that has enabled us to survive this long. At the same time however, we need to build realism into our innovations — I’d use the term ‘governance by design’ to describe a better starting point than the frankly irresponsible approaches adopted by some of our digital heroes, with those who founded Twitter being the latest of the bunch.
Let’s not be downhearted, rather, let’s recognise that no technology can enable us to transcend our discomfortingly complex, conflicted and ambivalent nature as a species. Technology has already done a great deal of good, but meanwhile we will continue to let ourselves down; we are all individuals, with fundamentally different views depending on our own backgrounds and psyches; and even the notions of good and bad are shifting, as we understand better what our digital tools can do. To ignore or deny these truths is worse than naive, it is allowing the bad things to happen.
Picture credit: Flickr/Lucas Theis. Public Domain.
01-03 – Can Blockchain Transform Healthcare
Can Blockchain Transform Healthcare
Can Blockchain ‘transform’ healthcare? Simple answer: no.
On Carts and Horses
We’d all love to see technology improve patient care, reduce diagnosis and therapy times and otherwise help us live longer, wouldn’t we — so could Blockchain hold the key? In a recent article on PoliticsHome, Member of Parliament John Mann highlighted a couple of areas where Blockchain might offer “transformative potential” to the UK’s National Health Service (NHS):
“By enabling ambulance workers, paramedics, and A&E staff instant access to medical records updated in real-time, medical care could be carefully targeted to a person’s specific needs. The ability to upload results of scans, blood samples and test results and have them accessed by the next practitioner near-instantly, without the risk of error offers the chance to improve survival rates in emergency care and improve care standards across our health service,” he wrote.
While the Rt. Hon. Mr Mann (or his advisors) may be correct in principle, he is falling into an age-old trap by issuing this kind of statement without caveat. Blockchain is a powerful tool (as I noted in my 2018 predictions), but it isn’t true that it enables anything, any more than a chisel enables a sculptor to sculpt. Sure, it could help, but it needs to be in the right hands and used in the right way.
Of course, some might say, this point should be taken as read. If that were the case however, we would not see money repeatedly thrown at technology as a singular solution to otherwise insurmountable problems, in healthcare and beyond. And then failing to deliver, to general wailing and gnashing of teeth.
To continue the sculptor analogy, if the poor fellow is asked to deliver the thing in impossible timescales, designed by committee and with conflicting expectations of what it will look like, the result will probably be a mess. As sculpture, so technology, whatever the chisel manufacturers or IT vendors might have us think.
A massively complex organisation seemingly ruled (depending on who you ask) by metrics, efficiency, litigation prevention and so on will see such criteria impact any solution — either by design, or in consequence. I’m not critiquing the NHS here, just observing that IT will always take a subordinate role to its context. It’s the same in the US or any other country.
Perhaps Blockchain could help, but so could any number of technologies — if used in the right way. Indeed, you could take the above quote and insert it into any data management capability or service from the past four decades, or indeed, mobile, IoT and so on, and it would still make sense. Indeed, as healthcare writer Dan Munro notes on HealthStandards.com,
“The technical reality is that all of the features of a blockchain – except double spending – can easily be created with other tools that are readily available – and cheap – without actually being a ‘blockchain’.”
While I’m well aware of both the many potential uses of blockchain in healthcare and the dangers of simply being one of “those armed with spears” (according to my old colleague and healthcare author Jody Ranck), the horse needs to be put before the cart: tech will not, by itself, solve any problems for anyone. This is more than a glib riposte to a quote taken out of context. For Blockchain to work in the way Mr Mann suggests, it would have to be rolled out widely, across a health service. Flagging up technologies is easy, but delivering transformation is astonishingly hard: our representatives need to understand, accept and design this in from the outset, for any “transformative potential” to be achieved.
(Jon was CTO of healthcare startup MedicalPath2Safety)
01-11 – Will GDPR fail?
Will GDPR fail?
Will GDPR fail? Beyond the new regulation lies a long, hard journey
The General Data Protection Regulation (GDPR) is a good thing, right? A recent discussion with Fieldfisher lawyer Hazel Grant confirmed that, despite its voluminous and bureaucratic outer appearance, it contains the essence of data protection law as present in the UK for over a decade, combined with the current state of best practice. Given that online privacy knows no borders, it will undoubtedly be better to have a single framework rather than being linked to any single nation or jurisdiction. Word is (for example, from this round-table discussion) that it will form the backbone of privacy law globally — if you’re a multinational, goes the thinking, it makes more sense to implement it once, rather than having different grades of regulation depending on the geography.
Neither is GDPR necessarily the end of the world for ill-prepared (and, potentially, previously in denial) organisations. “The vital thing is to plan, rather than panic,” says Freeform Dynamics’ Bryan Betts, who attended said round-table. “GDPR compliance may not be that onerous, especially if you already handle customer data fairly and transparently.” Even if surveys suggest that 75% of marketing data will be “obsolete”, the 80/20 rule suggests that organisations could do without that bit anyway — ‘keep everything’ is a strategy for the hopeless hoarder, not the business leader.
Yes, GDPR may be expensive to implement, particularly for organisations who have played fast and loose with our privacy in the past, and who now face potential fines with real impact. Yes, it may involve a learning curve as people get their heads around the 260-page document it involves (quick tip: get someone in that understands it). Yes it might be a pain for consumers, who may be faced with incessant questions from online providers, each carefully worded to avoid any suggestion that the customer was duped or cajoled. And yes, organisations such as Axciom, operating in the background of marketing data acquisition, may fear their business models are under threat and respond accordingly.
No doubt it will achieve many of the goals it set out to achieve. All well and good. And yet, and yet… a looming question is, will it actually make us safer, or protect our privacy online? As in, will the potential bad things be countered, such that they are less likely to happen? Please forgive my over-simplistic language, but isn’t this what it all boils down to? For a number of reasons (most of which are speculative, this is the future we are talking about after all), this may not be the case.
Challenges of scope, consent, loopholes, aggregation, unexpected consequences and speed of innovation
First, we have unaddressed challenges of scope. Worth a read is “Global authority on adblocking” PageFair’s work comparing GDPR’s impact on Facebook and Google: simply put, Facebook needs to ask people if it can use status posts as input to its advertising engines, whereas Google does not need to know who someone is — its AdWords algorithms generate information based on search requests, location and so on, without being personally identifiable. “Google’s AdWords product has the benefit that it can be modified to operate entirely outside the scope of the GDPR,” states the article.
In other words, data can be processed and people can be targeted with marketing materials whether or not they are “personally identifiable”. This (targeting) is only an issue if it is an issue, but it does seem to be one of the areas that GDPR was set up to address. Keep in mind that other providers, including Facebook, can take a similar tack if they choose. “Nothing in the GDPR prohibits Facebook from serving non-targeted ads,” says Michael Kaufmann.
Bringing Facebook into the discussion leads to the question of consent, specifically GDPR’s need for “clear affirmative action” around “agreement to the processing of personal data relating to him or her” — law firm Taylor Wessing provides a good explanation, as does the UK Information Commissioner’s Office (ICO). On the outside, Facebook has done a pretty good job of building Privacy by Design into its services, giving users granular access to what they allow on their timelines.
I don’t know about you however, but my grasp on consent is tenuous at best — I couldn’t say what it is that I have consented to across Apple, Google, Facebook, Microsoft and so on’s services over the past few years. Indeed, who remembers the “A privacy reminder from Google” thing, where essentially we agreed to whatever we had to so we could keep getting the service? For sure, we have a choice not to agree, and I am sure (having just checked) that it is all set out in nice, clear English.
But who has done anything other than follow the “Scroll down and click “I agree” when you’re ready to continue to Search” instruction, versus removing oneself from the Google-enabled online world? The consent debate becomes a Hobson’s choice: either agree, or cut yourself off from all things digital. This is crucial: by saying yes, we are acknowledging that one of the most powerful companies in the world, and its partners, can use our data. Having done so, any thoughts about meanwhile restricting access to the local Mom and Pop hardware shop become laughable.
And as an aside, consent isn’t always required, for example if another legal requirement needs the data to be processed (here’s an HR-based view). Given the smorgasbord of regulations out there, it’s not hard to imagine an organisation using one law’s requisites as a loophole against those of GDPR. For example (and I welcome a lawyer’s view on this), the ability to be forgotten necessitates remembering who was forgotten and why, which somewhat undermines the principle: even GDPR could be used as a defence against the consent provisions of GDPR. I’m not trying to be picky, more illustrating how any loopholes are open to confusion or exploitation.
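To show that ‘forgetting means remembering’ tension in miniature, here is a hypothetical sketch (invented names and data, and certainly not legal advice): to honour an erasure request and stop the person being silently re-added from a stale import, the system keeps a salted hash of the identifier, which is itself a residual record that the person existed and asked to be forgotten.

import hashlib

SALT = b"demo-salt"                     # hypothetical
customers = {"jon@example.com": {"name": "Jon", "subscribed": True}}
suppressed = set()                       # the residual memory of who was forgotten

def forget(email):
    customers.pop(email, None)
    suppressed.add(hashlib.sha256(SALT + email.encode()).hexdigest())

def re_add(email, record):
    # Honouring the original erasure means checking the suppression list first
    if hashlib.sha256(SALT + email.encode()).hexdigest() in suppressed:
        return False
    customers[email] = record
    return True

forget("jon@example.com")
print(re_add("jon@example.com", {"name": "Jon"}))   # False: remembered as forgotten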
Building on the fact that we are dealing with not one, but multiple global corporations, and a squillion of smaller data users, the issue of aggregation looms equally large. The issue is not only around how individual data gorillas are looking for new ways to exploit information in the name of innovation, but also what happens when third parties plug into APIs and slurp potentially innocuous feeds in ways that unexpectedly affect privacy. Let’s say your startup creates a new learning algorithm and plugs it into the Twitter and Strava APIs, and it determines and then posts online provable examples of dangerous cycling. Who is at fault at that point? You? Twitter? Strava?
Or indeed, does it really matter given that you have no money and the cat is already out of the bag? What if the data is reputation-damaging, or directly usable by law enforcement? This leads to the law of unintended consequences — for example, in some cases, data may be subpoenaed for good reason (in this case, a murder investigation) but a raft of less valid data requests are likely: indeed, these are driving the current review of the UK’s Investigatory Powers Act (a.k.a. The Snooper’s Charter). Indeed, nothing’s stopping government agencies from acting as the startup in the previous paragraph, and/or creating laws to enable that to happen, in the name of fighting crime.
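To make the aggregation risk concrete, here is a hypothetical sketch with invented data; no real APIs are called. Neither feed is especially sensitive on its own, but joining them by time and place links a named account to behaviour it never chose to publish.

# Feed 1: "anonymous" activity data (think a fitness-tracking export)
rides = [
    {"time": "2018-01-11T08:02", "place": "High Street", "speed_kmh": 52},
    {"time": "2018-01-11T08:30", "place": "Riverside Path", "speed_kmh": 14},
]

# Feed 2: public, named posts with a time and place attached
posts = [
    {"handle": "@cyclist_jon", "time": "2018-01-11T08:02", "place": "High Street",
     "text": "Flying to work this morning!"},
]

# The "innovation": join on time and place, then flag dangerous riding by name
for ride in rides:
    for post in posts:
        if ride["time"] == post["time"] and ride["place"] == post["place"]:
            if ride["speed_kmh"] > 30:
                print(post["handle"], "was doing", ride["speed_kmh"], "km/h on", ride["place"])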
Perhaps the biggest issue of all is speed of innovation, which moves far faster than regulation. Like it or dislike it, much of innovation’s value comes from ‘leveraging’ (horrible word, but more positive than ‘exploiting’) areas of potential difference — for example dis-intermediating an inefficient or costly model of working (e.g. FinTech vs traditional banking), or finding new ways to connect things (e.g. social media). Innovation and speed go hand in hand, as nobody wants tenth-mover advantage.
Innovation and bacterial mutation are not so different, in that any individual attempt happens without any real chance of success (ask venture capitalists, or indeed Bill Gates): it’s only with hindsight that the next generation of winners emerges. However our governance models act as though the next big thing will happen like the last. The seemingly innocuous “let’s identify new ways of doing things” attitude is exactly what will lead providers to circumvent GDPR in ways the regulators haven’t considered. The latter group are not good at acting quickly: what we now call GDPR was first mooted in June 2011, and replaces laws adopted in October 1995.
To illustrate what the future might hold, consider this example of how voice recognition can detect ‘emotional state’ — the question of how this can impact privacy, without needing to be personally identifiable, is not addressed by GDPR. Given that nobody can predict the future, we should at least have governance mechanisms that can react to it.
We need to protect people, not simply data
If GDPR cannot address these issues, what becomes of it? For a start, it becomes an expensive burden which fails to deliver on some pretty fundamental goals. It will inevitably need to be replaced, but any changes will be flawed if they follow the same splat-the-rat, “see a problem and try to regulate it away” approach, built on a fundamental, naive optimism that law can be implemented even as the context, and its supporting artefacts, shift beyond recognition. As an interesting aside, the financial regulation world is already responding to the fact that such an approach is neither possible, nor desirable.
Tackling this challenge requires very different ways of thinking, starting from the very source. GDPR’s fundamental purpose is not to protect the privacy of citizens but to protect data: that’s the D and P of GDPR. This needs to change — it is people that need protection, whatever data is stored about them and however it is used. And indeed, whoever is in charge: right now, the most likely force that will undermine the provisions of GDPR is ourselves.
Almost three years ago, I proffered the idea of a virtual bill of rights (a couple of weeks later, Web founder Tim Berners-Lee did similar). My point then, and it remains now, is that we can’t legislate on data. Rather, we need to afford the virtual/digital world the same rights and responsibilities as the physical world. So, there is no such thing as cyber-theft; we simply have theft; same for fraud, extortion, bullying and so on. It should be that simple — this also means that all existing laws need to be considered in the light of what is now possible. If it is to be illegal to market to me without my consent, that should be possible whether someone holds information on me or not. And so on, and so on.
Even if we arrive at a world in which everything is known, we still need to act as though we are humans. Perhaps it isn’t that different to village life — back in the day, when we all lived in each others’ pockets and it was very hard to keep anything secret, we first learned the principles of acceptable behaviour. All that really needs to happen is to accept that such age-old ideas, of courtesy, respect and basic rights (not to be stolen from, defrauded or conned, or harangued for money and so on) still stand.
For now, we are where we are. What can organisations do in the meantime? Well, get on and protect the data they hold about their customers, that much is still true. Perhaps we will see a GDPR 2, far simpler but further-reaching than the existing framework; I could never advise anyone to wait for it, however, as doing so would be illegal (even if GDPR is subject to a ‘grace period’).
Even as you look to respond to existing regulation, however, you should be looking beyond it and towards a moment when the privacy regulators recognise current approaches will never be effective. Don’t expect GDPR to make us any better protected against annoying consent-based advertising, higher-risk aggregation-based insights or the biggest challenge of all, downright manipulation. To recall what Sun Microsystems founder Scott McNealy once said, “Privacy is dead, deal with it.” Indeed, we need to deal with it, in a way that will actually deliver.
[Disclaimer: I haven’t read the full GDPR documentation end-to-end, nor do I intend to.]
01-19 – Why Is Digital So Hard
Why Is Digital So Hard
If you’re looking to ‘do’ digital transformation, read this first
Barely a day goes past in the tech press without some mention of the importance of digital transformation to businesses; each accompanied by a caveat that nobody really knows what it is. Without engaging further in this debate, what are the absolutes?
1. That it’s all about the data. All of it.
However we phrase things, the singular, significant change that technology has brought over the past 100 years is the ability to generate, store, process and transmit inordinate quantities of data. Whatever ‘revolution’ or ‘wave’ we might want to say we are in right now, be it digital, industrial or whatever, there is only really one — the information revolution.
This trend continues with a certain linearity: even as we double the number of pixels on a sensor, for example, or transistors on a processor, our abilities increase at a steadier pace. In business terms, the challenges of integration, capacity planning or service level management are much the same now as they were a decade ago; we are simply working at a higher level of resolution.
2. That technology is enabling us to do new things
This still leaves room for breakthroughs, as technology passes certain thresholds. We saw, for example, the quite sudden demise of the cathode-ray television in favour of LCD screens, or indeed that of film versus digital cameras. What we see as waves are quite often technologies passing these thresholds — so, for example, the Internet of Things is a consequence of having sufficient connectivity, with low-cost sensors and ‘edge’ processing.
It’s useful to compare these moments of “release of innovation” with the previous point, that many consequences are subject to evolutionary, not revolutionary impact. This dichotomy drives much technology-related marketing: a new advance can have significant specific impacts even if it does not change the world; however it will be presented as enabling the latter, even if it will only really achieve the former. Case in point — digital cameras have not made us better photographers, and nor has CRM made organisations better at customer service.
3. That we tend to do the easy stuff, as consumers and businesses
Many innovations happen through ‘pull’ rather than ‘push’. We can spend our lives putting together complex business cases that demonstrate clear ROI, but even as we do we know they are really lip service to process. At work as at home, a great deal of technology adoption happens because it makes our lives easier — those explaining the extraordinary rise of Amazon, Facebook and so on emphasise ecosystems, platforms and networks, and treat our own laziness and desire for a simple life as an afterthought.
The CBA factor is of inordinate importance, and yet gets little mention: it’s like we are embarrassed to admit our own weaknesses. Interestingly, its corollary (that of “Resistance to Change”) does get a mention when looking to explain large project failures. But here’s the fact: many of the great technology advances occur because they are easier, and they stumble when they are not. The fact people still like books or printed reports can be explained as much through ease of use, as through the need to hold something physical. The perceived need for ‘transformation’ comes from the idea that against such inertia, some big change is necessary.
4. That nobody knows what the next big thing will be
As my old boss and mentor once said, innovations are like route markers — it’s important to see them as points on a journey rather than a destination. However, doing so goes against two major schools of thought. The first comes from technology vendors who want (you) to believe that their latest box of tricks will indeed bring nirvana. And the second, from consulting firms, whose thought leadership role diminishes significantly if their advice is framed in terms of observational mentoring (a good thing) as opposed to somehow holding the keys to the kingdom.
There is no promised land, and neither is there a crevasse we will all fall into, but we still persist in looking through the wrong end of the telescope, framing business needs in terms of solutions rather than putting the former first. Sometimes this is done so subtly by marketers it can be difficult to spot: back in the days of “service oriented architecture” for example, it took me a while to realise that its main proponents happened to have a specific product in mind (an “enterprise service bus”). Doing so isn’t necessarily wrong, but it’s worth following the money.
5. That we are not yet “there”, nor will we ever be
As a species, particularly in times of great change, we need a level of certainty at a very deep, psychological level. And it is messing with our ability to act. It’s too easy to pander to the need for a clear answer, buying into current rhetoric with a hope that the latest advance might really work this time. All sides are at fault — those purveying solutions, those buying them and those acting as trusted third parties — but who wants to hear anyone say “it’s not going to work”? Each time round the cycle, we come up with new terms and subtly change their definitions — industry 4.0 or smart manufacturing might mean the same, or very different things depending on who you ask, a symptom of our desperation to understand, and adapt to, what is going on (after all, haven’t we been told to ‘adapt or die’?).
Interestingly, the companies that we applaud, or fear the most, may well be those who care the least. Amazon, Uber, Tesla, the rest of them don’t know what’s around the corner, and what is more they don’t see this as a priority — they simply want to still be in the game this time next year. Rightly so, as they were born into uncertainty, forged through some indecipherable and unrepeatable combination of factors. Why did Facebook succeed when Myspace, Bebo or any other high-valuation predecessor did not, for example? Above all, these organisations have an attitude to change, a mindset that sees uncertainty, and therefore responsiveness, as a norm. Jeff Bezos’ articulation of Amazon’s “Day One” approach to business strategy offers a fantastically simple, yet utterly profound illustration.
6. Responsiveness is the answer, however you package it
Where does this leave us? The bottom line is that “digital transformation” is the latest attempt to provide a solid response to uncertain times. It isn’t actually relevant what it is, other than a touchstone term which will soon be replaced (you can thank the marketers for that). So debate it by all means, just as you might once have debated business process management, or social networking, or hybrid cloud, or whatever tickles your fancy. As you do so however, recognise such procrastination for what it is.
And then, once you are done, take action, over and over again. Transformation doesn’t matter, unless we are talking about the transformation of mindsets and attitudes, from build-to-last to do-it-fast. That’s why agile methodologies such as DevOps are so important, not in themselves (yes, that would be putting the cart before the horse again) but because they give businesses an approach to innovate at high speed. As we continue on this data-driven journey, as complexity becomes the norm, traditional attitudes to change at the top of business, or indeed our institutions, become less and less tenable. The bets we make on the future become less and less important; what matters more is our ability to make new ones.
01-24 – Cybersecurity
Cybersecurity
Still complacent about cybersecurity? The clock is ticking.
In the land of lies, damned lies and statistics, the insurance industry may be one of the more trustworthy sources. After all, it is founded on maths, its actuarial background built into every policy and claim. As purveyors of protection against all risks, insurers care less about which risks are more important, and more about the relationship between premiums and pay-outs. Indeed, getting this equation wrong is potentially the biggest risk the industry faces.
So, when Allianz reports that cybersecurity is the second most important business risk, according to over 1,900 respondents globally, we would do well to sit up and listen. To put this in context, over the past five years it has climbed from 15th position, so why? First and simply, the number and complexity of cyber attacks are growing. This is to be expected, as it mirrors technology’s increasing impact and complexity: the bad things are dark mirrors of the good.
The organization also cites GDPR as a significant driver, not in causing breaches but in how they may result in considerable fines. “Many businesses are waking up to the fact they have potential vulnerabilities, and the realization that privacy issues create hard costs will emerge fairly quickly once GDPR is implemented,” says Emy Donavan, Global Head of Cyber at Allianz Global Corporate & Specialty (AGCS).
But wait, there is more to this. The Allianz survey is global, across 80 countries. An appendix shows how Nigeria sees theft and fraud as the biggest cause of business risk, while in Croatia it is legislative change, and so on. In the USA and UK meanwhile, as well as Austria, Belgium, Brazil, Australia, India, South Africa and Singapore, cyber incidents take top spot in the risk charts. Cyber is also the number one risk in the Media, Financial Services and Legal, and indeed the Technology and Comms sectors. It’s also the top risk for mid-sized companies.
And, to cap it all, let’s just look at the number one business risk — business interruption (BI). “Whether it results from factory fires, destroyed shipping containers, or, increasingly, cyber incidents, BI can have a tremendous effect on a company’s revenues.” What’s that you say, cyber incidents are one of the main causes of the main business risk? Indeed, they come first in the list, according to respondents, before fire/explosion or natural catastrophe.
In other words, while cyber incidents pose a significant challenge by themselves, their consequences can potentially be even greater. The good news is that organizations large and small are well aware of the challenge, are they not? Well, no, says AGCS UK CEO, Brian Kirwan. “Far from being over-hyped, the threat is under-appreciated and not always well understood.”
I’m not sure any additional comment is required, other than that the conundrum around cybersecurity remains as astonishing as ever. Behind the figures lies a simple truth, that business continuity today means data continuity. While no person is indispensable in an organization, take away its sensory capabilities and you render it useless.
On the upside, and rightly so, insurance companies such as Allianz do have insurance products, and indeed whole practices, to help organizations protect themselves against such risks. But this is missing the point. While it is difficult to get a clear answer (that’s the nature of denial) the corporate position still appears to be that dealing with cyber-threats is too complicated to address, so we’ll all just cope with the consequences.
This frontier town attitude never worked, and it is going to become even less viable really soon. We are at the start of a wave of machine learning, which will grow rapidly in scale over the next few years: you don’t have to be a guru to work out that one of the softest targets for semi-intelligent bots will be the highly vulnerable defences many organizations still have around their data centers. Corporate psychology will shift quickly from hoping cyber incidents will happen to somebody else, to finding that their paltry and permeable protections have already been breached.
01-26 – Trust In Media Is Collapsing
Trust In Media Is Collapsing
Trust in media is collapsing. Is that such a bad thing?
You’d have to have had your head under a bushel of wheat not to have noticed the comprehensive destruction of faith in popular media. Exhibit A is Edelman’s 2018 trust barometer, which shows media organisations as the least trusted type of global institution for the first time. Goodness knows, they have fought hard for such an accolade.
Social media is also taking a battering, according to a Verge survey of Americans from the end of last year. Despite the catastrophe of the financial crisis, the scourge of the one-percent and so on, we still put more trust in our banks than we do in Facebook or Twitter. It’s difficult to think of a greater indictment of those so-well-intended organizations.
But how much does this matter? To answer this question, we need to recognize how such findings reflect a deeper set of transitions. Back in the day, it was generally assumed that the learned, and their seats of learning, could be considered as the source of authority. We normalized this doctrine to the detriment of any other opinion: if “the doctor says so,” then the question was not for dispute.
Recent decades have kicked such perspectives into touch. The very real democratization of data has blown the doors off the ivory towers, for better or worse: an authority position is no longer enough in the face of being able to find things out for oneself. At the same time, as a good friend reminded me, opinions are like a-holes (and by extension, not all are particularly helpful or indeed pleasant).
This is where we are. For the first time in history we have very real, factual data, more than we know what to do with or understand. It could be argued that we might rely on our media more than ever, and indeed we are seeing this in terms of polarization and sometimes-unquestioning acceptance of certain sites — perhaps harking back to the need for authority in a maelstrom of uncertainty. Anecdotal evidence suggests this is less of an issue for the young than for their ‘superiors by age’, an area worthy of detailed study.
In the meantime, we are seeing an explosion of tiny epiphanies, a realisation that “just because someone in a position of authority says so” is not enough to convince. If anything is in a state of collapse, it is not the media but our own naivety. If we look at the Verge data from this perspective for example, we see simply that people see no reason to trust Facebook, or Twitter. And why would they? Until very recently, such sites have done all they can to distance themselves from any notion of responsibility.
We’ve seen such a collapse in the belief in authority before. The French revolution started as a rebellion against the complacency and arrogance of established institutions and, just as now, things turned nasty. The parallels are worthy of more detailed review, not only the more obvious (such as the death of Robespierre, as he tried to curtail the corrupt insanity that ensued) but also — did you know that the distribution of fake news became an indictable offence (in English, here)?
As with many states of trauma, the French revolution had to hit rock bottom before it could become more sensible, and society could once again function. This may also be the case for the data-driven age. We can see some causes for hope, not least from the Edelman survey: “Voices of expertise are now regaining credibility,” it reports. “Technical experts, financial industry analysts, and successful entrepreneurs now register credibility levels of 50 percent or higher.”
So, perhaps our collapse in trust in the media is in fact a symptom of our increasing desire to engage with reality. While we are less likely to accept authority for its own sake, we recognize the fact that we cannot know everything: the opportunity, then, is to develop information sources that rely on provenance, on provable expertise and the ability to articulate how and why things are as they are. The fact that no mainstream news organization is taking up this mantle, leaves the door wide open for a newcomer to do so.
February 2018
02-06 – We Need New Lawmakers More Than Laws
We Need New Lawmakers More Than Laws
The current, “sudden” plague of deepfake videos is just the latest in a series of “unexpected” events caused by “unplanned” use of technology. More will occur, and indeed are already happening: in a similar vein are the computer-generated mash-up videos on YouTube that care more about eyeballs than child protection; the ongoing boom in cyber-trolling; bitcoin pimping and pumping. To be expected are misuse of augmented and virtual reality, 3D printing and robotics. Wait, 3D-printing of guns is so five years ago.
As I’ve written before, such bleak illustrations are the yang to innovation’s yin: trolling, for example, is the downside to the explosion of transparency illustrated by the ongoing, global wave of #MeToo revelations (in its traditional, not salacious media sense). The present day is multi-dimensional and complex, and it is often difficult to separate the positives from the negatives: so much so that we, and our legislative bodies, act like rabbits in headlights, doing little more than watch as the future unfolds before our eyes.
Or, we try to address the challenges using ill-equipped mechanisms — was it Einstein who said, “We can’t solve problems by using the same kind of thinking we used when we created them”? Nice words, but this is what we are doing, wholesale and globally: lawmakers are taking fifteen years to create laws such as GDPR which, while good as far as they go, are also immediately insufficient; meanwhile the court of public opinion is both creating, and driven by, power-hungry vested interests; and service providers operate stable-door approaches to policy.
What’s the answer? To quote another adage, “If you want to get there, don’t start from here.” We need to start our governance processes from the perspective of the future, rather than the past, assessing where society will be in five, ten, fifteen years’ time. In practice this means accepting that we will be living in a fully digitized, augmented world. The genie is out of the bottle, so we need to move focus from dealing with the potential consequences of magic, and towards accepting that a world with genies needs protections.
In practical terms, this means applying the same principles of societal fair play, collective conscience and individual freedom to the virtual world, as the physical. I’m not a lawmaker but I keep coming back to the idea that our data should be considered as ourselves: so for example, granting access to a pornographic virtual or 3D-printed robot representation of an individual, against their will, should be considered to be abuse. It’s also why speed cameras can be exploitative, if retrofitted to roads as money generators.
Right now, we are trying to contain the new wine of the digital age in very old, and highly permeable skins created over previous centuries. I remain optimistic: we shall no doubt look back on this era as a time of great change, with all its ups and downs. I also remain confident in the democratizing power of data, for all its current, quite messy state, and that we shall start seeing more tech-savvy approaches to legal and policy processes.
Meanwhile, perhaps we shall rely on younger, ‘digital native’ generations to deliver the new thinking required, or maybe — is this too big an ask? — those currently running our institutions and corporations will have the epiphanies required to start delivering on our legislative needs, societal or contractual. Yes, I remain optimistic and confident that we will get there; however, when this actually happens is anybody’s guess. We are not out of the woods yet.
02-07 – What Kinds Of Vehicles
What Kinds Of Vehicles
Luxmobiles and flying unicorns: how diversification and proliferation will rule the routes
While it’s all go for robotic systems in the automotive space right now, we seem to be suffering from collective linear thinking — that a driverless car is like a car, just without a driver. In reality nothing could be further from the truth, for a number of reasons, not least that current designs are built around the need for multi-use, protecting people sitting in a certain configuration and so on. When you can click your fingers and get a ‘mobility solution’ along in five minutes, chances are it will be designed more around single-need, safe and efficient use.
This requirement for flexibility influences most aspects of car-based transport today, as well as logistical transportation. The latter is also impacted by restrictions caused by the notion of a driver: so, while container trucks may be able to deliver their modular loads, they still have face-forward cabins.
To understand the future of vehicular transport, we need to consider a world in which all such restrictions are removed, or at least applied differently depending on the payloads being carried. Airbags will still be vital for people for example, but not so much for pizzas or car parts.
With this in mind, what kinds of vehicles can we expect to see? With no evidence in fact (that’s the great thing about speculation), we just might see the following come to our road- and other-ways:
Luxmobiles and budget pods. As the model moves from multi-vehicle ownership to single-vehicle with the increased use of autonomous transport, it seems inevitable that people will look to brands so they can travel (particularly longer distances) in comfort. At the same time, this may be overkill when nipping to the shops or sending the kids to school. If indeed, we still do the former, given…
Pizza box scooters. For efficiency reasons, robot vehicles will inevitably tend towards being as small as possible, to the extent that they may become little bigger than the objects they are transporting. With echoes of Han Solo tripping over droids in Star Wars, this situation is highly likely to become a nightmare as our transport corridors become saturated with tiny vehicles, potentially leading to…
Drone swarms. Okay, some level of control will be needed, particularly for consignments that are to travel by air. Scheduling efficiency and autonomous transport governance may well lead to groups of robot transport moving en masse, either under their own steam or indeed, using some more powerful mechanism for the longer hauls. Indeed, this could result in…
Super Strings. I was thinking about massive people movers, then I remembered, oh wait, they are called trains. But I wouldn’t be surprised if some Smart Alexa doesn’t come up with a standardized, mechanical or magnetic coupling device that enables a string of automobiles to travel long distances with a lot less fuel. Of course, these are not restricted only to land or sky, with…
Pub-subs. Our once-disused waterways and canal systems could find a new lease of life carrying smaller loads across very long distances, very efficiently, on or under the surface. The upside is that they will have minimal disruptive effect on the visual environment; however their more general impact would need to be monitored as with other, diversifying forms of transport including…
Late Night Vompods. Not every consequence of autonomous transport will be positive. As a discussion with my son revealed, the notion of being able to hitch a ride home after a few drinks (good, safer etc) needs to be weighed against the potential for spillages, or worse, by drunken revellers. A more serious aspect is how to ensure the personal safety of passengers, for example against unwarranted advances. On the upside, at least we will have…
Flying Unicorns. No reason exists why vehicles need to be restricted to roads, or even wheels, as advances in robotics offer the potential to use routes without tarmac. Legislation is currently unclear on whether a parcel-carrying bipedal robot would be permitted to use a public footpath. Or indeed, in quadruped form, a forest path. Or indeed, if fitted with propellers (or wings), whether it could lift itself over rivers and gates. Such a ‘vehicle’ would blend into the environment better if it was given a more natural form (say, a horse); then the question would be where to put the antenna…
While none of these examples may manifest themselves in practice, diversification is the name of the game; we can also expect proliferation, indeed this is pretty much guaranteed due to the law of unexpected volumes (a corollary of the law of lowering thresholds, where a new technology or service is used to its absolute maximum).
Yes, it’s going to be a noisy, trip-hazard-filled nightmare. But at least we will have pizza.
02-13 – Tech To Uk Gov
Tech To Uk Gov
Technology to UK Government: Nice idea, but it won’t work
In this ultra-communicative world we now occupy, part of the challenge faced by any authority is to get its message out there. It’s not enough to do the right thing, quietly and in a corner: you have to put it on a press release. No, more than that, you have to make a statement that shows you mean business.
Such as, “We’re not going to rule out taking legislative action if we need to do it,” the money shot, the soundbite that has taken the headlines about the UK’s funding of £600,000’s worth of terrorism-related image recognition.
It’s worth unpicking this action, and this statement. First a negative: the figure sounds like a lot, until you actually think about it. To put it in perspective, the governmental InnovateUK funding body has allocated £6,251,375,051 in grants over 14 years, or about half a billion a year, to technology projects. That figure is so big precisely because £600K doesn’t actually buy you much.
On the upside, it’s a big enough figure to show more than a passing interest. The government is investing in AI, and not just that, it is spending on the kind of AI that might make a difference. Which has to be seen as a good thing.
Flipping back to negatives, the trouble is, the headline news (that the software can therefore recognize jihadist material) is subject to the law of consequences. Simply put, if software becomes very good at recognising black and white scarves, jihadists will stop wearing them.
The straightforward answer here is that algorithms will continue to evolve, to take evolving imagery into account — but this doesn’t account for the complexity and breadth of the system. For example recruiters can take a different tack (consider the funny-meme-based online recruitment strategy adopted by UK far-right group Britain First).
I’m not saying this in some kind of yah-boo-sucks-it-ain’t-gonna-work rant. Thinkers and do-ers in both governments and corporations know that you can’t just stick an AI band-aid on a complex problem. They also know that a million bucks is a drop in the technological ocean. Which begs the question — why, therefore, is this headline news?
Behind the statements are messages of intent: that public and private institutions alike recognise they have created a monster they don’t know how to control; and that they need to work together to deal with the consequences. The intriguing thing is, to talk about it, they make use of the same, democratizing yet dangerous tools that cause the damage in the first place.
We are all in the same boat, left trying to interpret what is being said wherever it comes from. Perhaps — indeed, this could be a prediction — we will soon be able to apply AI to all communications from any organization, filter out the nasty stuff and see what is behind the messaging. In the meantime, we need to recognize that what we see on the surface is as much about controlling the narrative as representing what is going on.
02-15 – The Seven Deadly Threats
The Seven Deadly Threats
The Seven Deadly Sins of the Digital World
We are all, in the words of Agent Smith and Agent Jones, “only human.” This paradoxical quality, humanity, which powers our finest achievements and leads us to the pinnacle of artistic endeavour, is also that which feeds our less desirable traits, actions and moments of downright cruelty.
As with many religious themes, the seven deadly sins of lust, gluttony, avarice, sloth, wrath, envy and pride are rooted in earthly reality. So how do they map onto how we interact with technology, in our interactions or in terms of the services we create? Let’s take a look.
- Lust is a simple one, manifested in what we now label ‘inappropriate content’. In times past we would harbour images of each other, driving ourselves insane with our own thoughts. The printed image, then the video, and now algorithm-driven fakery leave little to the imagination and feed the monster within.
- Gluttony in the digital age maps onto the dreaded data deluge, as we find ourselves drawn into social sites through an urge to know more, to share, to play, to participate. In this age of plenty we time-waste, we procrastinate, we become unable, like a person whose desire to ingest has trumped any ability to resist.
- Avarice, or greed, can be seen throughout our culture of increasing expectation, as we all want what we perceive others having. In this take-first, ask-later culture, startups discuss ‘data monetisation’ whilst quietly ignoring whose data it was anyway. And for some, it means cybercrime in all its highly lucrative, complex form.
- Sloth boils down to plain stupidity or laziness, in each of us a belief that the bad things will happen to other people, a lack of effort to engage in simple things like doing backups. Sloth makes the bad guys’ job more straightforward, as they conjure new types of attack based on our inability to resist clicking a link.
- Wrath used to emerge every now and then, but in the pressure cooker of social media it has become a beast to be fed, to its creators’ chagrin. Trolls act as acolytes, carefully triggering indignant energies to sustain the dark animal lurking within. Knee-jerk, righteous justification has replaced calm statesmanship, seen as a behavior to be rewarded and applauded rather than managed.
- Pride is the sin to rule them all, the inability to admit when we are wrong. It is the social silence, the absence of a ‘dislike’ button, the quiet covering up of failure or shredding of the files of corruption. Unsure what the right thing is, but somehow sure we are on the right team, we become complicit in the resulting, silent inaction.
What a dark note to end upon (which harks to sadness, the eighth ‘sin’ in the original list). Our collective conscience can seem but dust, the immolated ashes of sacrifice to the new gods of the digital age. But this is part of a far greater process in which what we knew is torn apart, so it can emerge stronger. Principles that go back millennia are stronger than any wave of tooling, however insurmountably powerful and complex it may seem at the time.
02-20 – Tpus And All That
Tpus And All That
Why AI on a chip is the start of the next IT explosion
It’s game on in the AI-on-a-chip race. Alongside Nvidia’s successes turning Graphics Processing Units into massively performant compute devices (culminating in last year’s release of the ‘Volta’ V100 GPU), we have ARM releasing its ‘Project Trillium’ machine learning processor on Valentine’s Day and Intel making noises around bringing the fruits of its Nervana acquisition to market, currently at sample stage. Microsoft with Catapult, Google with its TPU — if you haven’t got some silicon AI going on at the moment, you are missing out. So, what’s going on?
We have certainly come a long way since John von Neumann first introduced the register-based computer processing architecture. That simple design (here’s a nice picture from Princeton), which fires data and instructions into an Arithmetic Logic Unit (ALU), still sits at the heart of what we call CPU cores today, though with their pre-fetching and other layers of smartness, these are a souped-up, drag-racing version of von Neumann’s original design.
Three things about this bling-covered workhorse of a design. First, it was designed to work really well for a certain type of maths — the clue’s in the ‘A’ of the ALU — which is great for arithmetic, but less great for other kinds of maths, such as anything requiring floating point operations.
This brings us to the second, based on a principle named after Alonzo Church and Alan Turing: that anything you can compute with one computer of any type, you can program with another. Don’t ask me to explain the proof, but the result is that you can do any kind of processing on an ALU; it just might take a bit longer than on a processor conceived for the purpose.
There’s a third point: it’s all maths, really. We talk about computing in terms of data, insight, algorithms, programming, processing or whatever are the latest buzzwords of the day, but behind it all, deep in the bowels of any piece of silicon, are chunks of electronics that allow us to do something mathematical. Like add, or subtract, or multiply. It’s a corollary of the Church-Turing principle that anything you want to program, you can convert into a series of mathematical formulae to be calculated.
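By way of illustration, here is a minimal sketch in plain NumPy (nothing to do with any particular chip): a neural network layer boils down to a matrix multiply, an add and a simple non-linearity, which is exactly the kind of maths that specialised silicon is built around.

    import numpy as np

    def dense_layer(x, weights, bias):
        """One neural network layer: a matrix multiply, an add, and a ReLU non-linearity."""
        return np.maximum(0, x @ weights + bias)

    # Toy sizes: a batch of 8 inputs, 256 features in, 128 features out.
    x = np.random.rand(8, 256)
    w = np.random.rand(256, 128)
    b = np.zeros(128)
    y = dense_layer(x, w, b)
    print(y.shape)  # (8, 128) -- almost all the work is the 256-by-128 multiply

Stack enough of these layers together and the vast majority of the compute is one operation repeated over and over: multiply and accumulate.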
Why (oh why) is this relevant? Because the name of the game is efficiency. Just because a CPU can process anything we throw at it, it won’t always be the best tool for the job. You can go to school on a tractor but it will be neither fast nor comfortable. We have CPUs as the de facto mechanism because the cost of fabrication has traditionally been so great that we have gone for a one-size-fits-all solution. The downside, however many bells and whistles we have attached to them, is that CPUs will have an overhead when they try to do things they weren’t designed for.
Traditionally, the answer has been to create different processors. In 1987 for example, Intel released the 80387 as a floating-point processing companion to its rather popular 386 CPU: while this was later incorporated on the same piece of silicon (and exists within today’s cores), it’s still a separate processing capability. It’s also fair to say that the Graphical Processing Unit, designed to display information on a screen and therefore geared up around the mathematics of symbol processing, was never conceived for use beyond graphics — but the fact was, and is, that it can do certain maths much more efficiently than a CPU. Ergo, Nvidia’s dramatic recent success.
It’s only more recently that anyone beyond a core (sorry) of organisations has had the ability to create silicon. The costs are astonishingly high, largely because the margins for error are astonishingly small: this meant that while engineers may have desired to create specialized hardware back in the 1980s, it was not possible to beat general-purpose machines at a workable cost (“most of the training times are actually slower or moderately faster than on a serial workstation,” says this architectural survey). But over the decades (and no doubt thanks to computers), high-tech manufacturing has become more affordable: the same logic that has given us personalised Coca-Cola and the orange Kit Kat also acts in the favour of those thinking that it would be nice to make their own computer chips.
It’s actually quite fascinating (to be fair, I once worked in chip design, but I’m sure it has a broader appeal) to peruse the paper released about Google’s AI-oriented Tensor Processing Unit (TPU). Here’s a key paragraph, which says, in other words, that the elements of the processor were ’simply’ geared up around the specific needs of neural networks (NNs):
“The TPU succeeded because of the large—but not too large—matrix multiply unit; the substantial software-controlled on-chip memory; the ability to run whole inference models to reduce dependence on host CPU; a single-threaded, deterministic execution model that proved to be a good match to 99th-percentile response time limits; enough flexibility to match the NNs of 2017 as well as of 2013; the omission of general-purpose features that enabled a small and low power die despite the larger datapath and memory; the use of 8-bit integers by the quantized applications; and that applications were written using TensorFlow, which made it easy to port them to the TPU at high-performance rather than them having to be rewritten to run well on the very different TPU hardware.”
The result is that the chip can scale to workloads far beyond other types of hardware. In much the same way as the 80387, it’s designed to do one thing well, and it does it very well indeed. Back to the Church-Turing principle, general-purpose processors could do so as well, but will inevitably be slower — so, if you can afford to make your own, more specific chip, why wouldn’t you? Indeed, the paper suggests that the best is still to come: “We expect that many will build successors that will raise the bar even higher.”
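One detail from the TPU paper’s list is worth unpacking: the use of 8-bit integers. Quantisation trades a little precision for much smaller, cheaper arithmetic. Here is a rough sketch of the idea (a deliberate simplification; real schemes also handle zero points, per-channel scales and so on):

    import numpy as np

    def quantize_int8(values):
        """Map 32-bit floats onto 8-bit integers: keep one scale factor, then round."""
        scale = np.abs(values).max() / 127.0
        return np.round(values / scale).astype(np.int8), scale

    weights = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(weights)
    restored = q.astype(np.float32) * scale
    print(np.abs(weights - restored).max())  # small error, a quarter of the memory

Eight-bit multipliers are far smaller and cheaper in silicon than 32-bit floating point units, which is a large part of how a purpose-built chip wins on power and die area.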
Ultimately, it’s easier to think of computer chips as combinations of task-specific modules, each designed around doing a certain kind of maths. We have now arrived at a point where the door is opening for those who want to design their own modules, or architect them into processors aimed at a specific purpose. So this isn’t about speed but architecture. In much the same way as home-grown apps transformed the data management industry, so we can do the same with chips.
We shouldn’t be surprised that so many players are getting in on the game, nor that significant performance hikes are being seen; more important is the likely impact of diversification, as the bar falls still further on chip design. In much the same way as apps transformed the mobile device industry, we might be about to see the same with computer chips, particularly for so-called ‘edge’ devices appearing in homes, offices and factories. This industry may be decades old but the real game may only be just starting, leading to advances we have not yet even considered.
02-21 – Malicious Use Of Artificial Intelligence
Malicious Use Of Artificial Intelligence
What’s missing from The Malicious Use of Artificial Intelligence report?
Probability vs plausibility
Only a fool would dare criticise the report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” coming as it does from such an august set of bodies — to quote: “researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy.”
Cripes, that’s quite a list. But let me at least try to summarize its 100 pages of dense text.
- There’s a handy executive summary and introduction
- 38 pages cover all the things that could go wrong
- 15 pages describe ways to not let them happen
- 33 pages cover the people and materials referenced
It’s difficult to argue with any of it, on the surface at least. Particularly the overall message: there could be bad things, and we should not sleepwalk into them. While this is welcome advice, one factor is noticeable by its absence. Strangely, as the report comes from groups for whom the scientific method should be as familiar as brushing one’s teeth in the morning, it lacks any discussion, or indeed conception, of the nature of risk.
Risk, as security and continuity professionals know, is a mathematical construct, the product of probability and impact. The report itself makes repeated use of the term ‘plausible’, to describe AI’s progress, potential targets and possible outcomes. Beyond this, there is little definition.
We can all conjure disaster scenarios, but it is not until we apply our expertise and experience to assessing the risk that we can prioritise and (hopefully) mitigate any risks that emerge.
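For what that assessment looks like in practice, here is a minimal sketch (the scenarios and scores are invented for illustration) in which risk is simply probability multiplied by impact, and the ranking is what lets you prioritise:

    # Illustrative scenarios and scores only -- the numbers are made up.
    scenarios = [
        {"name": "AI-generated phishing at scale", "probability": 0.6, "impact": 7},
        {"name": "Autonomous drone swarm attack", "probability": 0.05, "impact": 10},
        {"name": "Data poisoning of a public model", "probability": 0.3, "impact": 5},
    ]

    for s in scenarios:
        s["risk"] = s["probability"] * s["impact"]

    # Highest risk first: this is the prioritisation step the report leaves out.
    for s in sorted(scenarios, key=lambda s: s["risk"], reverse=True):
        print(f"{s['name']}: {s['risk']:.2f}")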
So, without this rather important element, what can we distil from its pages? First we can perceive the report’s underlying purpose, to bring together the dialogues of a number of disparate groups. “There remain many disagreements between the co-authors of this report,” it states, showing the reality, that it is a work in progress: to coin an old consultancy phrase, “I’m sorry their report is so long, we didn’t have time to make it shorter.”
A second, laudable goal was to bring AI into the public discourse. In this it has succeeded, measured in terms of column inches — though in doing so, it is in danger of achieving no more than adding to the well-meant, yet already heaped pile of hype and anti-hype surrounding AI. Writing in the MIT Technology Review, Rodney Brooks’ The Seven Deadly Sins of AI Predictions offers a pretty good analysis of this phenomenon.
Finally, buried within its pages is an important admission on the part of “throw the doors open” organisations such as the Electronic Frontier Foundation. A priority area is stated as “Exploring Different Openness Models” — that’s right, it’s not as simple as making everything open by default, particularly if bad guys and rogue governments have the same access as good, community-spirited folks like the rest of us. To whit:
“The potential misuses of AI technology surveyed in the Scenarios and Security Domains sections suggest a downside to openly sharing all new capabilities and algorithms by default: it increases the power of tools available to malicious actors.”
So, no, the report should not be thrown out wholesale: it collates some good, if incomplete, thinking. It should however be seen for what it is: a non-scientific work in progress, an undistilled set of perspectives from a range of academic researchers on an emerging capability. Indeed, three of the four recommendations advise more priority (and potentially more money, therefore) to be allocated towards AI-related research areas. That’s a standard tactic for academics as much as for consulting firms.
Perhaps the fourth and final recommendation is most telling: “Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.” Reports such as these will only make sense if they involve the technology organizations developing the capabilities in question, as well as reinforcing the point that our policy makers need to be an order of magnitude more tech-savvy than they are currently.
On the upside, there will (plausibly) be as many good things coming out of AI as bad. “These technologies have many widely beneficial applications,” states the report. So there remains cause for optimism, even as we look to gain a better handle on future reality.
March 2018
03-07 – Dealing With Gdpr
Dealing With Gdpr
For most firms GDPR is an opportunity, not a threat
Many conversations around GDPR seem to follow a similar sequence to Dave Lister’s experience in the opening episode of Red Dwarf.
Holly: They’re all dead. Everybody’s dead, Dave.
Lister: Peterson isn’t, is he?
Holly: Everybody’s dead, Dave!
Lister: Not Chen!
Holly: Gordon Bennett! Yes, Chen. Everyone. Everybody’s dead, Dave!
Lister: Rimmer?
Holly: He’s dead, Dave. Everybody is dead. Everybody is dead, Dave.
Lister: Wait. Are you trying to tell me everybody’s dead?
So, yes, GDPR affects all kinds of data. Big data, small data, structured and unstructured data, online and offline, backup and archive, open or grey, digital or paper-based. It’s all data, and therefore GDPR applies to it.
This simultaneously makes the task of compliance very easy, and very difficult. Easy, because decision makers don’t have to worry about what data is involved. And very difficult, because few organizations have a clear handle on what data is stored where. That filing cabinet in the back of a warehouse, the stack of old tapes on top of a cupboard, that rack of servers which were turned off… yeah, all of them.
Because that’s not the focus of GDPR, you know, the technology gubbins, complexity and all that. The regulation quite deliberately focuses on personally identifiable information and its potential impact on people, rather than worrying about the particular ramifications of this or that historical solution, process or lack of one.
At the same time, this does suggest quite a challenge. “But I don’t know what I have!” is a fair response, even if it is tinged with an element of panic. Here’s some other good news however — laws around data protection, discovery, disclosure and so on never distinguished between the media upon which data was stored, nor its location.
You were always liable, and still are. The difference is that we now have a more consistent framework (which means fewer loopholes), a likelihood of stronger enforcement and indeed, potentially bigger fines. To whit, one conversation I had with a local business: “So, this is all stuff we should have been doing anyway?” Indeed.
Of course, this doesn’t make it any easier. It is unsurprising that technology companies and consulting firms, legal advisors and other third parties are lining up to help us all deal with the situation: supply is created by, and is doing its level best to catalyse, demand. Search and information management tools vendors are making hay, and frankly, rightly so if they solve a problem.
If I had one criticism however, it is that standard IT vendor and consulting trick of only asking the questions they can answer. When you have a hammer, all the world is a nail, goes the adage. Even if a nail-filled world may seem attractive to purveyors of fine hammers, they should still be asking to what purpose the nails are to be used.
To whit for example, KPMG’s quick scan of unstructured data to identify (say) credit card numbers. Sure, it may serve a purpose. But the rhetoric — “Complete coverage, get in control over unstructured data on premises and in the cloud” — implies that a single piece of (no doubt clever) pattern matching software can somehow solve a goodly element of your GDPR woes.
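For a sense of what such a scan actually does under the hood, here is a minimal sketch, assuming nothing about KPMG’s actual tooling: a regular expression to find 16-digit candidates, plus a Luhn checksum to weed out false positives.

    import re

    def luhn_valid(number):
        """Standard Luhn checksum used by payment card numbers."""
        digits = [int(d) for d in number][::-1]
        total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
        return total % 10 == 0

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

    def find_card_numbers(text):
        """Return candidate card numbers in free text that also pass the Luhn check."""
        candidates = (re.sub(r"[ -]", "", m.group()) for m in CARD_PATTERN.finditer(text))
        return [c for c in candidates if luhn_valid(c)]

    print(find_card_numbers("Invoice paid with 4111 1111 1111 1111 yesterday."))

Useful, certainly, but finding card numbers tells you nothing about why you hold them, or whether you should; that is the point that follows.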
As I have written before, if you want to get there, don’t start from the place which looks at data and says “Is this bit OK? What about this bit?” A better starting point is the regulation, its rules around the kinds of data you can process and why, as documented by the Information Commissioner’s Office (ICO). The “lawful bases” offer a great deal of clarity, and start discussions from the right point.
Mapping an understanding of what you want to do with data, against what data you need, is not cause for concern. In the vast majority of cases, this is no different to what you would do when developing an information management strategy, undertaking a process modelling exercise, or otherwise understanding what you need to do business efficiently and effectively.
The thing GDPR rules out is use of personal data people didn’t want you to have, to fulfil purposes they didn’t want you to achieve. For example, use of ‘cold lists’ by direct marketing agencies may become more trouble than it is worth — both the agency, and the organization contracting them, become culpable. Equally, selling someone’s data against their will. That sort of thing.
But meanwhile, if you were thinking of harvesting maximum amounts of data about, well, anybody, because you were thinking you could be monetizing or otherwise leveraging it, or you were buying data from others and looking to use it to sell people things, goods or services, you should probably look for other ways to make money that are less, ahm, exploitative.
But if you have concerns about GDPR, and you are ‘just’ a traditional business doing traditional kinds of things, you have an opportunity to revisit your information management strategy, policies and so on. If these are out of date, chances are your business is running less efficiently than it could be, so how about spending to save, building in compliance in the process?
Across the board right now, you can get up to speed with what GDPR means for the kind of business you run, using the free helplines the regulators (such as the ICO) offer. If you are concerned, speak to a lawyer. And indeed, talk to vendors and consulting firms about how they are helping their customers, but be aware that their perspective will link to the solutions they offer.
Thank you to Criteo and Veritas, whose briefings and articles were very useful background when writing this article. As an online advertising firm, Criteo is keenly aware of questions around personal vs pseudonymous data, as well as the legal bases for processing. Veritas offers solutions for analysis of unstructured data sources, and has GDPR modules and methodologies available.
03-21 – Facebook Cambridge Analytica
Facebook Cambridge Analytica
Slaughterhouse Blues: What have we learned about Facebook that we didn’t already know?
This week’s backlash against Facebook’s practices was as predictable as it was unavoidable. The smoking gun is in the hands of Cambridge Analytica, either the epitome of corruption or simply the poor child who happened to be in the orchard when the farmer walked by, even though every other kid had been stealing apples with impunity.
Let’s pick this apart. The backlash was predictable — why, yes. We have been handing over our data in the vain hope that some random collection of profit-driven third parties might nonetheless act in our best interests. How very naive of us all, but we did so apparently eyes-open: if you are getting something for free, then you are the product, goes the adage.
What we perhaps didn’t realise was just how literally this might be taken. We, or our virtual representations, have been bundled into containers and sold to whoever took an interest, feeding algorithms like mincing machines that have, in turn, fed highly targeted and manipulative campaigns. Keep in mind that the perpetrators maintain that they have not broken any rules.
And indeed, maybe they haven’t. The ‘crime’, if there is one, revolves around the potential that Facebook’s data (well, it’s ours really, but let’s come back to that) was in some way ‘breached’. If that is the case, at least privacy law has something to go on. Let’s be realistic, however: this is trying to fit an ethical square peg into a legal round hole.
Meanwhile, the practice of Facebook data harvesting via quizzes etcetera carries on. You don’t have to be as underhand as this: simply have a page of puppy pictures, get lots of people to like it and then sell it on, we’ll never know as we are already drowning in the crashing waves of digital. We already know what is going on, deep down, but we live in hope that the worst will not happen. Optimism is a positive trait, but blind optimism less so.
When we are told exactly what is going on with our data, when it is presented in black and white, it is like a veil is removed from our eyes. People are angry, and perhaps rightly so when they realise the depths to which organisations will stoop even if “they have not broken any rules.” It’s the same, shocked reaction as when someone finds out exactly where their meat comes from, and swears to become vegetarian on the spot.
Equally, the backlash was unavoidable as we arrive at a point of realisation, and therefore responsibility, for what our new, data-oriented powers give us. Back in 2012 I wrote that we were heading towards a Magna Carta moment; more recently I said how GDPR, good as it is, doesn’t deal with the kinds of challenges that we now face. Rest assured that I’m not going to say it all again, but the points still stand.
What about Cambridge Analytica — should the company be hauled over the coals? It’s not as simple as that, as the organisation happened to be in the right (or the wrong) place, at the right time. The chances are that its algorithms are not all that smart, as in, they probably depend on some machine learning or Bayesian techniques that are now well understood. This also means the organisation will not be a lone wolf.
At the same time, doing wrong just because it is possible, or because everyone else is doing it, or because the law isn’t clear, is still doing wrong. We need the debate about digital ethics to become a top governmental priority, and we all need to adjust our consciousnesses, and our consciences, to what is now possible. The time for blind optimism is over.
April 2018
04-20 – When Is A Pitch Not A Pitch? Retrospective Thoughts On Techpitch 4.5
When Is A Pitch Not A Pitch? Retrospective Thoughts On Techpitch 4.5
When is a startup pitch not a pitch? Retrospective thoughts on TechPitch 4.5
This week I was lucky enough to be a judge at the most recent TechPitch 4.5 event in London. I say lucky for a number of reasons: it’s nice to be chosen, of course, but more than that, judging offers a rare opportunity to really think about what’s going on.
The range of candidates was diverse to say the least — from an enterprise-scale AI solution as a service to a widget that you can put on your web site, from a new way of making music to an asset management solution for estate agencies.
Largely because of this diversity, it really was possible to see what made a good pitch and what didn’t. And indeed, why it matters. I’m reminded of a recent conversation with a colleague who fielded a (relatively cold) sales call. “I wasn’t clear on what they were trying to sell me,” she said. “I doubt I’ll be using it.”
While this may appear to be short-term thinking, in these cluttered, time-strapped times we really don’t have the bandwidth to investigate every new possibility that comes along. Failure to realise this significantly shrinks the addressable market, down to the subset of “people who will spend the extra time trying to work out what I didn’t articulate.”
It shouldn’t be necessary to say this but clearly, it is. The presenters at TechPitch 4.5 had only 3 minutes to tell their stories: some, but not all, succeeded. This isn’t the place to run through the qualities of the perfect pitch, but at the very least it should be clear on what the proposition is, why and how it benefits, where it is and what is needed at this point.
Add to that we were a relatively gentle panel: our ethos was not to be antagonistic. In reality however, and as mentioned by fellow judge Lindsey Whelan, some investment panels, VCs and so on take great pleasure in demonstrating their alpha-prowess by belittling the organisations they profess to be helping.
Perhaps the biggest lesson was that all organisations are a work in progress, with all the complexity and unfinished business that entails. The trick, therefore, is to present something simple: while this may only be a subset of what you do, it may be enough to move you forward. The more complicated it is, the less of a pitch it becomes.
This was the approach followed by both winner and runner up. The “we help connect you with runners” model from A, and the “we can put a form on your site that is better than what is there” from B might not have been the most technically profound or world changing. But they had the most resonance, reflected by panel and audience judging.
It would be a mistake to suggest that any of the presenters were unprepared: clearly a lot of effort had gone into each and every pitch. What was missing in some was whether it passed this basic litmus test. We can all have genius ideas — but if they leave the people you are speaking to scratching their heads, you probably have some work to do.
04-25 – Monetisation
Monetisation
GDPR - are we witnessing the death of one-way monetisation?
I may have spoken too soon about GDPR. Despite the conflicting legal advice and the general level of vagueness around the new legislation, a head of steam has been building up behind the notion of privacy. In significant part, it has been helped by the scandal around Facebook, Cambridge Analytica and so on — despite various authorities railing for years about social media playing fast and loose with our data (hence GDPR, of course), it has taken our august media to raise the level of public awareness alongside a frisson of panic among data controllers.
To be fair, this was difficult to predict but it has had quite an effect: the need for headline-grabbing material really is a two-edged sword. The consequence is that many organisations are treating the looming threat of GDPR non-compliance like a hot stone, to be dropped at the earliest opportunity. I’m sure I won’t be the only person to have received a raft of emails from various commercial and non-commercial sources, saying that if I don’t opt into marketing, I will never again know about the wonderful offers they might put on the table.
They may be overdoing it: as I understand it, organisations are within their rights to keep sending me stuff if I have bought from them before, unless I decline it. But organisations face a dilemma — they can spam me with requests for consent now (thus forcing them to fall on their own swords later, if I don’t respond), or face the uncertainty around what the law actually says. Tricky. So, for example, I’ve had currency card companies asking me whether it was OK to keep sending me currency-related information, and train operators asking me whether I wanted to know about special travel offers.
I have also had Facebook asking me whether it was OK to send targeted marketing, or to recognise my face. Which is all a far cry from the attitude of just a few months ago, certainly from the big boys who saw privacy as a bunch of doors to be pushed, or lines to be crossed (which reminds me, strangely, of training a spaniel). “Ask forgiveness not permission” has been a highly successful business strategy, enabling Facebook et al to grow phenomenally, and deliver a fair amount of innovation. This isn’t the place to knock Facebook — few people appear to be actually choosing to boycott it, which says something.
There’s a deeper point to all this knee-jerk reacting and giving the law the benefit of the doubt: that organisations are moving, nay running away from the idea that they can do whatever they like, with whatever data they like. This breaks with the assumed convention in thinking, that (personal) data is to be harvested, collated, aggregated and mined regardless of where it comes from, or whether it is known to have value. These notions surrounding the monetisation of data are no longer valid: data may still be the new oil, but it isn’t necessarily your new oil, to do what you like with.
What does this mean in practice? First, it forces organisations to say, and therefore to think, in advance about what they require personal data for. This is no bad thing: it’s called strategy or design, depending on what level it is being considered. Indeed, it turns the binoculars around: rather than asking, “Why do we need this data point,” and looking for vague answers, a better starting point is to say, “What are we trying to achieve?” and then working out what data is needed to achieve it.
A second consequence, then, is a changing dialogue with the source of the data — that is, the identifiable person. It’s a requirement of GDPR to say what you will be using the information for. Of course, many organisations will look for loopholes in the regulation, though on the aspect of non-ambiguity it is pretty tight. While this is still to be tested, simply saying “to improve our services to you” may no longer be enough. Even Google — whose model is based on a “we don’t really know you” stance — is coming under the cosh.
Third, though this is wishful thinking on my part, is that we may arrive at a point where individuals appreciate the true value of their data. The net worth of many companies is currently calculated on the basis of how many active users, or customers, they have. So, what if (let’s say) a social media giant paid me for the privilege of accessing my thoughts and needs? If, at the end of every month, I received a cheque for having just been myself? I know, it’s been tried, but perhaps the right model is yet to be found.
Let’s get one thing straight. Marketing in general, and advertising in particular, isn’t going anywhere. The play on personal apathy and ignorance will continue, and as I have said previously, I don’t think our lives will be any more private as a result of GDPR. However, and even though it only currently applies to EU citizens, the new law is catalysing a sea change in how personal data is treated. One can only hope this ushers in a new era, in which data also serves as a backbone for transparency and value exchange between data creators and those who can make money from it. Put simply, if data is going to be monetised, we should all gain.
04-26 – Aura
Aura
AURA band
Based in the UK and Moscow, AURA Devices is launching a fitness band that goes to places others currently don’t, based on what it calls ‘bio-impedance’ technology. The company’s Kickstarter has been pretty successful, nearly doubling its original goal of $40,000. I spoke to co-founder Stas Gorbunov about the wrist-based health monitoring device.
1. Where did the ideas behind AURA come from?
The technology behind AURA was based on a university scientific project around bio-impedance, following which the founders decided to bring it to the wider market. When we started out, we wanted to make bio-impedance more convenient to use than it currently is. So we thought about fitness trackers — we went to the corporate guys, the big IT companies, and insurance companies who were working on smart health: if you are fit, they can reduce premiums, and so on.
We also worked with athletes and fitness enthusiasts, who can track how their body composition changes, and people wanting to lose weight, who need to track how their weight changes during the day. So this is our audience.
2. What does AURA bring to the party?
First, the hardware design. Most of the hardware parts you can find on the market, so you can build bands, but it’s all about having the right design for the device. There are a lot of issues in doing this. The main challenge is to ensure accuracy, as the device is pretty small - it only has two points where it touches the skin, whereas a medical-grade device has eight.
The most exciting and unique feature is hydration level — it’s very difficult to do this, but we want to bring it to your wrist. Having said this, our IP is mostly in the software. The main idea is about interpreting the data, making big data comparisons and so on. For example, we can increase accuracy by adding information about activity type and lifestyle.
3. How does it fit within the fitness and health ecosystems?
The Aura band is just an instrument to get data about you - the goal is to give insights about your health. If you are an enthusiast, you can get the raw data and interpret it for yourself. This is the first of several types of device - we plan more devices in the healthcare field, and we will increase our ecosystem at the same time.
We are integrating with Apple and Google health kits, adding and interpreting that data alongside our own. We also have gamification through ‘duels’ and loyalty programs — you can earn a pair of sneakers through healthy behaviour! We try to bring as much flexibility as we can.
4. AURA talks about insurance relationships — how does this work?
Insurance is a big opportunity for us: we are looking to bring new innovation in terms of health plans. The idea of insurance companies using AURA band data does create potential issues around privacy, but if this is a problem you can turn off this feature and use our band as any other fitness tracker.
There are always going to be questions raised around health data and insurance. For example, insurers are looking at scoring your online data, e.g. from public Strava feeds and so on.
5. What are plans for the future?
We are heading towards mass production right now. We expect to deliver devices to Kickstarter customers in August, and online sales in September/October in time for Christmas. We have also seen some corporate sales.
We may look to license our analysis to other hardware makers. When we started, we could not find the right hardware, so we had to develop our own. If the right hardware comes along, we can recommend it and work with it. Right now, we have what we have.
We are trying to make an ecosystem - we think healthcare, insurance and fitness should work together to bring more personalised services, but they cannot do so without monitoring instruments. Our position is to exist right in the middle.
My take — another crystal in the solution of a nascent market
When I first looked at AURA band, I confess to having thought, “Yet another fitness device?” but it does look to collate and analyse data that other devices cannot, or at least not outside the medical sphere. While the company is right to see its future in the software rather than the hardware, this whole area is subject to rapid commoditisation — the chances of other OEMs looking to tackle the same problems are pretty high, so the company has its work cut out.
What of the ‘issue’ of giving data over to insurers, isn’t this a two-edged sword? The relationship between health monitoring, fitness and insurance will continue: in some cases it may be intrusive or cause greater premiums, but in others, it can help early monitoring or identification of issues, as well as encouraging behaviours that lead to better health. I’m not a great fan of loyalty programmes as they play into consumerism, but that’s probably just me.
Ultimately, there is still room for both innovation, and new players, in this still-nascent space. With companies like Philips reinventing themselves as health data platform providers, the need will continue for organisations to deliver usable and effective data. Market opportunities are broader than we think (consider protection of endangered species, for example) and overall, I am optimistic that the benefits of better monitoring and interpretation will outweigh the downsides.
May 2018
05-18 – Is Enterprise Devops An Oxymoron?
Is Enterprise Devops An Oxymoron?
Is Enterprise DevOps an Oxymoron?
A long, long time ago, in my career days just before being an analyst, I was a software development consultant. I loved doing it — these were the Three Amigo days, full of unified models, cross-functional teams, dynamism and service orientation. Had it not been for the crushing amount of time I was spending travelling (and therefore not seeing my young family), I might still be doing it now.
The job was not without its challenges. Technical complexity and tight timescales combined with the need to get people on board, showing them that, yes, drawing things out before hacking code really could speed things up without jeopardising quality. All in all, however, things went in the right direction, but for one difficulty that was near-fatal for a project I was supporting.
I can remember the meeting like it was yesterday. The shape of the room, the large windows down one side, the whiteboard on the wall. Opposite me and to the right was the operations manager for the firm. On my left were representatives of the development team.
Nobody gets out of bed in the morning thinking, “I know, today, I’m going to go out of my way to upset people.” Yet, here was the operations manager, in an impossible situation. “You want it deployed… when?” he asked, shaking his head. “As we said, by the end of the month,” said one of the developers. “But that’s… impossible. It can’t be done,” he said.
He didn’t mean to be angry, but he was clearly frustrated for a number of reasons. That the expectation was on him to make things easy, even if they were not. That he was therefore being put in a position of being the difficult one. And yes, that he would have to work out how to manage the new application alongside everything else he had to deal with.
Of course, he was frustrated because he didn’t understand why he hadn’t been told about the deployment weeks, if not months before, so he would have had time to prepare. But more than this: ultimately, because the situation seemed so unnecessary. How had the organisation arrived in a place where this conflict was even a thing?
It would be great to say that this is the challenge DevOps set out to resolve, but that isn’t the case. In large part, DevOps is rooted in a world in which bright young things at well-funded startups are building exciting new capabilities on top of highly commoditised, cloud-based platforms: people who haven’t had to suffer the indignities of not one, but thousands of legacy systems vying for attention.
Of course, it was only a matter of time before the notion of frictionless deployment was confirmed as a jolly good idea for brown-field organisations. Which have, as my anecdote illustrates, been tussling with notions of dynamic software development practice for many years, through the waves of eXtreme Programming, Agile development, continuous integration and indeed, the realm of DevOps.
As per a recent conversation, DevOps claims to break down the wall of confusion between two technical tribes, each with their own ways of speaking and acting. And who wouldn’t want to do precisely that? Continuous deployment is a laudable goal, as is everything else building up around the pipeline right now — testing, security, data management, performance management, you name it. All help the notion of a single, seamless flow.
But still, when we look at the enterprise (a.k.a. any large organization that has an accumulated technology investment), we find that deploying DevOps itself is extremely hard, if possible at all. As a good friend, a colleague from my consultancy days and now senior developer at a big bank, commented: “I’m fascinated by any large company more than 10 years old that thinks anything like DevOps will make the slightest difference to their ability to deliver business impact through software-intensive systems.”
It may well be that one day we can all default to massively scalable, managed infrastructure that adjusts itself to the workload at hand, at which point all organisations could be in the same position as startups — though as a senior decision maker said to me, this could be ten years out or further. Having said this, we are also looking at a re-distribution wave, following cloud’s impetus to catalyse centralisation, as illustrated by what’s currently termed the Internet of Things.
This complex and dynamic situation is the starting point for the report I am writing on DevOps in the enterprise. Like the operations manager of old, large organisations are frustrated, as their desire to innovate and push into new areas using software is stymied through repeated disappointment caused by process friction and organisational inertia. Technology vendors may claim to have the answers, but they are all-too-frequently preaching to the converted, as the real challenge — of delivering real, practical, sustainable improvement — remains out of reach.
Over coming weeks, I will be scoping out the development and operational challenges faced by organisations today, and looking at how these can be addressed. The answers may come from a direction we currently label with the term ‘DevOps’, but rest assured, even this is no more capable of being a magic bullet than the technology solutions that purport to deliver on the dream.
So, yes, watch this space. Also, if you have any insights you want to share, let me know. Perhaps, with some thought about what actually works across the variety of scenarios that exist, we can distil out ways for even the largest, most hampered organisations to improve on what they are doing right now, or to move beyond successful experiments into enterprise-scale, yet efficient, software delivery.
June 2018
06-07 – The Missing Element Of Gdpr Reciprocity
The Missing Element Of Gdpr Reciprocity
The missing element of GDPR: Reciprocity
GDPR day has come and gone, and the world is still turning, just about. Some remarked that it was like the Y2K day we never had; whereas the latter’s impact was a somewhat damp squib, the former has caused more of a kerfuffle: however much the authorities might say, “It’s not about you,” it has turned out that it is about just about everyone in a job.
To put my cards on the table, I like the thinking behind GDPR. The notion that your data was something that could be harvested, processed, bought and sold, without you having a say in the matter, was imbalanced to say the least. Data monetisers have been good at following the letter of the law whilst ignoring its spirit, which is why its newly expressed spirit — of non-ambiguous clarity and agreement — is so powerful.
Meanwhile, I don’t really have a problem with the principle of advertising. A cocktail menu in a bar could be seen as context-driven, targeted marketing, and rightly so as the chances are the people in the bar are going to be on the look-out for a cocktail. The old adage of 50% of advertising being wasted (but nobody knows which 50%) helps no-one so, sure, let’s work together on improving its accuracy.
The challenge, however, comes from the nature of our regulatory processes. GDPR has been created over a long period of time, by a set of international committees with all of our best interests at heart. The resulting process is not only slow but also, inevitably, a compromise based on past use of technology. Note that even as the Cambridge Analytica scandal still looms, Facebook’s position remains that it acted within the law.
Even now, our beloved corporations are looking at how they can work within the law and yet continue to follow the prevailing mantra of the day, which is how to monetise data. This notion has taken a bit of a hit, largely because businesses now need to be much clearer about what they are doing with it. “We will be selling your information” doesn’t quite have the same innocuous ring as “We share data with partners.”
To achieve this, most attention is on what GDPR doesn’t cover, notably around personally identifiable information (PII). In layperson’s terms, if I cannot tell who the specific person is that I am marketing to, then I am in the clear. I might still know that the ‘target’ is a left-leaning white male, aged 45-55, living in the UK, with a propensity for jazz, an iPhone 6 and a short political fuse, and all manner of other details. But nope, no name and email address, no pack-drill.
Or indeed, I might be able to exchange obfuscated details about a person with another provider (such as Facebook again), which happen to match similarly obfuscated details — a mechanism known as hashing. As long as I am not exchanging PII, again, I am not in breach of GDPR. Which is all well and good apart from the fact that it just shows how advertisers don’t need to know who I am in order to personalise their promotions to me specifically.
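To make the mechanism concrete, here is a minimal sketch of hashed matching in Python, assuming both parties normalise an email address and take its SHA-256 digest before comparing; the normalisation rules and hash choice are illustrative rather than any specific provider’s scheme.

```python
import hashlib

def hashed_identifier(email: str) -> str:
    """Normalise an email address and return its SHA-256 digest."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Each party hashes its own list independently; raw addresses never change hands.
advertiser_list = {hashed_identifier(e) for e in ["Jon@Example.com", "someone@example.org"]}
platform_list = {hashed_identifier(e) for e in ["jon@example.com", "another@example.net"]}

# The overlap still singles out the same individuals, without exchanging PII in the clear.
matches = advertiser_list & platform_list
print(f"{len(matches)} matched profile(s)")
```

The digest is enough to recognise the same person across two datasets, which is precisely why “we never share your email address” offers less comfort than it sounds.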
As I say, I don’t really have a problem with advertising done right (I doubt many people do): indeed, the day on which sloppy retargeting can be consigned to the past (offering travel insurance once one has returned home, for example) cannot come too soon. However, I do have a concern: the regulation we are all finding so onerous is not actually achieving one of its central goals.
What can be done about this? I think the answer lies in renewing the contractual relationship between supplier and consumer, not in terms of non-ambiguity over corporate use of data, but to recognise the role of consumer as a data supplier. Essentially, if you want to market to me, then you can pay for it — and if you do, I’m prepared to help you focus on what I actually want.
We are already seeing these conversations start to emerge. Consider the recent story about a man selling his Facebook data on eBay; meanwhile at a recent startup event I attended, an organisation was asked about how a customer could choose to reveal certain aspects of their lifestyle, to achieve lower insurance premiums.
And let’s not forget AI. I’d personally love to be represented by a bot that could assess my data privately, compare it to what was available publicly, then perhaps do some outreach on my behalf. It could remind me that I need travel insurance, find the best deal and print off a contract, without me having to fall back on the goodwill of the corporate masses.
What all of this needs is the idea that individuals are not simply hapless pawns to be protected (from where comes the whole notion of privacy), but active participants in an increasingly algorithmic game. Sure, we need legislation against the hucksters and tricksters, and enforcement of the balance between provider and consumer, which is still tipped strongly towards “network economy” companies.
But without a recognition that individuals are data creators, whose interests extend beyond simple privacy rights, regulation will only become more onerous for all sides, without necessarily delivering the benefits it was set up to achieve.
Cocktail, anyone? Mine’s a John Collins.
Follow @jonno on Twitter.
July 2018
07-20 – Broadcom Ca
Broadcom Ca
10 reasons why Broadcom bought CA
I wouldn’t normally comment on acquisition stories, but in this case, well. Back in the day, CA was the master of the acquisition, as it first knocked out several enterprise management competitors and then built out its portfolio to disguise its mainframe-centric business models (and, it turned out, its dodgy business practices).
And now, the company is being bought by Broadcom, the “diversified global semiconductor leader” (a.k.a. chip manufacturer), this hot on the heels of its attempted, and blocked, acquisition of the more eligible and compatible Qualcomm. Many are asking why, and not in a good way: ten billion dollars has been wiped off Broadcom’s market value, which is over half of the acquisition cost.
Few are seeing this as a good idea, though some are seeing an upside. Given that nobody has a crystal ball, I thought it might be worth summarising some of the reasons why a chip company and an enterprise software company might be the perfect match.
1. There’s got to be something in that portfolio. CA has 1,500 patents across a portfolio of 200 products (irony alert: CA’s directory is A-Z, when all start with C — and that’s just the ones they are listing). Surely, surely, there’s got to be something directly related to Broadcom’s business?
2. CA has lots of smart people. OK, not convinced? The future, as we keep being told, lies in software, not hardware — areas such as machine learning, analytics and so on. Within CA’s products are some pretty smart capabilities, along with the people who built them, and who can turn their skills to Broadcom’s needs.
3. It’s all about the mainframe money. Bullet four on the press release mentions ‘recurring revenue’ — that is, for all CA’s pretence at doing otherwise, much of its business still comes from cash cow mainframe software.
4. It’s all about Broadcom having too much money. Conversely, you know how it feels, you were about to start a relationship then you find you can’t, you’re knocking around with a hundred billion in your pocket and feeling low, then who should walk round the corner but…
5. It means someone else can’t acquire CA. Stranger things have happened — like how EMC’s acquisition of Documentum was rumoured to have taken place just to stop IBM from getting it.
6. Broadcom genuinely wants to diversify. This is plausible, if a little scary. I’m not saying hardware companies never ‘get’ software, nor that chip manufacturers have only had limited success outside of their core business, nor that even software companies don’t tend to understand the enterprise, but, OK, yes, I’m saying all those things.
7. It really is a great match and everyone is stupid. This is a perfectly reasonable suggestion. After all, Broadcom will have done its technological and accounting due diligence, and found clear areas of alignment. Won’t it? And we’ve never seen that going completely wrong before, no sir.
8. There’s something deeply sneaky going on. Hang on, wait. Non-US company looks to buy US company, the deal gets blocked. So then the same non-US company goes to buy another US company in a completely different sphere. It surely wouldn’t do that just to gain a longer term foothold onto the territory, would it? No, too far fetched.
9. Well, there was this bottle of wine. Or a bet. You know the story, it was a chance meeting, then a drink, one thing led to another and then, well, the next thing they knew they were grinning at each other and signing something… or indeed, they were in the locker room, being all alpha, and one said, “So, you don’t believe it would work? Watch this!” Or something.
10. The world is about to change in a completely unexpected way. Little do we all know, but the biggest enterprises are on the brink of some fundamental, singularity-scale transformation, where the entire software stack collapses down into a self-orchestrating, massive distributed micro-kernel architecture that runs directly on the chip.
At which point, Broadcom wins the Internet, all technological problems in the world are solved, and we can all go home.
August 2018
08-02 – Five Questions For Farm 491
Five Questions For Farm 491
Food security is a growing concern for many, as we live longer and as the population grows. It goes without saying that technology can help; more interesting is how agriculture has been a slow adopter of some of the more leading edge (dare I say digital) technologies. I was lucky enough to attend the launch of the Alliston Centre at the UK’s Royal Agricultural University: as well as being host to general regional development as part of the Gloucestershire Growth Hub network, the centre operates as an incubation centre for AgriTech businesses, under the umbrella of the RAU’s Farm491 initiative.
So, will Farm491 deliver the kinds of innovations we need in both farming practices and underlying technologies? I spoke to Ali Hadavizadeh, Programme Manager for Farm491 and startup business mentor, to find out more.
1. What problem is Farm 491 set up to solve?
The challenge of global population increase is becoming more serious. If human numbers increase to 10 billion by 2050, this means that globally, we need to produce 60% more food, which needs to be nutritious, high quality and affordable. At the same time, our environmental footprint will increase exponentially, so we need to produce more food with less available production land.
Historically, we have been complacent in our attitudes to food production. In the UK, it is Brexit that is waking us up to reality. We’ve come to realise that we’ve become so dependent on imports, we’ve almost forgotten how to grow food efficiently. We have to learn to be more self-sufficient.
2. So, where are the areas of focus?
Primarily, AgriTech is the ability to think innovatively, so we get more product with the same or less impact. One starting point is decoupling food production from the land. Farming has traditionally relied on favourable weather, so if you can prevent heat loss and provide light artificially, you’ve cracked it. With vertical farming and LEDs for example, you can give plants the correct wavelength to maximise photosynthesis.
We also want to get value from waste. We’ve spent a lot of energy producing it in the first place, so why not get maximum value out of it? This is where technology is coming in; this is a huge area of untapped potential. Consider Multibox http://multibox.farm for example — rather than ploughing vegetable waste into land or sending it to landfill, why not feed it to insect larvae to create fish food, so that the fish can then enter the human food chain?
Many AgriTech startups are of this mindset, to take the challenge on. Young minds lap it up — identifying a problem, writing software, seeking investment and going for it. We try to nurture that mindset, providing a safety net so people can come up with a solution and see if it is viable. We want to put a business case around a good idea.
3. What kinds of AgriTech solutions do you see?
I like to ask, is your solution like an aspirin, or a chocolate? If an aspirin, you need to find customers with a headache, whereas chocolate is more of a nice-to-have. Consider The Land App https://www.thelandapp.com , which helps map out how agricultural land is used — that one is very much aspirin rather than chocolate! The Land App has just secured its first round of investment and has partnered with Ordnance Survey and other key influencers.
In some cases, it is a case of direct innovation in a specific area. For example, look at MagGrow https://www.maggrow.com , which is an engineering solution targeted at improving spraying coverage based on optimising droplet size. There is nothing new about spraying, but in conventional spraying you can lose up to 70%, as droplets are lost in drift and miss their targets. If you can get active ingredients to hit 100%, you then have a win-win scenario, as you use less pesticide, less water and get better results.
4. How does Farm 491 help AgriTech startups?
Farm 491 is vital — it is the only agri-specific incubator in the country. We have an inspiring AgriTech innovation programme www.farm491.com/iai - we need to support 73 new AgriTech businesses. If you’ve got a good idea, let’s test it; we then try to help founders gain clarity about long-term business survival and success. It’s like building a house — if you don’t do it properly, then it falls apart. Timing is also critical on introducing products to market. Too late and you miss the boat, but too early and people think you are bonkers!
One of the tools we use is to measure and advise across eight axes — product, market, channels, competition, financials, team, legal and IP. When they arrive, startups are often enthusiastic about product, but less knowledgeable on markets and channels. At the point when people say “I have no competition,” I get angry with them. That can spook them! Financials also often score very poorly, as do criteria around team.
When they come to us, many startups look like a pear on this eight-point spider diagram - our job is to create a balanced startup which scores high on all eight criteria. So, when they send a pitch deck to an investor, the proposition is de-risked as much as possible. Ultimately, perhaps the most important criterion is IP. Startups need to do something, even a trademark, to say “stay off our patch.”
5. What kinds of challenge do you see to incubating AgriTech startups?
At the moment, AgriTech is very new, it hasn’t been considered as a sexy technology area in terms of exploitation. On the upside, innovation can also come across from other sectors, as many verticals are quite advanced. So automotive, data, robotics — each offers massive potential to agriculture.
One specific challenge is to get more people to join the sector as innovators. We’re now partnered with AgriBriefing media (who produce Farmers Guardian) - hosting Agri Innovation Den https://www.aginnovationden.com . On this competition project we have partnered with BASF. Eight finalists will get business support and media coverage, plus a year of flexible membership with Farm 491 and possible investment from the enterprise arm of BASF. These are incentives to look at the agriculture sector with new eyes.
To drive things forward, we have also helped form the Agri-South West network. Our specific role is to support good entrepreneurs, but this is for the entire supply chain. As a result, we will be able to ensure our startups get all they need after they leave, so they don’t fall off the end of a cliff.
We are working with a fantastic group of partners, including universities, LEPs, councils and the private sector. The South West provides 37% of the dairy, 32% of the beef and 15% of the poultry for the UK, but we didn’t have a common voice: the network aims to provide one, and has recently applied for a fund called ‘Strength in Places’ to support the agri-supply chain in the south west. We are at the beginning of that journey.
My take: a crisis is a problem with no time left to solve it
Despite many challenges, today’s western populations are very lucky to live in a time of relative plenty, as well as living through rapid technological change. The latter has driven huge market changes: it is highly unlikely that outsourcing and offshoring would exist on the scale we have seen, without the availability of global connectivity. Not only has this led to an ongoing worldwide rebalancing of resources, but it has also led to a level of complacency around how, and where, our food is produced. We can debate the whys and wherefores of this; meanwhile, we can recognise that it is unlikely to last forever.
This creates a problem, not for now, but for the future. We not only need to be able to know how to produce food; we also have to become very good at doing it, as we will have to do so with increasingly constrained resources. In life there are good problems, and bad problems, and this is currently a good problem: rather than waiting for the problem to become a crisis, we need only to take the initiative, to grasp the clear opportunity that presents itself. Not only can we work towards more sustainable farming, but also, there is a great deal of efficiency to be gained, and therefore money to be saved in the short term.
08-17 – Making Tax Digital
Making Tax Digital
The UK tax authorities are Making Tax Digital, whether businesses like it or not
In a changing world, it is good to know that some certainties remain, such as the time of year when companies log their affairs and give up to Caesar what is due to Caesar, that is, pay their tax. In the UK at least, even this constant is under fire. I’m speaking tongue in cheek of course, but the government’s Making Tax Digital (MTD) initiative is proving troubling to more than a few businesses.
On the upside, tax accounting software vendors seem very well furnished with information. There’s a quick start guide which shows JSON and XML formats for information exchange, RESTful API calls and so on. There’s also a developer hub to test remote access to APIs. Less available is information to businesses and accountants, who are largely in the dark beyond a central assumption that everyone is using vendor packages now, aren’t they?
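To give a sense of what that developer-facing material looks like, below is a minimal sketch of a VAT return being submitted as JSON over a REST call. The host, token handling and field names are illustrative assumptions based on the familiar nine-box VAT return, not a definitive rendering of HMRC’s API; the developer hub remains the authoritative source.

```python
import requests

# Illustrative sketch only: the live MTD VAT API is documented on HMRC's developer
# hub and requires OAuth 2.0 authorisation. Host, token and values below are placeholders.
VRN = "123456789"                                 # hypothetical VAT registration number
BASE = "https://api.example-tax-service.gov.uk"   # placeholder host, not the live endpoint
TOKEN = "user-restricted-oauth-token"             # would be obtained via the OAuth flow

vat_return = {
    "periodKey": "18A1",                  # identifies the VAT period being filed
    "vatDueSales": 1250.00,               # box 1: VAT due on sales
    "vatDueAcquisitions": 0.00,           # box 2: VAT due on acquisitions
    "totalVatDue": 1250.00,               # box 3
    "vatReclaimedCurrPeriod": 300.00,     # box 4
    "netVatDue": 950.00,                  # box 5
    "totalValueSalesExVAT": 6250,         # box 6
    "totalValuePurchasesExVAT": 1500,     # box 7
    "totalValueGoodsSuppliedExVAT": 0,    # box 8
    "totalAcquisitionsExVAT": 0,          # box 9
    "finalised": True,                    # declaration that the figures are complete
}

response = requests.post(
    f"{BASE}/organisations/vat/{VRN}/returns",
    json=vat_return,
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.hmrc.1.0+json"},
)
response.raise_for_status()
print(response.json())
```

Whether the figures come from an accounting package or a spreadsheet, the submission ultimately boils down to assembling and posting a payload of roughly this shape.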
Simply put, if you’re already using a package such as QuickBooks or Xero, you should be OK (according to the list of those whose software will enable auto-uploading to Her Majesty’s Revenue and Customs, HMRC). But many organisations are not, preferring to keep their books as, erm, books or indeed, spreadsheets. So, what if you don’t want to deliver your books digitally? Well, you have to.
As a techie, I find myself strangely nonplussed about this: while I might not have a problem with using a computer, many others whose turnover passes the £85K VAT threshold (and who have run businesses quite happily without one) are now faced with three new potential costs: the software itself, the training to use it, and the conversion from one package, or spreadsheet, to a certified package.
Software vendor Liquid Accounts will “provide a single company, single user version of Liquid VAT Filer free of charge to any VAT registered business” — this works with MTD and, according to the article, with spreadsheets. Regarding the latter, HMRC mentions ‘bridging software’ for spreadsheets here, confirmed here. A couple of solutions are now available, as per the article, including the TaxCalc spreadsheet plugin.
But it raises a question: why didn’t the Revenue simply define a file format standard for accountants to use, which all packages could write to and which anyone could upload? Perhaps there is no place for such primitive mechanisms, not in the API economy. For UK businesses meanwhile, waiting for clarity is becoming an increasingly risky option: we have nine months to go before the end of the tax year in April 2019, by which point MTD will be the default approach.
We shouldn’t be surprised that the British Chambers of Commerce are requesting that the roll-out of MTD be delayed, as this press release notes. On the point about British businesses having enough on their plates right now, I have to concur and can only hope our bureaucratic betters see sense before April next year.
September 2018
09-13 – Could Devops Exist Without Cloud Models?
Could Devops Exist Without Cloud Models?
Could DevOps exist without cloud models?
The GigaOm DevOps market landscape report is nearing completion, distilling conversations and briefings into a mere 8,500-word narrative. Yes, it’s big, but it could have been bigger for the simple reason that DevOps touches, and is therefore helped and hindered by, every aspect of IT. Security and governance, service level delivery, customer experience, API and data management, deployment and orchestration, legacy migration and integration, they all impact DevOps success, or cause what we have termed DevOps Friction.
While the report is about DevOps, and not all these other things (the line had to be drawn somewhere), one aspect rings out like a bell. I go back to an early conversation I had with ex-analyst Michael Coté, who brings a hard-earned, yet homespun wisdom to technology conversations. I paraphrase but Coté’s point, pretty much, was, “The kids of today, they don’t know any other way of building things than using cloud-based architectures.”
With that, he lifted his rifle and shot a can off the fence. Okay, no he didn’t, he talked about the foolishness of caring about operating system versions rather than just using what’s offered by the cloud provider. It took me back to a software audit I undertook, many years ago, when the ultra-modern JBoss interface layer built onto a Progress back-end had been customised by a freelancer who promptly left, leaving the organisation with a poorly documented, legacy open source deployment… but I digress.
Many, if not all startups work on the basis of using what they are given, innovating on top rather than worrying about what’s under the hood (or bonnet, as we say over here. I know, right?). They also then adopt some form of DevOps practice, as the faster they can add new features, the more quickly their organisation will progress: the notion of the constant Beta has been replaced by measuring success in terms of delivery cycles.
Bluntly, the startup innovation approach wouldn’t work without cloud. Providers such as AWS know this; they also know their job is to deliver as many new capabilities as they can, feeding the sausage machine of innovation, however much this complicates things for people trying to understand what is going on. AWS is more SkyMall than Sears, its own business model also based on dynamism of new feature delivery.
This truth also applies to the toolsets around DevOps, which are geared up to help deploy to cloud-based resources, orchestrate services, deploy containers and spin up virtual machines. If a single cloud is your target, the DevOps pipeline is a sight simpler than if you are deploying to an in-house, hybrid and/or multi-cloud environment. Which, of course, reflects the vast majority of enterprises today.
The point, and the central notion behind the report, is that enterprises don’t have it easy: DevOps needs to roll with the punches, rather than sneering from the sidelines about how much easier everything could be. We are where we are: enterprises are complex, wrapped up in historical structures, governance and legacy, and need to be approached accordingly. They might want to adopt cloud wholesale, and may indeed do so at some point in the future, but getting there will be an evolution, not an overnight, flick the switch transformation.
DevOps Friction comes from this reality, and many providers are looking to do something about it. As per a recent conversation with my colleague Christina von Seherr-Thoss, such developments as VMware running on AWS, or indeed Kubernetes-VMware integration, help close the gap between the now-embedded models of the data centre, and the capabilities of the cloud. This isn’t just about making things work together: it’s also transferring some of the weight of processing from internal, to external models.
And, by doing so, it’s helping organisations let go of the stuff that doesn’t matter. We’ve long talked about data gravity, in that most data now sits outside the organisation, but an equally important notion is that processing gravity hasn’t followed, making enterprise DevOps harder as a result. I personally don’t care where things run: if you can run your own cloud, go for it. More important is whether you are locked into a mindset where you tinker with infrastructure, or whether you use what you are given and innovate on top.
Right now, enterprise organisations are looking to adopt DevOps as part of a bigger push, to become more innovative and adapt faster to evolving customer needs — that is, digital transformation. Enterprises are always going to struggle with the weight of complexity and size: as startups grow up, they hit the same challenges. But traditional organisations can do themselves a favour and shift to a model that breaks dependency with servers, storage and so on.
While we don’t deep-dive on infrastructure and cloud advances in our DevOps report, it is fundamental and inevitable that organisations which see technology infrastructure as an externally provided (and relatively fixed) platform will be able to innovate faster than those who see it as a primary focus. Breaking the link with infrastructure, minimising dependencies, using the operating systems you are given and building on top, could be the most important thing your organisation does.
October 2018
10-17 – Github Actions
Github Actions
The game changer that is GitHub Actions
“GitHub? That’s a code repository, right?” said a friend, when I mentioned I was in San Francisco. GitHub Universe, the company’s annual conference, is small but perfectly formed — 1,500 delegates fills a hall but doesn’t overwhelm. And yes, developers, engineers and managers are here because they are pulling files from, and pushing to, one of the largest stores of programming code on the planet.
GitHub representatives would likely dispute the “just a code repo” handle, nonetheless. I would imagine they would point at the collaboration mechanisms and team management features on the one hand, and the 30-plus million developers on the other. “It’s an ecosystem,” they might say. I haven’t asked, because the past two days’ announcements may have made the question somewhat moot. Or one announcement in particular: GitHub Actions.
In a nutshell, GitHub Actions allow you to do something based on a triggering event: they can be strung together to create (say) a set of tests when code is committed to the repository, or to deploy to a target environment. The “doing something” bit runs in a container on GitHub’s servers; and a special command (whose name escapes me…wait: RepositoryDispatch) means external events can trigger actions.
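For a rough feel of the external-trigger part, here is a minimal sketch that fires a repository dispatch event via the GitHub REST API, so that any Action configured to listen for it can run. The token, repository and event type are placeholders, and the exact headers and permissions may differ while Actions remains in Beta, so treat this as an outline rather than gospel.

```python
import requests

# Illustrative sketch: ask GitHub to raise a repository dispatch event, which an
# Action in the target repository can react to. Token, repo and event type are
# placeholders; check GitHub's API documentation for the current requirements.
OWNER, REPO = "example-org", "example-repo"
TOKEN = "personal-access-token-with-repo-scope"

response = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/dispatches",
    json={"event_type": "nightly-build"},  # an arbitrary label the workflow filters on
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
response.raise_for_status()  # GitHub acknowledges an accepted event with an empty 204 response
```

The interesting part is less the call itself than what it implies: anything that can make an HTTP request can now kick off a workflow on GitHub’s servers.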
That’s kind of it, so what makes GitHub Actions so special? Or, to put it another way, what is causing the sense of unadulterated glee, across both the execs I have spoken to and those presenting from the main stage? “I can feel the hairs on the back of my neck as I talk about this,” I was told, not in some faux ‘super-excited’ way but with genuine delight.
The answer lies in several, converging factors. First, as tools mature, they frequently add rules-based capabilities — we saw it with enterprise management software two decades ago, and indeed ERP and CRM before that. Done right, event-driven automation is always a feature to be welcomed, increasing efficiency, productivity, enforcing policy, governance and all.
Second is: what happens when you switch on such a feature for a user base as large, and as savvy, as the GitHub community? Automation is a common element of application lifecycle management tooling, and multiple vendors exist to deliver on this goal. But few if any have the ability to tell millions of open source developers, “let’s see what you got.”
Which brings us to a third point: right now, we are in one of those fan-out technology waves. In my report on DevOps, I name-checked 110 vendors; I left out many more. Choosing a best-of-breed set of tools for a pipeline, or indeed, deciding the pipeline, involves a complex, uncertain and fraught set of decisions. And many enterprises will have built their own customisations on top.
As I wrote in the report’s introduction, “In the future, it is likely that a common set of practices and standards will emerge from DevOps; that the market landscape for tools will consolidate and simplify; and that infrastructure platforms will become increasingly automated.” The market desperately needs standardisation and simplification: every day, organisations reinvent and automate practices which, frankly, is not a good use of their time.
For there to be a better way requires a forum — an ecosystem, if you will — within which practices can be created, shared and enhanced. While there may be a thousand ways to deploy a Ruby application, most organisations could probably make do with one or two, based on constraints which will be similar for their industry peers. With a clear day, a following wind and the right level of support, GitHub Actions could provide the platform for this activity.
Will this put other continuous automation and orchestration vendors out of business? Unlikely, as there’s always more to be done (and no organisation is going to switch off existing automations overnight). However it could create a common language for others to adopt, catalysing standardisation still further; it also creates opportunities for broader tooling, for example helping select a workflow based on specific needs, or bringing in plugins for common actions.
It’s also notable that GitHub Actions is only being released as Beta at this point (you can sign up here). Questions remain over how to authorise and authenticate access, what criteria GitHub will set over “acceptable” Action workloads, and indeed, how Actions will work within a GitHub enterprise installation. Cliché it may be, but the capability creates as many questions as it does answers — which is perhaps just as well.
Above all perhaps, the opportunity for GitHub Actions is defined by its lack of definition. Methodologists could set out workflows based on what they thought might be appropriate; but the bigger opportunity is to let the ecosystem decide what is going to be most useful, by creating Actions and seeing which are adopted. And yes, these will go way beyond the traditional dev-to-ops lifecycle.
One thing is for sure: the capability very much changes the raison d’être of its founding organisation. “Just a code repository” they may have been, in the eyes of some; but a collaborative hub for best practice is what the organisation will undoubtedly become, with the adoption of GitHub Actions. No wonder the irrepressible sense of glee.
December 2018
12-18 – Aws And The Future
Aws And The Future
As a first take, the 2018 AWS Re:Invent conference seemed slicker, even if it was bigger, than previous years. While conference sessions were far too distributed across multiple hotels to feed a coherent view, the big-barn expo exuded a feeling of knowing what it was about. Even the smallest vendors had stands which went beyond the lowest-common-denominator quick-assembly cube, suggesting either (a) the organisers had put more thought into it or (b) the vendors were better-established and (therefore) had more money. All in all, it felt less of a bun fight — more space between stands, less urgency to get from one place to another.
It would be too much of an extrapolation to suggest this reflects the state of the cloud marketplace in general, and AWS in particular; however, it does serve as a useful backdrop upon which to paint a picture of an industry maturing beyond its “look at us, over here, we are different!” roots. From the sessions held for analysts, a couple of notably aligned moments stand out: the first involving use of the H-word, met with a smattering of laughter as an AWS representative spoke about embracing (my word) hybrid architectures and deploying (in the form of AWS Snowball Edge) capabilities inside the enterprise boundary.
The second, also met with more of an accepting shrug than anything, was a presentation by Keith Jarrett, AWS’ worldwide lead on cloud economics, which accepted, nay endorsed the fact that AWS’ cloud models wouldn’t always be the cheapest option for everything. Any thoughts of “ah-HA! Got you!” were almost immediately overtaken by, “Well, of course, how could it be?” — unless someone has also invented the perpetual motion machine or some other magical device. At the risk of repeating the obvious, there is no silver bullet/single solution/one-model-to-rule-them-all in technology, never has been and never will be. Keith went on to present a series of KPIs around value creation, rather than pure cost.
So, with maturity comes the circumspection of understanding one’s place in the world, what one brings to the party, and therefore a level of differentiation based on competence, not capability: in a nutshell, it’s not about “use cloud” but, “if you want to use cloud-based services, work with us as we do it better than anyone else.” We saw this across the AWS portfolio, for example through the repeated theme of ‘frameworks’ — AWS has one for AI (as presented by Swami Sivasubramanian, VP, Amazon Machine Learning), one for IoT (thank you Dirk Didascalou, VP, AWS IoT), one for more general cloud adoption (hat-tip Dave McCann, VP, AWS Marketplace and Todd Weatherby, VP, AWS Professional Services).
It all makes sense — if the platform is (increasingly) a commodity, the differentiator becomes how it is used. We see this over and again: now that Kubernetes is (becoming) the de facto target for containerised applications for example, to say “we do Kubernetes” is no longer interesting. Nor for that matter are the frameworks, from a business perspective — illustrated by the current trend away from DevOps being an end in itself and towards governance models and tooling such as Value Stream Management. Most important are whether organisations can innovate and deliver faster, harness opportunities, deliver new customer experiences and generate business value, more effectively with one provider or another.
This is all good news for the enterprise, as the terminology and philosophical underpinnings of cloud computing increasingly align with the more traditional thinking pervading our largest organisations. Across the past ten years, it has been enough to ‘do’ cloud, or ‘do’ open source in order to create competitive advantage: indeed, upstart organisations (the usual suspects of AirBnB, Uber, indeed Amazon et al) have built their businesses on the basis of rapid time-to-value. Simply put, older companies, with all their meetings, legacy systems and indeed thinking, have not been able to deliver as quickly as businesses without all that baggage.
Indeed, they still can’t. But them old companies are still there, for a number of reasons. First that the new breed have largely tackled the customer-facing elements of business, but there’s only so much of that to go around. It is completely unsurprising that Amazon is opening (albeit automated) shops, and that Uber (together with Toyota) is investing in (driverless) car fleets: someone has to do the infrastructure stuff. Meanwhile, not all customer-oriented business can be done on an ad-hoc basis. Take Healthcare for example, which (thank goodness) has not thrown itself gaily into adopting the heck-why-not-throw-away-the-old-rules-and-see-what-happens business models of the platform economy.
And indeed, while big old businesses are still big and old, and therefore unable to act quite so responsively as the youngsters, three things are happening: not only are they getting better at that whole innovation thing — or indeed, learning how to align new models of innovation with their own approaches, but also, the younger companies are having to learn that they can’t get away with avoiding complexity for ever. And in parallel, as we have already seen, technology providers such as AWS are maturing to fit the evolving needs and capabilities of both sides. It’s not just the big players: at Re:Invent I was also able to talk to both organisations in Amazon’s partner ecosystem and their customers, notably a conversation with the maker of that quite popular game Fortnite, about both AWS and MongoDB.
Where does this leave us? First, that AWS is establishing itself not as a cloud player but as a technology provider, and rightly so, moving away from a false debate based on cost and towards one based on value. Second, AWS recognises that it cannot go it alone, nor does it need to (a historical echo of Microsoft’s attempts to play the better-together card, which worked to an extent but could never be the whole answer). Third, that this reflects a more general maturing of the industry’s relationship with business, as attention moves beyond the platform and towards how to get the most out of it in what is, frankly, a highly complex and constantly evolving world.
Whatever happens, complexity of all types will continue to constrain our ability to maximise the value we can get from technology. While technological complexity may appear to be a Gordian knot, it is more a Hydra — cut off one head and many more grow back. Understanding this, and trying to tame and align it as a platform rather than looking to restrict it and present one model above all, holds the key to unlocking future innovation for businesses of all sizes.
2019
Posts from 2019.
January 2019
01-01 – GitHub Actions: The Best Practice Game Changer
GitHub Actions: The Best Practice Game Changer
“GitHub? That’s a code repository, right?” said a friend, when I mentioned I was in San Francisco. GitHub Universe, the company’s annual conference, is small but perfectly formed — 1,500 delegates fills a hall but doesn’t overwhelm. And yes, developers, engineers and managers are here because they are pulling files from, and pushing to, one of the largest stores of programming code on the planet.
GitHub representatives would likely dispute the “just a code repo” handle, nonetheless. I would imagine they would point at the collaboration mechanisms and team management features on the one hand, and the 30-plus million developers on the other. “It’s an ecosystem,” they might say. I haven’t asked, because the past two days’ announcements may have made the question somewhat moot. Or one announcement in particular: GitHub Actions.
In a nutshell, GitHub Actions allow you to do something based on a triggering event: they can be strung together to create (say) a set of tests when code is committed to the repository, or to deploy to a target environment. The “doing something” bit runs in a container on GitHub’s servers; and a special command (whose name escapes me…wait: RepositoryDispatch) means external events can trigger actions.
That’s kind of it, so what makes GitHub Actions so special? Or, to put it another way, what is causing the sense of unadulterated glee, across both the execs I have spoken to and those presenting from the main stage? “I can feel the hairs on the back of my neck as I talk about this,” I was told, not in some faux ‘super-excited’ way but with genuine delight.
The answer lies in several, converging factors. First, as tools mature, they frequently add rules-based capabilities — we saw it with enterprise management software two decades ago, and indeed ERP and CRM before that. Done right, event-driven automation is always a feature to be welcomed, increasing efficiency, productivity, enforcing policy, governance and all.
Second is: what happens when you switch on such a feature for a user base as large, and as savvy, as the GitHub community? Automation is a common element of application lifecycle management tooling, and multiple vendors exist to deliver on this goal. But few if any have the ability to tell millions of open source developers, “let’s see what you got.”
Which brings us to a third point: right now, we are in one of those fan-out technology waves. In my report on DevOps, I name-checked 110 vendors; I left out many more. Choosing a best-of-breed set of tools for a pipeline, or indeed deciding the pipeline itself, involves a complex, uncertain and fraught set of decisions. And many enterprises will have built their own customisations on top.
As I wrote in the report’s introduction, “In the future, it is likely that a common set of practices and standards will emerge from DevOps; that the market landscape for tools will consolidate and simplify; and that infrastructure platforms will become increasingly automated.” The market desperately needs standardisation and simplification: every day, organisations reinvent and automate the same practices, which, frankly, is not a good use of their time.
For there to be a better way requires a forum — an ecosystem, if you will — within which practices can be created, shared and enhanced. While there may be a thousand ways to deploy a Ruby application, most organisations could probably make do with one or two, based on constraints which will be similar for their industry peers. With a clear day, a following wind and the right level of support, GitHub Actions could provide the platform for this activity.
Will this put other continuous automation and orchestration vendors out of business? Unlikely, as there’s always more to be done (and no organisation is going to switch off existing automations overnight). However it could create a common language for others to adopt, catalysing standardisation still further; it also creates opportunities for broader tooling, for example helping select a workflow based on specific needs, or bringing in plugins for common actions.
It’s also notable that GitHub Actions is only being released as a beta at this point (you can sign up here). Questions remain over how to authorise and authenticate access, what criteria GitHub will set over “acceptable” Action workloads, and indeed, how Actions will work within a GitHub enterprise installation. Cliché it may be, but the capability creates as many questions as it does answers — which is perhaps just as well.
Above all perhaps, the opportunity for GitHub Actions is defined by its lack of definition. Methodologists could set out workflows based on what they thought might be appropriate; but the bigger opportunity is to let the ecosystem decide what is going to be most useful, by creating Actions and seeing which are adopted. And yes, these will go way beyond the traditional dev-to-ops lifecycle.
One thing is for sure: the capability very much changes the raison d’être of the organisation behind it. “Just a code repository” it may have been, in the eyes of some; but with the adoption of GitHub Actions, a collaborative hub for best practice is what the organisation will undoubtedly become. No wonder the sense of suppressed glee.
February 2019
02-25 – GigaOm Infographic: Connectivity and Customer Experience
GigaOm Infographic: Connectivity and Customer Experience
How can businesses use connectivity to drive improved CX for their customers? GigaOm asked 350+ strategic enterprise decision-makers from North America and Europe to share their experiences. Check out the infographic below and then read the full Research Byte here.

March 2019
03-04 – Five questions for… Keri Gilder, Chief Commercial Officer, Colt Technology Services. Can Connectivity be linked to Customer Experience?
Five questions for… Keri Gilder, Chief Commercial Officer, Colt Technology Services. Can Connectivity be linked to Customer Experience?
Customer experience, or CX, is one of those areas that makes you wonder why it’s being discussed: after all, which organisation would go out of its way to say that customers were not a priority? Nonetheless, talking about customers can be very different to actually improving how they interact with the business, not least because the link between theory and technical practicality will not always be evident.
In the case of connectivity, the task is even harder. In principle there should be a connection – if you (as a customer) can’t connect to the service you need, or if it is slow or unresponsive, your experience will suffer. In practice, however, connectivity is often seen as low-level infrastructure, with little value to add beyond linking things up.
These challenges made our research on the link between connectivity and CX, conducted in partnership with Colt, all the more fascinating. The top-line finding was that organizations did see a link, and furthermore, were actively looking for ways to improve CX via connectivity. Following the research, I sat down with Keri Gilder, Chief Commercial Officer, Colt Technology Services, to find out what she thought of the findings, and what the provider was doing in response.
- From the perspective of a connectivity provider, how are you framing the increasing attention on customer experience? How is it impacting both your wholesale and enterprise customers, and what do you think is behind this?
Customers in all sectors are demanding much more from their providers – the consumerisation of IT isn’t a new trend but it’s still highly relevant. People look at the flexibility and service they get from consumer facing companies and are asking why that doesn’t apply to their B2B suppliers. Many telco companies have been slow to adapt to these demands, so the result is that connectivity can be treated as a commodity rather than a differentiator.
Our customers are dealing with massive change, from the growth in cloud applications and the changing structure of the workplace, to security challenges and the constant state of digital transformation. This means the network becomes even more critical for those with a focus on delivering the best experience to customers.
When customers are dealing with these challenges it’s not good enough to sit back and wait for them to tell us what they need – we need to work together to help shape requirements, acting as advisors instead of just a supplier.
- In the report, we saw a number of challenges getting in the way of improving CX delivery, not least how difficult it is to draw a clear picture of what customer experience actually means. How is this manifesting itself in the organisations you speak to?
A ‘good’ customer experience can mean different things to different people and sectors, so it’s not a surprise to see people struggling to identify the best course of action. To some degree it’s the obvious things that people expect – delivering quickly and on time, while ensuring they have access to the information they need.
But for suppliers it’s also about putting yourself in the customer’s shoes; what challenges are they facing and what are their customers demanding of them? From there it’s easier to see how to make a difference to their business and, in turn, how you can improve their experience of working with you.
- A fascinating and repeated finding was that enterprises want connectivity that ‘just works’ from the outset, whether or not it has more advanced capabilities such as flexibility over time. How does this map onto what your customers are asking for?
Our customers have always expected connectivity that just works – the challenge we’re seeing now is that it’s much harder to predict network demand for the coming years or even months. CIOs are having to manage capacity requirements for applications or activities that might not even be on their radar and that’s driving a need for flexibility. This shows how connectivity can directly impact customer experience goals – if the network can’t manage these new services or if it doesn’t have the ability to quickly add new locations or services then it’ll be seen as a barrier, rather than a platform for innovation.
- Also interesting was the low level of importance assigned to Net Promoter Scores (NPS). Is it that such metrics have had their time, or how else would you explain this? [Probably that NPS is an aggregated view of the consequences of other metrics.]
We closely track our NPS – it’s an excellent way for us to measure ourselves as it covers so many aspects of what we provide to customers. But we know it isn’t and shouldn’t be the only measure of good customer experience. It’s also about the other factors identified in the research, like delivering on time and how you respond if something goes wrong.
If you don’t deliver on promises, meet expectations or go above and beyond to keep the customer happy then you won’t score highly. I don’t think it was a surprise to see that people don’t use NPS as a way to measure their suppliers, but if suppliers are getting everything else right, then their NPS score will naturally improve.
- Respondents told us that the most important way to improve the link between connectivity and CX, was to get their own houses in order, improving skills sets and operational processes. How is Colt as an organisation helping its customers achieve this goal?
We’ve always been focussed on customer experience, and our vision is to be known as the most customer-oriented business in the industry. This means that we need to do much more than provide connectivity to our customers. Whether that’s Enterprise, Capital Markets or Wholesale, it’s about working in partnership with our customers to find out what their goals are and then collaborating to show how we can help achieve them.
A crucial part of achieving this comes from listening to our customers and taking the time to understand the challenges they’re facing; one way in which we do this is through Innovation Workshops. These take place in the early stages of an engagement, bringing together multiple stakeholders with Colt experts to fully understand the broader business problems and how we can use technology to solve them. This means we’re providing more than just technology – we’re helping customers with their business objectives.
The other aspect is in leading from the front – everyone at Colt has a performance objective relating to customer experience. We also have several internal programs running which don’t just superficially look at customer experience but are seeing the business invest in new tools and create new processes to ensure we’re going above and beyond what people expect from a connectivity supplier.
2020
Posts from 2020.
April 2020
04-08 – VDI in the Age of Covid-19: Remote Work and the Challenge of the Virtualized Client
VDI in the Age of Covid-19: Remote Work and the Challenge of the Virtualized Client
These are trying times, not least because corporate life needs to go on, which for millions of businesses means delivering compute resources to employees at home. Remote work is no longer an option or an initiative – almost overnight it’s become a global imperative. And just like that, IT pros worldwide face a massive challenge.
One possible solution is Virtual Desktop Infrastructure (VDI), which connects users via web browser to a virtual machine instance running on a server somewhere, be it inside a corporation’s data centers on Citrix or VMware, or provided by a cloud platform such as AWS or Azure.
Of course, the concept of remote access to a pre-configured virtual desktop is not new. I can remember how on one of my first analyst assignments, some two decades ago, I was tasked to determine the total cost of ownership (TCO) of thin-client systems against their local, rich-client desktop equivalents. Twenty years later, I return to this suddenly urgent topic to ask some of our analysts what’s new about VDI and how it might address our current challenge.
First off, VDI is still very much a thing, with technology that continues to evolve and leverage hosted, cloud models.
“Every company I’ve worked with in the last 15 years has started some sort of VDI environment – especially now that teams are upgrading or replacing legacy Citrix environments,” says Iben Rodriguez, whose day job crosses a number of enterprise clients in the financial and government sectors. “We had a company come to us for an expansion of their 1000-user AWS Workspaces solution, and another customer is moving 3000 users to a Microsoft VDI solution away from Citrix.”
He says that a 2009 user deployment on VMware Horizon VDI still runs on Cisco and EMC hardware.
And Iben contends that there’s still plenty of TCO benefit to be found in VDI deployments. Even if endpoint hardware prices have dropped, cost overhead can still be significant in an unmanaged environment. The lower costs enabled by centralized control appear to be the compelling reason to move to a VDI approach. Add to that the security and management benefits across both the remote desktop and the communications link, and the benefits add up. Control enables simplicity, which reduces risk — all good reasons to adopt a virtualized model.
However, centralized control can cause conflicts with the user base, which is after all the group being served. And that conflict, says Andrew Brust, is all about end users wanting control.
“While VDI from old school Citrix and Remote Desktop to newer cloud-hosted platforms are cool, people find that desktop-on-desktop gets confusing and nobody loves it. Just as people like apps on their phones, people like to install software on their laptops and don’t love delegating control of that away — even if IT does.”
There’s another issue: today’s “perfect” desktop configuration may not be quite so perfect in six, 12 or 18 months. Management systems have a decay curve, which needs to be factored into the initial business case and approach.
“The gold image problem is real, and a real headache,” says Ned Bellavance, who also warns that the proper hardware needed to support good VDI can be costly. “And it doesn’t help with overwhelmed VPNs or disconnected scenarios.”
The answer, in part, lies in deciding what is worth fixing, and what should remain outside of centralized control. “When you factor stuff out that’s portable, it scales well,” says Brust. “When you try to replicate the full stack, including the personal OS and environment, not so much.”
He adds: “In general, centralizing and templatizing for large-scale deployment of things that are based on personal [computing] environments can hit glitches.”
If this sounds like a compromise, that’s because it is — at least in the short term. Looking further out, we can learn from another domain — Mobile Device Management (MDM) – which has evolved to help organizations control and secure smartphones and handheld devices.
“Many traditional MDM solutions moved to a mobile application management paradigm, because controlling the device is a pain,” says Bellavance. Core to the new MDM approach is the use of containers.
Containers, essentially stand-alone application modules that can run anywhere, are having an impact across the technology space — not least in massively scalable, cloud-based application architectures. Netflix, for example, is the poster child for containerization.
As it turns out, containers are also very useful when it comes to balancing control with user flexibility.
“It’s easier to control the app as a container on mobile devices,” Ned continues. “Ideally we would bring a similar container approach to desktop operating systems, and you wouldn’t need to mess about with local device management.”
Microsoft is an adopter of this model with its Intune app protection product, and the company has leveraged containers to enable Windows to run on ARM processors.
“The container approach (broadly speaking) has been liberating in lots of ways. It’s made things work that seemed utterly insoluble for a very long time,” says Brust.
So, how can organizations adopt VDI today, while at the same time planning for the future? The answer is to be realistic on cost planning in the short term, particularly in terms of management and support overhead created by the huge increase in remote work. At the same time, IT organizations should watch for advances around containerization and how it can enable an optimal blend of end-user flexibility and centralized control.
May 2020
05-07 – Why Should You Bother with Value Stream Management?
Why Should You Bother with Value Stream Management?
What is Value Stream Management?
Value Stream Management (VSM) is the TLA du jour among software development tools, so is it relevant to your organization? We can separate this question into three parts: philosophy, approach, and benefits. First, a bit of history: like so many dev practices, the term originated in manufacturing, specifically lean engineering. The core principle is to consider not only a process as a whole, but also each step, in terms of the value it brings. It should be possible to log not only the benefits (e.g. features built, problems solved) but also the costs, measured as time spent, people/hand-offs involved, and so on.
The relevance to DevOps pipelines and approaches may be self-evident to organizations looking to scale their efforts. DevOps is used by many enterprises as an innovation mechanism, based on a belief that iterative, continuous, fast delivery is key to success. But scaling DevOps across the organization comes with pitfalls, including loss of efficiency, a lack of coordination between stakeholders, increased bottlenecks, and prioritization difficulties. All of which can hold back deployment, and undermine the very reason for doing DevOps in the first place.
Enter VSM, which can help to achieve more consistent and productive pipelines. The core notion is to think of activities across development and operations in terms of a ‘value stream’ (rather than a pipeline or workflow), in which value should be measurable, and maximized, at every stage. This enables bottlenecks to be identified from a tactical perspective, plus the overall process can be assessed in terms of hand-offs, repetition, and other criteria, enabling it to be improved as a whole.
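By way of illustration only (this sketch is not from the original report), the stages of such a value stream can be modeled and measured in a few lines of code. The stage names and durations below are invented, but they show how wait time can dominate lead time and point straight at the bottleneck.

```python
# Illustrative sketch: a delivery pipeline modeled as a value stream, with
# simple flow metrics. Stage names and durations are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    active_hours: float   # time spent actually working on the item
    waiting_hours: float  # time the item sat in a queue or hand-off

pipeline = [
    Stage("Design", active_hours=8, waiting_hours=16),
    Stage("Develop", active_hours=24, waiting_hours=8),
    Stage("Test", active_hours=6, waiting_hours=80),   # a likely bottleneck
    Stage("Deploy", active_hours=2, waiting_hours=4),
]

lead_time = sum(s.active_hours + s.waiting_hours for s in pipeline)
flow_efficiency = sum(s.active_hours for s in pipeline) / lead_time

print(f"Lead time: {lead_time:.0f} hours")
print(f"Flow efficiency: {flow_efficiency:.0%}")
for s in pipeline:
    print(f"{s.name}: {s.waiting_hours / lead_time:.0%} of lead time spent waiting")
```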
What Does This Mean in Practice?
Key to the success of Value Stream management is measuring the DevOps pipeline in terms of how well it meets the needs of its users and customers. It helps DevOps managers measure what is important, in terms of pipeline efficiency, effectiveness of results, and overall business value. In addition, it enables and encourages that all-important link between technology and business decision-makers, as they look to collaborate, set priorities, and drive delivery.
From a principles and tooling perspective, VSM incorporates the following aspects:
- Value Stream Mapping: modeling the steps of a pipeline in a measurable fashion, for example using a graphical tool
- Value Stream Efficiency: measuring pipeline steps, pulling in information to identify bottlenecks across development, testing, and deployment
- Value Stream Effectiveness: creating measures and dashboards that link to return on investment (ROI), customer satisfaction, and other business-facing criteria
Who Can Use VSM and What Can it Achieve?
If you’re wondering what VSM could achieve for you and your organization, we’ve outlined a few stakeholder-specific cases below.
Chief Information Officers (CIO)
What can it achieve? If you are looking to deliver on a board-level strategy of digital transformation, VSM enables you to offer a constructive response to that strategy, while staying on top of decision-making.
How can you achieve it? Show how VSM can deliver on both agility and governance, demonstrating value without undermining the potential for innovation. CIOs can use VSM as a route to remove risk and increase control of technology-based innovation in the face of consumerization and Shadow IT.
Business Leaders
What can it achieve? If you need to drive more effectiveness from your technology investments, VSM offers a conduit to conversations around measurable software delivery, based on business outcomes.
How can you achieve it? Engage with the IT teams and work together to deliver both efficiency and effectiveness using VSM and agree on delivery metrics that link to key areas of business strategy such as growth or customer experience.
IT Development or IT Operations Leaders
What can it achieve? VSM can increase efficiency, freeing up time and money from process overheads, which can be better spent on innovation goals.
How can you achieve it? VSM can be a way to request more structure and control in the development process, reducing the level of conflict across the deployment wall. Consider tools and mechanisms that can deliver management information both in terms of process efficiency and resulting business impact. Also, find different ways to present information to different stakeholder groups, enabling intercommunication and supporting broader decision making.
Adopting a Staged Approach
Different organization types can view and deploy VSM according to their own experience and maturity. For example:
- Enterprises with early-stage DevOps practice can use VSM to set out a framework for DevOps best practice, which can then be replicated across the organization as additional departments and projects adopt it. For these organizations, we would advise deploying the minimum necessary VSM mechanisms you need to start DevOps on the right foot, implementing an approach that builds in process efficiency measurement and business value from the outset.
- Mature DevOps organizations, and organizations that have implemented DevOps across development and operations, can use VSM to achieve greater efficiency and governance. For these organizations, you can use VSM to define core processes and toolchains to meet your needs, consolidating existing DevOps toolchains and practices. VSM can also drive automation decisions for testing, security, and governance aspects of the DevOps cycle, so you can justify the spend on tools via overall cost savings.
Conclusion
“Value” has many meanings, and by following a process that defines what it means for different parts of your organization, you will build a better understanding of your development process and its value to the whole business. This is no silver bullet: rather, by breaking down development and deployment into discrete steps, VSM helps you make small changes that incrementally improve development and production processes. These adjustments can be easy to implement but, repeated throughout an organization, they compound into significant improvement.
These techniques can lead to better efficiency and good governance, which can create a huge variety of positive outcomes for an organization. Happier customers, bigger profits, higher productivity, happier staff, and better internal communication can all be by-products of using VSM. Note, however, that while VSM tools enable organizations to collate and visualize information, it is equally important to have a business-facing, value-oriented mindset across the software delivery process and beyond.
You can read more about VSM in our report Research Byte: Value Stream Management. And stay tuned – we’re producing a Key Criteria Report and Radar to help you set strategy and evaluate vendors. If you have any questions or feedback, you can find the author, Jon Collins, on Twitter.
05-20 – Three Steps to Successful DevOps
Three Steps to Successful DevOps
You can listen to the entire conversation here.
July 2020
07-14 – Is Visibility the DevOps Magic Bullet?
Is Visibility the DevOps Magic Bullet?
DevOps is an area defined by aspiration – there’s a better way of doing things, it suggests, a path to faster software delivery, better results, more efficient processes and higher levels of productivity. The potential is there, but as we cover in our report Driving Value Through Visibility, the path to success is beset by challenges. Not least:
- Siloed teams
- Complex architectures and legacy constraints
- Complicated and challenging compliance issues
- Communications issues internally
These challenges can stymie development efforts, diminish potential value and negatively shape the view of development projects internally. It’s not just traditional enterprises that can suffer: younger, reputedly ‘agile’ organizations can hit similar challenges when they attempt to scale.
At least part of the answer, in our experience, comes down to visibility (or, as somebody once said, “if you can’t measure it, you can’t manage it”). Building on themes we have been developing across our DevOps report series, visibility needs to be end-to-end, across the pipeline from development and into operations. This gives management the information they need to prioritize and plan; it helps teams identify bottlenecks in development; and it offers wider understanding of ongoing innovation projects, and their value across the organization.
As we discuss in our Key Criteria report, end-to-end visibility is a key element of Value Stream Management, which enables value creation across a process, helping deliver on the goals of DevOps, such as efficiency, improved time to value and so on. Not only this, but it helps shift the mindset of an organization from project- to product-based thinking, effectively focusing on the outcomes for the customers, not the needs of the project.
As organizations look to scale their practices, they need to increase visibility within their organization in parallel with their processes becoming more complex. So, while there may be no such thing as a magic bullet (not in this industry at least), visibility helps decision-makers know which way to aim.
To read the full report, click here.
October 2020
10-26 – When Is a DevSecOps Vendor Not a DevSecOps Vendor?
When Is a DevSecOps Vendor Not a DevSecOps Vendor?
DevOps’ general aim is to enable a more efficient process for producing software and technology solutions and bringing stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process—security. Too often, security is brought into the timeline just before deployment, risking last minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.
But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.
The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is in part principle and part tools. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is treated as early as possible in the development process. So far, so simple.
From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish capabilities that directly enable the goals of DevSecOps from a process perspective, from those designed to support DevSecOps practices. We could define them as: “Those that do, and those that help.”
Here is how to tell the two types of vendor apart, and how to use each.
Vendors Enabling DevSecOps: “Tools That Do”
A number of tools work to facilitate the DevSecOps process – let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:
- Create: Help to set and implement policy
- Develop: Apply guidance to the process and aid its implementation
- Test: Facilitate and guide security testing procedures
- Deploy: Provide reports to assure confidence to deploy the application
The key element that sets these tool sets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the roadmap for the development from start to finish.
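As a minimal sketch of that gating idea (not drawn from any particular product), imagine a CI step that receives scanner findings in some structured form and refuses to let the build promote when open findings exceed policy; the findings and thresholds below are invented.

```python
# Minimal sketch of a DevSecOps-style gate: stop the pipeline progressing when
# unresolved findings exceed a policy threshold. Findings and thresholds are
# hypothetical; in practice they would come from real scanner output.
import sys

POLICY = {"critical": 0, "high": 0, "medium": 5}  # maximum allowed open findings

findings = [
    {"id": "CVE-2020-0001", "severity": "high"},
    {"id": "hardcoded-secret", "severity": "critical"},
    {"id": "weak-cipher", "severity": "medium"},
]

def gate(findings, policy):
    counts = {}
    for finding in findings:
        counts[finding["severity"]] = counts.get(finding["severity"], 0) + 1
    # any severity whose count exceeds the policy limit is a violation
    return {sev: n for sev, n in counts.items() if n > policy.get(sev, 0)}

violations = gate(findings, POLICY)
if violations:
    print("Security gate FAILED:", violations)
    sys.exit(1)  # a non-zero exit stops the CI job, so the build cannot promote
print("Security gate passed")
```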
Supporting DevSecOps: “Tools That Help”
In this category we place those tools which aid the execution, and monitoring, of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms a part of the Develop stage, static/dynamic application security testing (SAST/DAST) is integral to the Test stage, and runtime application self-protection (RASP) is key to the Deploy stage.
Tools like this are a vital part of the security tooling layer, especially just before deployment – and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role, rather than being DevSecOps tools per se.
DevSecOps-Washing Is Not a Good Idea for the Enterprise
While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.
The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.
At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”
December 2020
12-10 – Asked and Answered: How Incorporating AI into DevOps Will Unlock the Future
Asked and Answered: How Incorporating AI into DevOps Will Unlock the Future
Incorporating AI into DevOps
So often, technology practitioners can feel like the cobbler’s children of tech—we talk about how development and operations can be automated, yet the capabilities we have at our disposal are frequently incomplete, fragmented, complex, and, in all, a long way from the vision of what tools could be. So, is there any hope for the future? I spoke recently to Eran Kinsbruner, Chief Evangelist at Perfecto, and Justin Reock, Chief Architect, OpenLogic at Perforce Software, about incorporating AI into DevOps, and how the processes of DevOps will be transformed in the next five years by automation. Eran has just got a book out—Advancing Software Quality: Machine Learning and Artificial Intelligence in the Age of DevOps (available from all good outlets), so he should have a few of the answers.
What did I learn? First, the importance of a focus on value in DevOps; second, the role of AI and ML in accelerating DevOps; and third, the opportunities that exist today for AI-based improvement. Our conversation has been edited for clarity, but here are the key points:
Jon Collins: What does DevOps really mean for you and what makes it work?
Eran Kinsbruner: DevOps is not a closed term that people really understand perfectly. I love Microsoft’s definition of DevOps: it is a union of people, process and product, helping deliver constant value to their customers. Fantastic. But it’s still quite vague.
So how do I do it? I have the people, I have feature teams, I have technology. I’m building these features in a short amount of time. But what is value? How do I know that I’m really adding value to my clients?
Perhaps execution speed is vital to them? In my mind, value is not just about execution speed, it’s much more, but you need to listen to your end users. What are they actually looking to get from your product? Sometimes the end developer doesn’t even know how his feature is going to be utilized out there.
Jon: Yes, I agree—you have to ask: What does value mean to your customers? Suddenly you’ve got a conversation: How do we define value? What are the benefits that our customers are getting? And what are they prepared to put in, in order to get those benefits? It becomes a higher level conversation that can steer everything else.
Without that high-level conversation then you just pump stuff out with no clue. It’s like producing cars. Here’s another one. Here’s another one. Here’s another one. Is anyone driving them? I have no idea! So, and how does that relate to quality in your mind? How does quality play across the lifecycle?
Eran: So I am looking at DevOps from the perspective of end users. Are my end users consuming my products? What do they think about my products? And how can I make sense of all the feedback so I can improve and create more value to these users?
So quality is not just about function: put something in, get something out. You want value to equal quality by definition. When you bake your costs into value delivery, you learn what it really means: functionality, performance, response time, availability.
You discover it by testing what’s right from the end-user perspective because if it’s not something that your clients are dealing with, you’re also not testing or providing quality for what matters. So both from a development and quality assurance perspective, you need to be very focused.
What do I need to cover? What do I need to test? On which platform? Which scenarios? Which is the most eloquent feature that someone actually touched in the previous code commit and stuff like that. This is when you find the result: valuable features or products to your end users.
Jon: Great, let’s make it value-first. But how does this map onto the DevOps process, from a pipeline perspective? And how can AI help?
Justin Reock: When I think about DevOps, it goes back to the Theory of Constraints and applying the idea of reducing the amount of friction involved in converting value to throughput. That to me is the essence of DevOps, at least from a business perspective. We’re doing everything we can to reduce the amount of “laying around” inventory, i.e. code that has not been converted into money yet.
The more we can do to reduce the friction of converting our inventory and organizational costs into throughput, the quicker every line of code that a developer commits to a source control repository becomes throughput, or money, out in the market. And if you distill it back to that, then I think that if you look at AI, its place becomes very clear.
The ideal DevOps pipeline is one that will be completely frictionless: a developer checks in his code and that code is then running in production five seconds later, right after passing through a series of tests where no human was ever involved. The customer is buying something, and you converted that code into throughput in a matter of seconds. That’s brilliant and beautiful and elegant, and that is the goal of DevOps and software, and so of AI.
Jon: Let’s get down to the nitty gritty—can we look at an example?
Justin: Sure: take software testing, for example. There are multiple points where we can remove not only the slowdowns that come from having humans in the process, but also, if we do it right, more and more tester bias from the pipeline, which means we have less and less retesting. In a lot of ways, we’re still brute-forcing the way we deal with that problem. We do A/B testing and canary releases, just in case we didn’t think about a possible pathway.
But we still have goals here: DevOps is all about the continuous feedback loop. So you have to get feedback about your product and you have to integrate that into new features and you have to fix bugs, of course. The more we can reduce those issues and prevent them from seeing the light of day, through things like fuzzing and AI, the faster we can get that code out making money.
This all ties together. In a world where it’s all connected software, it opens the door to ambient services, self-driving cars, or completely automated retail venues. It helps create our fully realized virtual and augmented reality where everything is a digital asset, and scarcity is proven through blockchain, but that blockchain only matters if quality is enforced.
Jon: Whoa. That’s quite a leap!
Justin: Yes, you’re right, but I don’t think people really understand the molecular level at which software is about to bloom, due to AI in the DevOps process. Reducing friction in the pipeline is the biggest necessity, and it will open up all kinds of opportunities.
Jon: OK great, let’s drill into this – what is the lowest hanging fruit? What is going to change in DevOps over the next few years, because of AI and ML?
Eran: Let’s think back to the feedback loops. Sometimes developers and DevOps managers think they got it right, and are doing things right, but then a machine learning algorithm comes and sets them free, providing feedback which is quite different from what they thought they would get. ML can help provide unbiased, objective feedback, which doesn’t really look at the product roadmap or anything like that, but it looks at the end users, which is kind of clean.
Then when you merge it with the product decisions and the software delivery cycle, maybe you’re going to get something more solid and more relevant to your clients. That’s what I see as the biggest opportunity right now.
Jon: That all sounds great, it’s great theory. But what do I do to address these ideas?
Eran: That’s a good question. You don’t need to throw everything away and AI cannot really solve everything immediately. But we do need this acceleration of software quality. The noise reduction, the prioritization. We can obviously apply them throughout the entire pipeline, but let’s just focus on testing.
The test cases that are the most unreliable are a good case in point. We call them flaky. They’re showing up red in your CI/CD pipeline and you’re doing nothing about them because you don’t know why they are failing. AI is able to look into these failures, and classify them into different buckets. And suddenly we can see 80% of all these failures are not real bugs. They’re just down to poor coding skills by a test engineer. We now can zoom into the 20% that are real bugs, that are real issues that may impact the value to my customers. Now I have something I can prioritize. I know where my developers need to focus.
So noise reduction and prioritization of testing can result in an acceleration of software delivery. Once you’re applying that into your existing processes, you can move much faster.
Jon: Great, thank you! So AI and ML may unlock huge value in an increasingly digital world. Key right now is to look for direct opportunities to remove friction from the process itself, in testing and across the pipeline. Eran and Justin, thank you very much for your time!
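As a purely illustrative footnote to Eran’s point about classifying failures into buckets, here is a toy sketch of the idea: clustering similar test failure messages so that repetitive, likely-flaky failures group together and the genuinely novel ones stand out. It assumes scikit-learn is available, and the messages are invented.

```python
# Toy sketch: group similar test failure messages into buckets so repeated,
# likely-flaky failures can be triaged separately from novel ones.
# Assumes scikit-learn is installed; the failure messages are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

failures = [
    "TimeoutError: element #checkout-button not found after 30s",
    "TimeoutError: element #login-form not found after 30s",
    "TimeoutError: element #search-box not found after 30s",
    "AssertionError: expected total 19.99 but got 21.59",
    "AssertionError: expected status 200 but got 500",
]

vectors = TfidfVectorizer().fit_transform(failures)  # text -> numeric features
buckets = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for bucket, message in sorted(zip(buckets, failures)):
    print(f"bucket {bucket}: {message}")
```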
12-17 – Beyond Agile: 4 Lessons to Better Software Development
Beyond Agile: 4 Lessons to Better Software Development
The widespread adoption of Agile, coupled with the rise of DevOps, means you’d be forgiven for thinking software development is now an easy, stress-free process. But whenever I speak to developers in the field that’s not the experience they describe. Missed deadlines, accumulating technical debt, and high workloads are all common struggles in the developer community.
This begs the question: if Agile isn’t the solution, what is?
I spoke to Daniel Mostert, a director of infrastructure technology solutions with a large technology services company, to tap his experience leading projects for large-scale applications and helping companies address struggles in their development processes. We discussed the problems inherent in development, the role Agile can have in those processes, and the techniques managers can use to engage their teams and deliver projects on time and on budget. Our conversation produced four key lessons that Mostert says he has gleaned from his 30-year career in development.
1/ Break Down the Process
This is the key to improving any process, as Mostert explained to me, and the sheer simplicity of it means many managers forget to do it. But by breaking down a process, and evaluating each step’s importance to the overall goal, it helps create priorities and sharpen focus in the team.
“I came through the whole process of functional design, object orientation, even up to scale,” Mostert said. “It’s basically all the same: breaking the process into smaller pieces and understanding what you’re trying to do. That’s what it’s all about, but we call it all different things. But in the end, that’s what it boils down to.”
2/ One Size Does Not Fit All
Mostert believes the Agile concept can be very positive and productive in certain environments, but not others. Often, achieving higher productivity requires employing different and creative management techniques and processes.
“If you’re in a large project and you’re developing some functionality which is very clearly defined, I think there’s a lot of reasons for the Agile approach,” he said. “If you’re working in a hybrid environment where you’re doing some maintenance, some enhancements, some development, then the question becomes, ‘What is the priority of the day?’ There’s probably multiple environments that can be laid out, and different approaches for each of these different environments. But good leadership, I think, remains essential.”
3/ Avoid the Scheduling Trap
It’s a key part of many project management training courses: How to create a schedule. But, Mostert said, this is often a fatal mistake from management—looking to control a process from the top down, but with little knowledge of what the project actually requires and how long it’s going to take.
He suggested working with the development team early on to create rough timetables because they can provide better estimates of expected completion dates without setting rigid timescales.
“What’s absolutely not going to work is the story approach of people telling you: ‘This is the date. This is the plan and this is when you have to have it finished,’” Mostert said. “You can’t, from the top, assess the work and say, ‘This is going to be finished on this date,’ because you can’t plan a software project from a schedule. That’s just not going to work. How long does it take you to write a program? How long does it take you to test? How long does it all take? There’s a little bit of Agile coming back into that, that the team has got to assess the work and help with the scheduling.”
4/ Creative Structure, Positive Culture
Mostert is a proponent of adapting management styles and processes to the project at hand, but he has a few “plays” he recommends for specific problems. He likes to pair these plays with a positive team culture, where everyone buys into the new structure and supports each other to achieve various goals. But this approach can have its downsides too, Mostert warned.
One model I found intriguing was to separate teams into “firefighters” for addressing bugs and “heads down coders” for developing functionality. Mostert said this drive for creating good functionality is critical to avoid burdening the project or product with crippling technical debt.
“You have to structure the team so that you have one team that can pick up all this nonsense of changing priorities and things like that,” he said. “And then you have to have another part of the team that can just roll with the functionality that you want to create.”
Mostert said the danger comes from creating technical debt that never gets cleared. Teams lay down a solution and plan to refine it later, and then later never comes. Over time, Mostert said, “You end up with one or two people that know what’s going on in that mess and they are forever busy firefighting. And you never get out of that spin.”
The biggest challenge, according to Mostert? Keeping the team and the culture intact as different members roll in and out of the group. If one or two key members leave, he said, “then you almost start all over again.”
Conclusion
Mostert’s ideas struck a chord with me, especially the way they address the importance of flexibility in approaching a software development project. It’s a vital quality for project managers to possess, especially when operating at scale.
Agile processes have been very successful in enabling the modern, efficient culture of software development, but they aren’t applicable to every environment, as Mostert so clearly expressed. His philosophy of changing processes and team structure to suit the project gives agency to project managers to assess the tasks in hand, evaluate their brief and team, and make creative decisions that may or may not fit into a traditional Agile structure. In the end, all that matters is that it works.
2021
Posts from 2021.
February 2021
02-26 – Getting Started with Value Stream Management
Getting Started with Value Stream Management
Getting Started with Value Stream Management
Value stream management (VSM) has been one of the hottest buzzwords in DevOps over the last few years—but what does it actually mean in practice, and how can DevOps professionals implement VSM in a way that helps them achieve more without disrupting their existing pipelines?
I sat down with Siddharth Pareek, Global Practice Lead for DevOps at NatWest Group UK to discuss VSM, why it matters, and what DevOps teams can practically do to start their journey toward using VSM to improve their development process.
What is VSM?
Jon Collins: Hi Siddharth, and thanks for joining me today. My theory is that Value Stream Management is really just the reinsertion of good old management, governance, and visibility principles into fast-moving, dynamic Agile. How does that resonate with your organization, Siddharth?
Siddharth Pareek: I think it resonates with most of the organizations I’ve worked with. We have started running tools and technologies, we have started running towards cloud, digital transformation—but in that journey of going through change or being in competition, or getting digital, getting cloud, somewhere down the line we have forgotten, “What is the value we are actually trying to derive? For whom? Why?”
Jon Collins: It’s one of those things that when you’ve realized that you’ve forgotten, you’re like: “Oh my goodness! Right, we definitely need to focus on this!” It’s just something that’s been forgotten. It’s not like we’re building a rocket ship here. It’s merely that we’ve forgotten why we’re here in the first place.
Silos in the Pipeline
Siddharth Pareek: It may be possible that people are focusing on just one part of the delivery pipeline. There are different teams assigned at different stages. Each team does not have the view of the whole delivery. So if you ask them what is happening at the team level, you may have an answer, or asking the ops team what’s happening at their level, they may have an answer. But if you ask, “What is happening in that whole delivery pipeline?” They don’t have an answer for it.
Jon Collins: The thing I’ve learned is, essentially, that we should move from a project mindset to a product mindset. But I’ve worked in project offices, you know, PMOs and so on, where everything’s about the deadlines. Everything’s about the Gantt charts. Everything’s about the scheduling. And then when you hit a deadline, you’ve done it, and you’re pleased, but you’ve got absolutely no inkling of whether or not what you did was useful. So this idea that it’s actually about a product: when you build a product, you deliver it, and then you’re really keen to know what it is that you have, whether or not your customers are using it and whether it works, and so on.
Getting Started with Value Stream Management
Siddharth Pareek: For DevOps professionals in the field, about to go on the journey to search for value using VSM, do you think it is something which has been forgotten, that could come back into the foreground, or do they need to discover it completely fresh?
Jon Collins: Value stream management is about thinking of things as value streams and then managing them as value streams. But then how do you do that and what does it mean in practice? The first step is, if you want visibility of your value stream, which is essentially onto your development pipeline, then you need to know what it is. So value stream mapping is vital. There are tools out there that essentially enable you to map out the different activities that happen in your pipeline in any process.
I’ve had really good experience with the business process side of things where you do lots of interviews, you write down exactly all the different stages and then you present it back and everyone says, “Oh, that’s great. Fantastic. Finally, now I understand it!”
So if you’re not getting on top of outlining your value streams and where that value lies, then a really good first step is to just start speaking to people, work out what’s happening. It doesn’t matter if it’s being done efficiently, because the goal of the next stage is to help it be done more efficiently. What matters is, do you understand how things are built? Do you understand how things are deployed today? If it’s complex and convoluted, you’re already halfway towards solving the problem because you are already identifying areas that you can address.
Efficiency is King when getting started with Value Stream Management
Jon Collins: I’m a great believer in separating out efficiency and effectiveness. Efficiency, essentially, is doing things right. So if you’re doing things the wrong way, you’re probably wasting money: you’re spending time doing stuff you don’t need to and you’re creating problems for yourself that are unnecessary. In software development terms that maps onto bottlenecks. For example, something goes off to the test team. It sits around for two weeks, waiting for them to get through their testing backlog, and then finally comes back to you, which is very inefficient, of course. So, you can start to identify things like that.
Now we’re happy with our pipeline, which we’re calling a value stream, the next step is really the master class of VSM: It’s the why.
Siddharth Pareek: Exactly. Are the things that you’re building actually going to help people? How can you measure the value in features that you’re building? How can you rank them according to whether or not they’re supposed to give value? What metrics do you want to put around that?
Jon Collins: And then, how much do they cost to build? And then when you release them: How can you show that they are actually delivering the benefits that you expected?
This could be a feature on a website that simplifies the way that people buy multiple things. You build it, you implement it, you add it to your shopping basket functionality, then you have to examine whether or not people are buying more stuff from you. Or are they buying faster or has the average order value increased? Or you can look at customer experience metrics, etc.
But ultimately in that situation, you’d care mostly about people spending more with you. And so good tools in the value stream management sphere actually enable you to link metrics like website effectiveness and the amount that’s going through the shopping cart with the pipeline so you can get a direct feedback loop on the benefits that your stuff is providing.
So that’s really the kind of toolkit around values. And value stream management, of course, is a response to a need, because it’s emerged that people aren’t necessarily thinking about value.
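To make that feedback loop concrete, here is a small, purely hypothetical sketch (not from the conversation) of the before-and-after comparison a value stream tool might surface for the shopping-basket feature Jon describes; all the numbers are made up.

```python
# Purely illustrative: compare average order value before and after a feature
# ships — the kind of business-facing signal a VSM tool might link back to the
# pipeline. All figures are invented.
from statistics import mean

orders_before = [42.10, 38.50, 55.00, 47.25, 40.00]  # order values pre-release
orders_after = [51.80, 60.20, 49.95, 58.40, 62.10]   # order values post-release

aov_before, aov_after = mean(orders_before), mean(orders_after)
uplift = (aov_after - aov_before) / aov_before

print(f"Average order value before: {aov_before:.2f}")
print(f"Average order value after:  {aov_after:.2f}")
print(f"Naive uplift attributed to the feature: {uplift:.0%}")
```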
March 2021
03-25 – Asked and Answered
Asked and Answered
You can also listen to the full conversation here.
April 2021
04-13 – The Why of Value Stream Management
The Why of Value Stream Management
If you’re like me and have been around the block in tech more than once, you’ve seen three-letter acronyms come and go. Sometimes the technology they refer to is a flash in the pan; other times it hangs around for a bit before being subsumed into the platforms we build upon.
And so it is with value stream management (VSM), which has grown in popularity in DevOps circles. The first question I tend to ask about this is, why? Is this some new innovation that needs a name, or has someone spotted a weakness in existing tools and methods?
In a nutshell, VSM refers to the need—or the ability—to have visibility over how software is being built. As units of function pass along the pipeline, from concept to deployment, managers can benefit from understanding how this is taking place, from speed of development, to where the bottlenecks are, to what value is being delivered, and so on.
The question of whether we need VSM is particularly pertinent in the field of software development, not least because people have been building applications for an awfully long time. You’d think we’d know how to do that, and how to manage the process by now.
So, has the DevOps world hit an epiphany where suddenly it discovered the secret to life, the universe, and how to develop software? Not quite. VSM (which also has a heritage) exists as a response to a current need, so let’s take a look at the causes.
First, let’s face it: Software development has been running itself into the sand for decades. As systems became larger and more complex, linear processes couldn’t keep up or, more accurately, increasingly slowed things down. There may be some waterfall advocates still out there, but all too often, the process itself was the bottleneck, hindering innovation.
Back in the nineties, pockets of people looked into different ways of doing things. Some went for lean manufacturing approaches and Japanese efficiency techniques. Others focused on outcomes, with use-case-driven design and eXtreme programming, both being about just getting stuff done. Still, when I was training people in agile development methodologies such as DSDM, such approaches were very much the exception.
And then a new reality appeared, driven by the Web, open source, RESTful APIs, and more, where kids were getting stuff done and leapfrogging older, more crusty approaches. Sites and apps needed to be developed fast; they needed to be put together and put out there, quick. People started to say: Look, can we just get that website by next week?
The need for speed was very much driven by fear, and we’re still seeing this today as organizations are (rightly, yet hyperbolically) being told how they need to transform or risk going out of business. But as software development accelerated, it hit new challenges and bottlenecks—not the least of which was the need to control change (one of the founding principles of DevOps, in 2007).
Fast forward to today, and there’s a whole new set of challenges. The fact is that any approach, if applied universally, will eventually show weaknesses. In this case, “just” developing things fast will come at the detriment of other aspects, such as developing them well (cf. shift-left quality and security), or delivering things that make a positive difference.
The latter is where VSM kicks in. It basically serves to fill a gap: If you’re not thinking about whether you’re doing the right stuff, in the right way, then it’s probably time to start. We are now in an age where agile practices, which used to be the exception, have become the norm. But agile itself is not sufficient: “managed agile” is what’s needed.
Which brings us to another challenge. The world has moved from scenarios where everyone was building stuff in the same (waterfall) way, to using development processes that flex according to what people want to do. This is great when you’re just getting going and want to focus on building stuff, but not so good when you want to, say, switch teams and crack on without learning how everything works again.
Frankly, development processes have become fragmented, inefficient, cumbersome, and costly. Which is not good—teams don’t want to be spending their time managing processes and tools, when they could be building cool new applications. And this is where VSM comes in.
The term value stream comes from manufacturing. The easiest way to think about it is to see what you’re trying to deliver as a stream of activities that build value on top of each other. So, first, start thinking about the development pipeline as a value stream; make it efficient and effective, then look to standardize value streams across the organization.
Someone recently asked me: Isn’t VSM just applying business process modeling and management techniques to software now? And I answered: it is, completely; it’s business process modeling and management applied to software development. This goes back to good old Hammer and Champy’s definition of a business process: a sequence of activities that delivers value to a customer, and that’s what software delivery should be.
Value stream management exists because it has to, right now: it is still missing from many places that are trying to implement DevOps practices. So it’s really the reinsertion of tried and true management governance and visibility principles into fast-moving, dynamic, and agile environments.
Will VSM last? That’s another good question. I’m hearing some organizations find VSM to be yet another overhead (clue: they’re probably doing it wrong). I’m also of a mind that if we, as a collective of development and operations advocates, could agree that we don’t need every individual project to reinvent best practice, we could probably standardize our pipelines more, allowing more time to get on with, yes, the cool stuff.
I don’t want to see a return to onerous methodologies such as waterfall. I do want to see innovators innovate, developers develop, and operators operate, all with minimal stress. I’m watching with interest the DevOps Institute’s move toward assessing capabilities, I’m enjoying seeing the adoption of product-based approaches in software development, and I’m talking to multiple vendors about how we might see pipelines as code form into a Terraform-like open standard.
All of these threads are feeding a more coherent future approach. There’s a catalyst for all of this, namely microservices approaches, which beg for straightforwardness in the face of the complexity they create. Cf. a recent conversation with DeployHub’s Tracy Ragan about the need for application configuration management.
I realize I haven’t answered the question yet. I believe VSM will prevail, but as a feature of more comprehensive end-to-end tooling and platforms, rather than as an additional layer. Managed value streams are a good thing, but you shouldn’t need a separate tool for that, isolated from the rest of your toolchain.
So, ultimately, VSM is not a massive epiphany. It’s simply a symptom of where we are, as we look to deliver software-based innovation at scale. The journey is far from over, but it’s reaching a place of convergence based on microservices (which, somewhat ironically, go back to modular design principles from 1974). Best practice is emerging, and will deliver the standards and platforms we need as we move into the future.
June 2021
06-02 – Becoming a Learning Company: Why Boosting Technology Training is Just as Important as Operations
Becoming a Learning Company: Why Boosting Technology Training is Just as Important as Operations
In the highly competitive and skilled world of enterprise IT, continued technology training has always been a huge part of career development. The recent massive shift to remote work combined with the continued democratization of learning platforms is enabling a whole generation to find the training it needs online, anytime, and from anywhere.
What does this fundamental shift from training a lucky few within the company to a distributed world with self-taught developers and engineers mean for business and the skills IT workers need to succeed?
To answer this question, I spoke to Don Gannon-Jones, Head of Software Developers Skills for technology training platform Pluralsight. We spoke of the evolving use of the platform, how the pandemic has affected attitudes toward technology training, and how enterprises can become ‘learning companies’ with ongoing training at the heart of business and IT strategy.
Jon Collins: Hi Don, and thanks for joining me today. What has been the biggest change you’ve noticed on the Pluralsight platform, and how have individuals and companies been using it over the past year?
Don Gannon-Jones: It’s funny. The pandemic went through this really weird cycle that in retrospect was pretty predictable. A lot of people had massive job insecurity, so they either needed to learn a new skill, or what we saw was a lot of people who wanted to skill up.
If there’s a takeaway, it’s that a lot of people have learned that your job is only your job. Your career must be all-encompassing: it’s up to you to make sure your career path is up to snuff, particularly in technology, so when you need a new job, your career is ready to take you there.
The second thing I noticed was a lot of companies experiencing a corporate version of the same thing. It goes something like, “We’ve been doing things this way for a long time and all of a sudden, some key things in our world no longer fit together and we weren’t prepared for that.” Training wasn’t up to date, so there’s been an enormous demand for companies to upskill their teams.
Some companies told us, “We need to triple down on technology, and bring in a ton of people skilled in Agile development, who are much broader technically. It’s not just going to be Java and C#. We need a broader range of skills in house.” But they were unprepared.
That’s because the market was not prepared to give them the right people. We’ve started to see a lot of insourcing and more apprenticeships. We’re seeing companies bring people in from non-technical sides of the company, like business analysts and project managers. These are roles that were perhaps technology adjacent. So companies are teaching them to be software developers, systems engineers, architects, and so on because people with the right skills aren’t there.
Jon Collins: Which parts of that have accelerated? Is it Agile, or project management, or the deep engineering that people are lacking?
Don Gannon-Jones: It’s the technical skills—software developers in particular. It’s not so much that there’s this sudden massive demand for software developers. It’s the variety of software developers: “We need a little bit of JavaScript, a little bit of this, a little bit of that.” All of a sudden, shops that used to be more monolithic from a technology perspective have become super broad, so that’s what they’re looking for.
Attitudes toward technology training and certifications tend to “roller coaster” in our industry: first organizations care, then they don’t. Demand for cloud certifications has exploded, and it’s unfortunate, because I feel like every time we go through one of these “certifications exploding” cycles, three or four years later we tend to regret it.
Employers start looking for certifications as a minimum bar of competency and that becomes a barrier. It becomes a checkbox on the job posting, so the entire world goes out and starts studying real hard to earn certifications. Then we realize the minimum bar wasn’t all that high to begin with, and we regret having leaned into certifications so hard. And that is where we are right now. Everybody wants people who know Azure, Google Cloud and AWS. Those are almost table stakes.
We’re seeing an incredible trend internally of companies pressing people to get skilled up, so they can leverage the cloud technologies that they need.
Jon Collins: Something I’ve learned over the decades is how much IT is run by non-technical drivers. One is quarterly sales cycles and the other is CVs. Over here, we’ve had the furlough scheme. A lot of people are working part time or being laid off on full pay. This is a great time for them to think, “Let me look at my CV and get it up to scratch, so I can get another job.”
Don Gannon-Jones: For people who have the personal ambition and self-starter drive to do that, it really is a great time. We’re seeing a lot more people engaged with fundamental learning paths across all languages with entry level certifications. That’s obviously what those are for — someone just getting started. We’re also seeing people jump from adjacent industries into the hardcore side of technology, because I think there’s a feeling there’s going to be more jobs in that area. Tech is one of the easiest types of work to do remotely. It’s not just people in the US. It’s people throughout Africa, South America, and beyond. If you’ve got the Internet, you can work.
Another interesting part of this is around diversity, which is related to insourcing. We’re starting to see a few companies that have often expressed strong commitments to diversity, particularly amongst their technical workers, but have struggled to make it happen. If you’re not getting a diverse pool of candidates, then you’re not going to achieve your diversity goals. And while everyone was still hiring for skills, they just said: “Well, we did the best we could, and we’re going to continue to try and do better.”
Now we’re seeing these insourcing programs starting to be successful. They’re super hard to set up, can be a big risk, and they take a lot of management. But when they work, they work incredibly well. The result is they’re going to hire, knowing they have the ability to train anyone up. We’re seeing some companies confident enough in their ability to build skills, and they’re hiring for things they can’t build, including diversity.
Jon Collins: This is fascinating. I’m a great believer in the need for diversity of thought in order to innovate, which means diversity of people. It also means you don’t have to hire for skills, but for other abilities, including the ability to learn.
Don Gannon-Jones: Yes, indeed. A popular phrase for a few years has been, “Every company is now a tech company, whether they like it or not.” And I think there’s a corollary to that, which is that every tech company needs to be a learning company. You need to be a skills company.
The only way you can survive in the long term, and the only way you’ll be able to get all the things you need, is if you can reduce your dependency on the job market to bring you the skills your company needs. You must be able to control those skills through technology training. Once you do that, you can have any technology you want. And you can have anything in the workforce that you want.
It can be local. It can be remote. It can be diversity. You can do all those things if you can confidently bring skill levels where you need them. I think there are a lot of technology and business leaders starting to see learning isn’t a “nice to have.” It’s not an employee benefit. It’s a core competency, and it’s just as crucial as accounting, or operations, or any other part of the business.
Jon Collins: Don, thank you so much!
Don Gannon-Jones: My pleasure.
July 2021
07-13 – Why Cloud Observability Now?
Why Cloud Observability Now?
The term Observability might come as a conundrum for anyone who has been involved in IT Operations over the years, given how a major part of the challenge has always been to keep “single pane of glass” visibility on what can be a complex IT estate. Moving to the cloud also requires visibility, across the virtualized infrastructure and services in use — this is to be expected. Less evident at the outset is how cloud-based architectures change the nature of what needs to be managed:
- Cloud-based applications can become very complex, particularly if they are based on microservices.
- They can rely on multiple integration points with external and SaaS-based applications and services, accessed via APIs.
- They are often hosted in multiple cloud environments, or may still have portions hosted on-premises or in private clouds.
- Third-party components, such as JavaScript frameworks, advertising, and analytics, must also be monitored.
Alongside the need to maximize reliability and performance of their applications, organizations also need to focus increasingly on user experience, often directly with customers. For all of these reasons, organizations are today recognizing they need to rethink the way they approach IT Operations: cloud observability is required now, because of both the nature of what is being monitored, and the reasons for monitoring.
Cloud Observability starts with how organizations view a much more extended, and more complex, portfolio of IT assets; changes are also required at a deeply technical level, to collate the required low-level information on how services are performing — system events, alerts, performance data and other telemetry information that can be used to build a picture of what is going on.
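To ground that in something tangible, here is a minimal sketch using the OpenTelemetry Python SDK; the service and span names are invented, and a real deployment would export to a collector or observability backend rather than the console.

```python
# pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Wire up a provider that prints spans to the console; production systems
# would export to a collector or backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # invented service name

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)
    with tracer.start_as_current_span("call-payment-api"):
        pass  # a downstream API call would happen here
```

Each span is one small element of the telemetry picture; multiply this across dozens of microservices and third-party calls, and the need for tooling that collates it all becomes obvious.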
Despite this apparent change of focus, the goals remain the same — to assure the responsiveness of IT operations to any changes in the environment, including expected events (such as deployments), adverse incidents, and outages. Cloud Observability captures the practices, platforms, and tools required to manage cloud-centric IT architectures in such a way that uptime can be maintained at a high level and, when things go wrong, service issues can be minimized. Outages are measured by Mean Time To Resolution (MTTR), and the goal of the observability concept is to drive MTTR as close to zero as possible. Cloud observability products can also reduce costs, by avoiding the egress charges incurred when moving metrics, traces, and logs from the cloud application to a non-cloud monitoring system.
Understandably given the rate of change of digital transformation today, there is no one single way to deliver on the goals of Cloud Observability. Solution providers such as Splunk have extended their existing platforms to embrace observability across cloud and hybrid environments, whilst new vendors have focused specifically on cloud-based applications. In our Key Criteria report for Cloud Observability, we cover how end-user organizations can evaluate different solutions according to their own needs, and we conduct our own evaluation in the accompanying Radar report.
Tools are only one element of a Cloud Observability strategy, which needs to incorporate best practices and structures that fit with a cloud-centric IT Operations mindset. We explore this topic, as well as the rationale for Cloud Observability, in our upcoming webinar. We’d love to see you there, and would welcome any questions you may have: tune in, and together we can flesh out how to deliver a future-proof approach for IT operations.
To learn more, please join us for the webinar on July 15. Register here.
2022
Posts from 2022.
April 2022
04-19 – Ransomware: Why It’s Time to Think of it as a Data Management Problem
Ransomware: Why It’s Time to Think of it as a Data Management Problem
Over the last couple of years, ransomware has taken center stage in data protection, but very few people realize it is only the tip of the iceberg. Everybody wants to protect their data against this new threat, but most solutions available in the market focus just on relatively quick recovery (RTO) instead of detection, protection, and recovery. In fact, recovery should be your last resort.
Protection and detection are much more difficult measures to implement than air gaps, immutable backup snapshots, and rapid restore procedures. But when well executed, these two stages of ransomware defense open up a world of new opportunities. Over time, they will help defend your data against cybersecurity threats that are currently less common or, better said, less visible in the news—such as data exfiltration or manipulation. And again, when I say less visible, it is not only because the incidents are not reported; it is because often nobody knows they happened until it’s too late!
**Security and Data Silos**

Now that data growth is taken for granted, one of the biggest challenges most organizations face is the proliferation of data silos. Unfortunately, new hybrid, multi-cloud, and edge infrastructures are not helping this. We are seeing what we might call a “data silo sprawl”: a multitude of hard-to-manage data infrastructure repositories that proliferate in different locations and with different access and security rules. And across these silos there are often rules that don’t always follow the company’s policies, because the environments are different and we don’t have complete control over them.
As I have written many times in my reports, the user must find a way to consolidate all their data in a single domain. It could be physical—backup is the easiest way in this case—or logical, and it is also possible to use a combination of physical and logical. But in the end, the goal is to get a single view of all the data.
Why is it important? First of all, once you have complete visibility, you know how much data you really have. Secondly, you can start to understand what the data is, who is creating and using it, when they use it, and so on. Of course, this is only the first step, but, among other things, you start to see usage patterns as well. This is why you need consolidation: to gain full visibility.
Now back to our ransomware problem. With visibility and pattern analysis, you can see what is really happening across your entire data domain, as seemingly innocuous individual events begin to correlate into disturbing patterns. This can be done manually, of course, but machine learning is becoming more common, making it easier to analyze user behavior and spot unprecedented events. When done right, once an anomaly is detected, the operator gets an alert and suggestions for possible remediations, so they can act quickly and minimize the impact of an attack. When it is too late, the only option is a full data recovery that can take hours, days, or even weeks. This is principally a business problem: what are your RPO and RTO in case of a ransomware attack? There really aren’t many differences between a catastrophic ransomware attack and a disaster that makes all of your systems unusable.
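To illustrate the principle (and only the principle; this is not how any particular product works), here is a toy detector in Python. It compares each user’s file-write rate in a time window against a historical baseline and flags order-of-magnitude jumps, one crude signal of ransomware-style mass encryption. Every name and number is invented; real systems learn baselines and thresholds statistically.

```python
from collections import Counter

# Per-user baseline write rates, notionally learned from history.
baseline_writes_per_min = {"alice": 4.0, "bob": 6.0}
ALERT_MULTIPLIER = 10  # fixed threshold; real systems adapt this per user

def check_window(events):
    """events: (user, action) tuples observed in a one-minute window."""
    writes = Counter(user for user, action in events if action == "write")
    for user, count in writes.items():
        baseline = baseline_writes_per_min.get(user, 1.0)
        if count > baseline * ALERT_MULTIPLIER:
            print(f"ALERT: {user} wrote {count} files this minute "
                  f"(baseline {baseline}/min) - possible mass encryption")

# Simulated window: bob suddenly rewrites hundreds of files.
window = [("alice", "write")] * 5 + [("bob", "write")] * 300
check_window(window)
```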
I started talking about ransomware as malware that encrypts or deletes your data, but is this ransomware the worst of your nightmares? As I mentioned before, such attacks are only one of the demons that keep you up at night. Other threats are more sneaky and harder to manage. The first two that come to mind are data exfiltration (another type of prevalent attack where ransom is demanded), and internal attacks (such as from a disgruntled employee). And then of course there is dealing with regulations and the penalties that may result from the mishandling of sensitive data.
When I talk about regulations, I’m not joking. Many organizations still take some rules lightly, but I would think twice about it. GDPR, CCPA, and similar regulations are now in place worldwide, and they are becoming more and more of a pressing issue. Maybe you missed that last year Amazon was fined €746,000,000 (nearly $850,000,000) for not complying with GDPR. And you would be surprised at how many fines Google got for similar issues (more info here). Maybe that’s not much money for them, but this is happening regularly, and the fines are adding up.
There are several questions that a company should be able to answer when authorities investigate. They include:
- Can you preserve data, especially personal information, in the right way?
- Is it well protected and secure against attacks?
- Is it stored in the right place (country or location)?
- Do you know who is accessing that data?
- Are you able to delete all the information about a person when asked? (right to be forgotten)
If regulatory pressures weren’t concerning enough to encourage a fresh look at how prepared your current data management solution is for today’s threats, we could talk for hours about the risks posed by internal and external attacks on your data that can easily compromise your competitive advantage, create countless legal issues, and ruin your business credibility. Again, a single domain view of the data and tools to understand it are becoming the first steps to stay on top of the game. But what is really necessary to build a strategy around data and security?
**Security is a Data Management Problem**

It’s time to think about data security as part of a broader data management strategy that includes many other aspects such as governance, compliance, productivity, cost, and more.
To implement such a strategy, there are some critical characteristics of a next-generation data management platform that can’t be underestimated. Many of these are explored in the GigaOm Key Criteria Report for Unstructured Data Management:
- Single domain view of all your data: Visibility is critical, yet attempts to close a visibility gap with point solutions can result in complexity that only heightens risk. Employing multiple management platforms that can’t talk to each other can make it almost impossible to operate seamlessly. When we talk about large-scale systems for the enterprise, ease of use is mandatory (see the sketch after this list).
- Scalability: The data management platform should be able to grow seamlessly with the needs of the user. Whether it is deployed in the cloud, on-prem, or both, it has to scale according to the user’s needs. And scalability has to be multidimensional, meaning that not all organizations have the exact same needs regarding compliance or governance and may start with only a limited set of features to expand later depending on the business and regulatory requirements.
- Analytics, AI/ML: Managing terabytes is difficult enough, but when we talk about petabytes distributed across several environments, we need tools that can surface information quickly, in a form readable by humans. More than that, we need tools that can predict as many potential issues as possible before they become real problems, and remediate them automatically when possible.
- Extensibility: We have often discussed the necessity of a marketplace in our reports. A marketplace can provide quick access to third-party extensions and applications for the data management platform. In fact, it is mandatory that these platforms integrate with existing processes and frameworks via APIs and standard interfaces. But if the IT department wants to democratize access to data management and make it readily available to business owners, it must enable a mechanism that, in principle, looks like the app store of a mobile platform.
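As promised above, here is a toy sketch of what a single-domain view enables: one index over assets scattered across silos, queryable in one place. The paths, silos, and tags are invented for illustration; a real platform would build and maintain such an index automatically.

```python
# A toy "single domain view": one catalog spanning cloud and on-prem silos.
catalog = [
    {"path": "s3://backups/crm.db", "silo": "aws", "owner": "sales",
     "contains_pii": True, "jurisdiction": "EU"},
    {"path": "/nas/archive/logs-2021.tar", "silo": "on-prem", "owner": "ops",
     "contains_pii": False, "jurisdiction": "US"},
    {"path": "gs://analytics/users.parquet", "silo": "gcp", "owner": "data",
     "contains_pii": True, "jurisdiction": "EU"},
]

def find(assets, **filters):
    """Answer questions like 'where is our EU personal data?' in one query."""
    return [a for a in assets
            if all(a.get(k) == v for k, v in filters.items())]

for asset in find(catalog, contains_pii=True, jurisdiction="EU"):
    print(asset["path"], "->", asset["silo"])
```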
From my point of view, these are the main principles of a modern data management platform, and this is the only way to think holistically about data security looking forward.

**Data Management is Evolving. Are You?**

Now back to the premise of this article. Ransomware is everybody’s top-of-mind threat today, and most organizations are focusing on finding a solution. At the same time, users are now aware of their primary data management needs. In most cases, we talk about the first steps to get more visibility and understand how to improve day-to-day operations, including better data placement to save money, searching files globally, and similar tasks. I usually classify these tasks as infrastructure-focused data management: basic unstructured data management functions performed at the infrastructure level. Still, they need the same visibility, intelligence, scalability, and extensibility characteristics of the advanced data management I mentioned above. But now there are increasingly pressing business needs, including compliance and governance, in addition to learning from data to improve several other aspects of the business.
Now is the right time to start thinking strategically about next-generation data management. We can have several point solutions, one for ransomware, one for other security risks, one for infrastructure-focused data management, and maybe, later, one more for business-focused data management. Or we can start thinking about data management as a whole. Even if the initial cost of a platform approach should prove higher than single-point solutions, it won’t take long before the improved TCO repays the initial investment. And later, the ROI will be massively different, especially when it comes to the possibility of promptly answering new business needs.
May 2022
05-01 – Case Study: Datadobi moves from product to platform with GigaOm advisory
Case Study: Datadobi moves from product to platform with GigaOm advisory
Context – From Product to Platform
Datadobi is a fast-growing, unstructured data management software vendor based in Belgium. Since its origins in data migration, it expanded into data protection; then, in 2022, the company broadened into data management with the introduction of its product StorageMAP. The impetus was a realization that Datadobi’s object-based data analysis and movement capabilities could be offered as a platform to support a much wider range of services.
At the same time, this meant moving into new markets, addressing different scenarios, and potentially adopting different terminology. Datadobi was an infrastructure-focused company, and needed to reflect the requirements of its customer base: this had an impact both on its roadmap, and on its go-to-market strategy.
Engagement – Value-add From The Outset
In June 2020, Datadobi worked with Touchdown PR to instigate an analyst program, speaking to a range of analysts for the first time, including GigaOm’s data management lead, Enrico Signoretti. Datadobi recalls how the conversation with GigaOm was much more open than with other analyst firms – “We agreed on many things, disagreed on others,” recalls Carl D’Halluin, CTO, Datadobi.
Part of this conversation related to Datadobi’s planned roadmap. Enrico shared his perspectives on the market, in particular, referencing a model he had created for data management, mapping infrastructure against business governance. Based on this model, Datadobi could see how to adjust the roadmap to fit with market trends around unified data management (UDM), but more than this, the company could now define how to address different and less familiar market segments.
Response – Open and Transparent Advisory
To dig deeper into these topics, Datadobi took out a GigaOm subscription incorporating advisory time. This gave the company’s sales team a focused, compelling way of engaging with customers and prospects in a language they could understand and connect with.
More than this, the idea of Datadobi becoming primarily a universal platform came out of these conversations. In roadmap terms, this drove a need for software partnerships, with Datadobi offering an “App Store for UDM” that partners could plug into.
Outcomes – Building a Deeper Partnership
While it is difficult to put a hard figure on benefits, Datadobi’s relationship with GigaOm helped focus the company on a model and language that it could take to the market around the launch of its StorageMAP product. Not only this, but GigaOm was a pleasure to work with, given its engineering-led, technical focus. “GigaOm analysts know what they are talking about and always give us valuable feedback rather than just trying to sell an offering!” says Carl.
Moving into the future, Datadobi intends to extend its platform into financial and other areas of data management. As it does so, the company sees its partnership with GigaOm as a foundation of honesty, advice and expertise, which it can use to hone its roadmap and drive its customer conversation ever higher.
June 2022
06-19 – Retrospective thoughts on KubeCon Europe 2022
Retrospective thoughts on KubeCon Europe 2022
https://vimeo.com/715991400
I’m not going to lie: as I sit on a plane flying away from Valencia, I confess to having been taken aback by the scale of KubeCon Europe this year. In my defence, I wasn’t alone: the volume of attendees appeared to take conference organisers and exhibitors by surprise, illustrated by the notable lack of water, (I was told) t-shirts and, at various points, taxis.
Keynotes were filled to capacity, and there was a genuine buzz from participants which seemed to fall into two camps: the young and cool, and the more mature and soberly dressed.
My time at KubeCon Europe was largely spent in one-on-one meetings, analyst/press conferences and walking the stands, so I can’t comment on the engineering sessions. Across the piece however, there was a genuine sense of Kubernetes now being about the how, rather than the whether. For one reason or another, companies have decided they want to gain the benefits of building and deploying distributed, container-based applications.
Strangely enough, this wasn’t being seen as some magical sword that can slay the dragons of legacy systems and open the way to digital transformation: the kool-aid was as absent as the water. Ultimately, enterprises have accepted that, from an architectural standpoint and for applications in general, the Kubernetes model is as good as any available right now, as a non-proprietary, well-supported open standard that they can get behind.
Virtualisation-based options and platform stacks are too heavyweight; serverless architectures are more applicable to specific use cases. So, if you want to build an application and you want it to be future-safe, the Kubernetes target is the one to aim for.
Whether to adopt Kubernetes might be a done deal, but how to adopt certainly is not. The challenge is not with Kubernetes itself, but everything that needs to go around it to make resulting applications enterprise-ready.
For example, they need to operate in compliance environments; data needs to be managed, protected, and served into an environment that doesn’t care too much about state; integration tools are required for external and legacy systems; development pipelines need to be in place, robust and value-focused; IT Operations need a clear view of what’s running and where, as a bill of materials, plus the health of individual clusters; and disaster recovery is a must.
Kubernetes doesn’t do these things, opening the door to an ecosystem of solution vendors and (often CNCF-backed) open source projects. I could drill into each of these areas (service mesh, GitOps, orchestration, observability, and backup), but the broader point is that they are all evolving and coalescing around the need. As they increase in capability, barriers to adoption reduce and the number of potential use cases grows.
All of which puts the industry at an interesting juncture. It’s not that tooling isn’t ready: organizations are already successfully deploying applications based on Kubernetes. In many cases, however, they are doing more work than they need to: developers need insider knowledge of target environments, interfaces need to be integrated rather than consumed through third-party APIs, and higher-order management tooling (such as AIOps) has to be custom-deployed rather than recognising the norms of Kubernetes operations.
Solutions do exist, but they tend to come from relatively new vendors that are feature rather than platform players, meaning that end-user organisations have to choose their partners wisely, then build and maintain development and management platforms themselves rather than using pre-integrated tools from a single vendor.
None of this is a problem per se, but it does create overheads for adopters, even if they gain earlier benefits from adopting the Kubernetes model. The value of first-mover advantage has to be weighed against that of investing time and effort in the current state of tooling: as a travel company once told me, “we want to be the world’s best travel site, not the world’s best platform engineers.”
So, Kubernetes may be inevitable, but equally, it will become simpler, enabling organisations to apply the architecture to an increasingly broad set of scenarios. For organisations yet to make the step towards Kubernetes, now may still be a good time to run a proof of concept, though in some ways that ship has sailed: perhaps focus the PoC on what it means for working practices and structures, rather than on determining whether the concepts work at all.
Meanwhile, and perhaps most importantly, now is a very good moment for organisations to look at the scenarios where Kubernetes works best “out of the box”, working with providers and reviewing architectural patterns to deliver proven results against specific, high-value needs. These are likely to vary by industry and by domain (I could dig into this, but did I mention that I’m sitting on a plane? ;) ).

KubeCon Europe summary – Kubernetes might be a done deal, but that doesn’t mean it should be adopted wholesale before some of the peripheral detail is ironed out.
August 2022
08-05 – Achieve more with GigaOm
Achieve more with GigaOm

As we have grown substantially over the past two years, we are often asked who (even) is GigaOm, what the company does, how it differentiates, and so on. These are fair questions—many people still remember what we can call GigaOm 1.0, that fine media company born of the blogging wave.
We’ve been through the GigaOm 2.0 “GigaOm analyst firm” phase before deciding we wanted to achieve more. That decision put us on a journey to where we are today, ten times the size in headcount and still growing, and covering as many technology categories as the biggest analyst firms.
Fuelling our growth has been a series of interconnected decisions. First, we asked technology decision-makers —CIOs, CTOs, VPs of Engineering and Operations, and so on—what they needed and what was missing: unanimously, they said they needed strategic technical information based on practical experience, that is, not just theory. Industry analysts, it has been said, can be like music critics who have never played in an orchestra. Sure, there’s a place for that, but it leaves a gap for practitioner-led insights.
Second, building on this, we went through a test-and-learn phase to try various report models. Enrico Signoretti, now our VP of Product, spearheaded the creation of the Key Criteria report, which covers table stakes and differentiating features, and its companion GigaOm Radar, based on his experience in evaluating solutions for enterprise clients. As we developed this product set in collaboration with end-user strategists, we doubled down on the Key Criteria report as a how-to guide for writing a Request For Proposals.
Doing this led to the third strand, expanding this thinking to the enterprise decision-making cycle. Technology decision-makers don’t wake up one morning and say, “I think I need some Object Storage.”
Rather, they will be faced with a challenge, a situation, or some other scenario – perhaps existing storage products are not scaling sufficiently, applications are being rationalized, or a solution has reached the end of life. These scenarios dictate a need: often, the decision maker will need to define a response and then have to justify the spending.
This reality dictates the first product in the GigaOm portfolio, the GigaBrief, which is (essentially) a how-to guide for writing a business case. Once the decision maker has confirmed the budget, they can write an RFP leveraging the GigaOm Key Criteria and GigaOm Radar, and then consider running a proof of concept (PoC).
We have a how-to guide for these as well, based on our Benchmarks, field tests, and Business Technology Impact (BTI) reports. We know that, alongside thought leadership, decision-makers need hard numbers for costs and benefits, so we double down on these.
For end-user organizations, our primary audience, we have created a set of tools to make decisions and unblock deployments: our subscribers come to us for clarity and practitioner-led advice, which helps them work faster and smarter and achieve their goals more effectively. Our research is high-impact by design, which is why we have an expanding set of partner organizations using it to enable their clients.
Specifically, learning companies such as Pluralsight and A Cloud Guru use GigaOm reports to help subscribers set direction and lock down the solutions they need to deliver. By its nature, our how-to approach to report writing has created a set of strategic training tools which directly feed more specific technical training.
Meanwhile, channel partners and professional services companies such as Ingram Micro and Transformation Continuum use our research to help their clients lock down the solutions they need, together with a practitioner-led starting point for supporting frameworks, architectures, and structures. And we work together with media partners like The Register and The Channel Company to support their audiences with research and insights.
Technology vendors also benefit from end-user decision-makers who are better equipped to make decisions. Rather than generic market-making or long-listing potential vendors, our scenario-led materials directly impact buying decisions, taking procurement from a shortlist to a conclusion. Sales teams at systems, service, and software companies tell us how they use our reports when discussing options with prospects, not to evangelize but to explore practicalities and help conclude.
All these reasons and more enable us to say with confidence how end-user businesses, learning, channel and media companies, and indeed technology vendors are achieving more with GigaOm research. In a complex and constantly evolving landscape, our practitioner- and scenario-led approach brings specificity and clarity, helping organizations reach further, work faster and deliver more.
Our driving force is the value we bring; at the same time, we maintain a connection with our media heritage, which enables us to scale beyond traditional analyst models. We also continue to learn, reflect, and change — our open and transparent model welcomes feedback from all stakeholders so that we can drive improvements in our products, our approach, and our outreach.
This is to say, if you have any thoughts, questions, raves, or rants, don’t hesitate to get in touch with me directly. The Jon Collins virtual door, and my calendar, are always open.
08-30 – Five Top Tips for Radar Briefings
Five Top Tips for Radar Briefings

Inspired by Harley Manning’s excellent advice on vendor briefings for evaluations, I thought I would document some of my recent experiences. Let’s be realistic: GigaOm is not the gorilla in the analyst market. Plus, we have some curious differences from other analyst firms — not least that we major in practitioner-led evaluation, bringing in an expert rather than (as Chris Mellor points out) “a team of consultants”. Nothing wrong with either approach, as I have said before, they’re just different.
So, what would be my top tips for vendors looking to brief us for a Radar report?
1. Make it technical
At GigaOm we care less about market share or ‘positioning’, and more about what the product or solution actually does. Our process involves considerable up-front effort pulling together and peer reviewing a research proposal, following which (every time) we produce a Key Criteria report — for subscribers, this offers a how-to guide for writing an RFP.
By the time we’re onto the Radar, we’re mainly thinking, “Does it do the thing, and how well?” If we can get our technical experts in a virtual room with your technical experts, we can all get out of the way. See also: provide a demo.
2. Understand the scoring
Behind GigaOm’s model is a principle that technology commoditizes over time: this year’s differentiating product feature may be next year’s baseline. For this reason, we score against a general level, with two plusses given if a vendor delivers on a feature or quality. A vendor doing better than the rest will gain points (and we say why), and the converse is true. If we’re saying something, we need to be able to defend it — in this case, in the strengths and weaknesses in the report.
3. Make it defensible
Speaking of which, a vendor can make our lives simpler by telling us why a particular feature is better than everyone else’s. Sorry, we’re not looking for an easy ride, but to say what makes something special gives us something to talk about (as opposed to “but everyone thinks so,” etc). Note that customer proof points carry much more weight than general statements — if a customer says it to us directly, we’re far more likely to take it on board.
4. Tell us the scenarios
At GigaOm, we’re scenario-led — which means we’re looking at how technology categories address particular problems. Many vendors solve specific problems particularly well (note, I don’t believe there’s such a thing as a top-right shortlist of vendors to suit all needs). Often in briefings, I ask ‘magic’ questions like, “Why do your customers love you?” which cut through generalist website hype and focus on where the solution is particularly strong.
5. Focus on the goal
A Radar briefing shouldn’t be perceived as a massive overhead — we want to know what your product does, not how well your media-trained speakers can present. Once done, our experts will be able to complete their work, then run the resulting one-pager back past you for a fact check. For sure, we’d love as much information as you can provide, and we have an extensive set of questionnaires for that purpose.
I’ve just flicked back through Harley’s ten points, and there’s a lot in there about being respectful, aiming to hit dates, not arguing over every judgment, and so on. Wise words, which we get just as often, I wager. I also recognize that even as we have published schedules, methodologies, planned improvements, and so on, you also have your own challenges and priorities.
All of this means that together, our primary goals should be effectiveness, such that we are presenting you, the vendor, correctly with respect to the category, and efficiency, in that a small amount of effort in the right places can benefit all of us. Which probably means, let’s talk.
November 2022
11-09 – A Ray of Hope for Getting Into the Tech Industry
A Ray of Hope for Getting Into the Tech Industry
At the end of last week, I was fortunate to be attending Pluralsight’s EXP executive event in London. In case you don’t know, Pluralsight is an online course provider offering cloud computing and other technology courses to budding programmers, network and security engineers, and cloud architects — in general, helping people break into tech. The company is also a GigaOm partner as, ultimately, a lot of what we do is building tech skills amongst technology decision makers.
The event wasn’t about deep tech but rather the nature of learning in technology-related disciplines — for example, speakers included BT’s Director of Leadership, Learning, Talent, and Diversity, Wendy James. In the midst of it all, we were presented with some stats about the need for technology-based talent in the UK and beyond. In the simplest terms possible: we’re short of it. Like, seriously short. There are 500,000 tech job vacancies in the UK alone, and that number is likely to increase.
Mixed news for the tech industry perhaps — but thinking more broadly, just an hour after Wendy spoke, the Bank of England predicted the longest recession the UK has ever had, plus a rapid increase in unemployment. We have an online training company, a shortfall of skills, and an increasing number of people with no work. Now, I’m not the smartest person in the street, but even I can put two and two together and make something of this.
For sure, an opportunity exists for someone recently handed their papers to reskill and join the ranks of gainfully employed tech stars. But of course, it isn’t as simple as saying, “Oh dear, I’ve just lost my job; I know I’ll become an expert data scientist.” Neither is it inconceivable: across tech are tasks that would suit more administrative types or managers, people-facing and back-room roles, deeply nerdy engineers, and equally passionate creative brains.
And indeed, this doesn’t have to be all about tech. Wendy James pointed out that tech skills weren’t necessarily the hardest to acquire. Rather, it was softer, people-oriented abilities that needed more effort. Now, you could debate this point, but please do debate it with Wendy, who is working on this challenge day in, and day out.
I remember working with a neighbor who had been made redundant from his manual labor job a few years ago. Could he work in tech, he asked me. We talked about it, and shortly afterward, he applied to electronics retailer Dixons to join one of their tech support training schemes. He got in and never looked back.
He wasn’t young (he’s retired now), but sure, he was self-motivated, which helped. Equally, the bar to entry can be lower than you think — the ability to check a computer’s configuration or wire up a cable is already good, as is the ability to follow a standard set of instructions. I’m reminded of how Carphone Warehouse used to build training aids directly into their customer service scripts. Perhaps they still do.
I don’t want to be in any way patronizing, but all of this is to say you don’t have to be a rocket scientist to work in the tech world. It’s a bit like people who say, “I can’t sing,” even though singing is a fundamental human ability; similarly, saying, “I’m not technical enough,” is unlikely to be true in the real, multifaceted world of IT that now exists.
The opportunity is there. (Of course) it’s not as straightforward as simply deciding to retrain those made redundant with a new set of skills. Employers should stop hoping they will get fully formed programmers, network engineers, and helpdesk staff, and instead start looking at what people can bring at a human level, with soft skills. Then, bring in technical skills as part of a development program.
And equally, the industry should do what it can to demystify many tech-related areas. Programming, for example, requires a certain set of skills that may be (understandably) unfamiliar but can nevertheless be made fully approachable. Writing programs, or indeed reading them, isn’t that different from creating or reading recipes — if you can follow Gordon Ramsay, chances are you can make sense of reasonably-written computer code.
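If that sounds far-fetched, consider this deliberately recipe-like Python snippet. The helper steps are stubs, just enough to make it run, but the “method” reads top to bottom, one step per line, exactly as a recipe would:

```python
# Stub "kitchen" steps, just enough to make the recipe runnable.
def chop(item): return f"chopped {item}"
def saute(items, minutes): return f"sauteed {', '.join(items)} ({minutes} min)"
def simmer(base, stock_ml, minutes): return f"{base}, simmered in {stock_ml}ml stock ({minutes} min)"
def blend(soup): return f"blended: {soup}"

def make_tomato_soup(tomatoes, onion, stock_ml=500):
    prepared = [chop(item) for item in [onion] + tomatoes]
    base = saute(prepared, minutes=5)
    soup = simmer(base, stock_ml=stock_ml, minutes=20)
    return blend(soup)

print(make_tomato_soup(["tomato"] * 6, "red onion"))
```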
Perhaps it’s not for you but never say never — if you’ve been made redundant and don’t believe that you have what it takes to work an entry-level cloud or software development job, try testing that assumption with a more technical friend or colleague. Just as my neighbor did years ago, a simple question could be all it takes to get your foot in the door with a fulfilling and long-lasting career in tech.
11-09 – Now’s the Moment to be Thinking About Sovereign Cloud
Now’s the Moment to be Thinking About Sovereign Cloud
Sovereign Cloud was one of VMware’s big announcements at its annual VMware Explore Europe conference this year. Not that the company was announcing the still-evolving notion of sovereignty, but what it calls “sovereign-ready solutions.” Data sovereignty is the need to ensure data is managed according to local and national laws. This has always been important, so why has it become a thing now if it wasn’t three years ago?
Perhaps some of the impetus comes from General Data Protection Regulation (GDPR) compliance, or at least the limitations revealed since its arrival in 2016. It isn’t possible to define a single set of laws or regulations around data that can apply globally: different countries have different takes on what matters, move at different rates, and face different challenges. “Sovereignty was not specific to EMEA region but driven by it,” said Joe Baguley, EMEA CTO for VMware.
GDPR requirements are a subset of data privacy and protection requirements, but increasingly, governments are defining their own or sticking with what they already have. Some countries favor more stringent rules (Germany and Ghana spring to mind), and technology platforms need to be able to work with a multitude of policies rather than enforcing just one.
Enter the sovereign cloud, which is accelerating as a need even as it emerges as something concrete that organizations can use. In terms of the accelerating need, enterprises we speak to are talking of the increasing challenges faced when operating across national borders — as nations mature digitally, it’s no longer an option to ignore local data requirements.
At the same time, pressure is increasing. “Most organizations have a feeling of a burning platform,” remarked Laurent Allard, head of Sovereign Cloud EMEA for VMware. As well as regulation, the threat of ransomware is highly prevalent, driving a need for organizations to respond. Less urgent, but no less important, is the continuing focus on digital transformation — if ransomware is a stick, transformation offers the carrot of opportunity.
Beyond these technical drivers is the very real challenge of rapidly shifting geopolitics. The conflict in Ukraine has caused irrecoverable damage to the idea that we might all get along, sharing data and offering services internationally without risk of change. Citizens and customers—that’s us—need a cast iron guarantee that the confidentiality, integrity, and availability of their data will be protected even as the world changes. And it’s not just about people—industrial and operational data subjects also need to be considered.
It is worth considering the primary scenarios to which data protection laws and sovereignty need to apply. The public sector and regulated industries have broader constraints on data privacy, so organizations in these areas may well see the need to have “a sovereign cloud” within which they operate. Other organizations may have certain data classes that need special treatment and see sovereign cloud architecture as a destination for these. And meanwhile, multinational companies may operate in countries that impose specific restrictions on small yet important subsets of data.
Despite the rapidly emerging need, the tech industry is not yet geared up to respond — not efficiently, anyway. I still speak to some US vendors who scratch their heads when the topic arises (though the European Data Act and other regulatory moves may drive more interest). Hyperscalers, in particular, are tussling with how to approach the challenge, given that US law already imposes requirements on data wherever it may be in the world.
“These are early days when it comes to solutions,” says Rajeev Bhardwaj, VP of Cloud Provider Solutions at VMware, “There is no standard for sovereign clouds.” Developing such a thing will not be straightforward, as (given the range of scenarios) solutions cannot be one-size-fits-all. Organizations must define infrastructure and data management capabilities that fit their own needs, considering how they move data and in which jurisdictions they operate.
VMware has made some headway in this, defining a sovereign cloud stack with multiple controls, e.g., on data residency — it’s this which serves as a basis for its sovereign-ready solutions. “There’s work to be done. We’re not done yet,” says Sumit Dhawan, President of VMware. This work cannot exist in isolation, as the whole point of the sovereign cloud is that it needs to work across what is today a highly complex and distributed IT environment, whatever the organization’s size.
Sure, it’s a work in progress, but at the same time, enterprises can think about the scenarios that matter to them, as well as the aforementioned carrot and stick. While the future may be uncertain, we can all be sure that we’ll need to understand our data assets and classify them, set policies according to our needs and the places where we operate, and develop our infrastructures to be more flexible and policy-driven.
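As a sketch of what “policy-driven” might mean at its very simplest, consider the following toy residency check in Python: classify datasets, declare where each class may live, and flag deployments that break the rule. The dataset classes, regions, and rules are all invented for illustration.

```python
# Invented residency rules: which regions each data class may live in.
residency_rules = {
    "customer-pii": {"eu-west-1", "eu-central-1"},
    "telemetry": {"eu-west-1", "us-east-1"},
}

deployments = [
    {"dataset": "customer-pii", "region": "eu-central-1"},
    {"dataset": "customer-pii", "region": "us-east-1"},  # breaks the rule
    {"dataset": "telemetry", "region": "us-east-1"},
]

for d in deployments:
    allowed = residency_rules[d["dataset"]]
    status = "OK" if d["region"] in allowed else "VIOLATION"
    print(f"{d['dataset']} in {d['region']}: {status}")
```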
I wouldn’t go as far as saying that enterprises need a chief sovereignty officer, but they should indeed be embedding the notion of data sovereignty into their strategic initiatives, both vertically (as a singular goal) and horizontally, as a thread running through all aspects of business and IT. “What about data sovereignty?” should be a bullet point on the agenda of all digital transformation activity — sure, it is not a simple question to answer, but it is all the more important because of this.
11-15 – Can low code process automation platforms fix healthcare?
Can low code process automation platforms fix healthcare?
I was lucky enough to sit down with Appian’s healthcare industry lead, Fritz Haimberger. Fritz is someone who practices what he preaches — outside of his day job, he still works in his spare time as a medic and a firefighter in his hometown of Franklin, Tennessee (just outside Nashville). I’ve worked with various healthcare clients over the years, from hospitals to pharmaceutical firms and equipment manufacturers; I’ve also been involved in GigaOm’s low code tools and automation platforms reports. So, I was interested in getting his take on how this space has evolved since I last had my sleeves rolled up, back in 2016.
While we talked about a wide range of areas, what really caught my attention was the recognized, still huge, challenge being faced by healthcare organizations across the globe. “If you look at healthcare over 15 years, starting with electronic medical record systems — for so long, we’ve had a continued expectation that those implementations might cost 500 million dollars and might be implemented in 14-16 months. Reality has never been like that: repeatedly, it’s been three years down the road, a billion dollars plus in expense, sometimes with no end in sight,” said Fritz. “The notion of implementation time to value was blown away, and organizations resigned themselves to think that it’s just not possible to deliver in a timely manner.”
In part, this comes from legacy tech, but equally, it is down to underestimating the scale of the challenge. When I was working on clinical pathways for Deep Vein Thrombosis (DVT), what started as a simple series of steps inevitably grew in complexity — what about if the patient was already being treated on other drugs? What if blood tests returned certain, conflicting information? So many of these questions rely on information stored in the heads of clinicians, doctors, nurses, pharmacists, and so on.
The resulting impact is not only on data models and systems functionality but also the way in which information needs to be gathered. Keeping in mind that healthcare scenarios must, by their nature, be risk averse, it’s not possible to build a prototype via “fail fast” or “test and learn” — real patient lives may be involved. So, how can healthcare organizations square the circle between addressing unachievable expectations without having the “do it quick and cheap” option?
Enter low code app development, integration, process, and other forms of automation development platforms. Let’s work back from the clichéd twist in the tale and agree that this won’t be a magic digital transformation bullet. You only have to look at a technical architecture map of the NHS to realize that you’d need an entire squadron of magic rockets to even dent the surface. But several elements of the low code process automation platform approach (okay, a bit of a mouthful, so I’ll stick with automation platforms from here) map onto the challenges faced by healthcare organizations in a way that might actually make a difference.
First off, the low code development platforms are not looking to either directly replace or just integrate between existing systems. Rather, and given their heritage, they are aimed at accessing existing data to respond to new or changing needs. There’s an industry term – “land and expand” – which is largely about marketing but also helps from a technical perspective: unlike historical enterprise applications, which required organizations to adopt and adapt their processes (at vast cost), the automation platform approach is more about solving specific challenges first, then broadening use — without imposing external constraints on related development processes.
Second, the nature of software development with automation platforms plays specifically to the healthcare context. Whilst the environment is absolutely safety critical, it’s also very complex, with a lot of knowledge in the heads of health care professionals. This plays to a collaborative approach, one way or another — clinicians need to be consulted at the beginning of a project, but also along the way as clinical needs emerge. “The tribal knowledge breakdown is huge,” said Fritz. “With platforms such as Appian, professional developers, clinicians, and business owners can better collaborate on custom applications, so it’s bespoke to what they’re trying to achieve, in a quick iterative process.” Not only does this cut initial time to value down considerably – Fritz suggested 12-14 weeks – but also, it’s along the way that complexity emerges, and hence can be addressed.
Automation platforms align with the way it is now possible to do things but, at the same time, they are, inherently, platforms. This brings us to a third pillar: they can bake in the capabilities healthcare organizations need without these having to be bespoke — security hardening, mobile deployment, healthcare compliance, API-based integration, and so on. From experience, I know how complex these elements can be if they rely on other parts of the healthcare architecture, or have to be built bespoke or bought separately — the goal is to reduce the complexity of custom apps and dependencies rather than create it.
Perhaps automation platforms can, at the very least, unlock and unblock opportunities to make technology work for key healthcare stakeholders, from upper management to nursing staff and everyone in between. Of course, they can't work miracles; you will also need to keep on top of your application governance — thinking generally, automation platforms aren't always the best at version control, configuration and test management, and other ancillary activities.
Above all, when the platforms do what they do best, they are solving problems for people by creating new interfaces onto existing data and delivering new processes. "Honestly, if I'm looking at the individual, whether it's a patient in clinical treatment, a life sciences trial participant or an insured member – if we're improving their health outcomes, and easing the unnecessary burden on clinicians, scientists and others, that's what makes it worthwhile putting two feet on the floor in the morning and coming to work for Appian!"
Yes, platforms can help, but most of all, this is about recognizing that solving for business users is the answer: with this mindset, perhaps healthcare organizations really can start moving towards dealing with their legacy technical challenges to the benefit of all.
11-30 – Time Is Running Out For The “Journey To The Cloud”
Time Is Running Out For The “Journey To The Cloud”

Cloud is all, correct? Just as all roads lead to Rome, so all information technology journeys inevitably result in everything being, in some shape or form, “in the cloud.” So we are informed, at least: this journey started back in the mid 2000s, as application service providers (ASPs) gave way to various as-a-service offerings, and Amazon launched its game-changing Elastic Compute Cloud service, EC2.
A decade and a half later, and we're still on the road – nonetheless, the belief system that we're en route to some technologically superior nirvana pervades. Perhaps we will arrive one day at that mythical place where everything just works at ultra scale, and we can all get on with our digitally enabled existences. Perhaps not. We can have that debate, and in parallel, we need to take a cold, hard look at ourselves and our technology strategies.
This aspirational-yet-vague approach to technological transformation is not doing enterprises (large or small) any favors. To put it simply, our dreams are proving expensive. First, let's consider what is writ, in large letters, in front of our eyes.
Cloud costs are out of control
For sure, it is possible to spin up a server with a handful of virtual coppers, but this is part of the problem. “Cloud cost complexity is real,” wrote Paula Rooney for CIO.com earlier this year, in five words summarising the challenges with cloud cost management strategies – that it’s too easy to do more and more with the cloud, creating costs without necessarily realizing the benefits.
We know from our FinOps research the breadth of cost management tools and services arriving on the scene to deal with this rapidly emerging challenge.
(As an aside, we are informed by vendors, analysts, and pundits alike that the size of the cloud market is growing – but given the runaway train that cloud economics has become, perhaps it shouldn’t be. One to ponder.)
Procurement models for many cloud computing services, SaaS, PaaS, and IaaS, are still often based around pay-per-use, which isn’t necessarily compatible with many organizations’ budgeting mechanisms. These models can be attractive for short-term needs but are inevitably more expensive for the longer term. I could caveat this with “unless accompanied by stringent cost control mechanisms,” but evidence across the past 15 years makes this point moot.
One option is to move systems back in-house. As per a discussion I was having with CTO Andi Mann on LinkedIn, this is nothing new; what's weird is that the journey to the cloud is always presented as one-way, with such events as the exception. Which brings us to a second point: we are still wed to the notion that the cloud is a virtual place at which we shall arrive at some point.
Spoiler alert: it isn’t. Instead, technology options will continue to burst forth, new ways of doing things requiring new architectures and approaches. Right now, we’re talking about multi-cloud and hybrid cloud models. But, let’s face it, the world isn’t “moving to multi-cloud” or hybrid cloud: instead, these are consequences of reality.
“Multi-cloud architecture” does not exist in a coherent form; rather, organizations find themselves having taken up cloud services from multiple providers—Amazon Web Services, Microsoft Azure, Google Cloud Platform, and so on—and are living with the consequences.
Similarly, what can we say about hybrid cloud? The term has been applied to either cloud services needing to integrate with legacy applications and data stores; or the use of public cloud services together with on-premise, ‘private’ versions of the same. In either case, it’s a fudge and an expensive one at that.
Why expensive? Because we are, once again, fooling ourselves that the different pieces will “just work” together. At the risk of another spoiler alert, you only have to look at the surge in demand for glue services such as integration platforms as a service (iPaaS). These are not cheap, particularly when used at scale.
Meanwhile, we are still faced with that age-old folly that whatever we are doing now might in some way replace what has gone before. I have had this conversation so many times over the decades: the task is to build something new, then migrate and decommission the older systems and applications. I wouldn't want to put a number on it, but my rule of thumb is that it happens less often than it doesn't. The result is more to manage, not less, and more to integrate and interface.
Enterprise reality is a long way from cloud nirvana
The reality is, despite cloud spend starting to grow beyond traditional IT spend (see above on maybe it shouldn’t, but anyway), cloud services will live alongside existing IT systems for the foreseeable future, further adding to the hybrid mash.
As I wrote back in 2009, “…choosing cloud services [is] no different from choosing any other kind of service. As a result, you will inevitably continue to have some systems running in-house… the result is inevitably going to be a hybrid architecture, in which new mixes with old, and internal with external.”
It’s still true, with the additional factor of the law of diminishing returns. The hyperscalers have monetized what they can easily, amounting to billions of dollars in terms of IT real estate. But the rest isn’t going to be so simple.
As cloud providers look to harvest more internal applications and run them on their own servers, they move from easier wins to the more challenging territory. The fact that, as of 2022, AWS has a worldwide director of mainframe sales is a significant indicator of where the buck stops, but mainframes are not going to give up their data and applications that easily.
And why should they if the costs of migration increase beyond the benefits of doing so, particularly if other options exist to innovate? One example is captured by the potentially oxymoronic phrase ‘Mainframe DevOps’. For finance organizations, being able to run a CI/CD pipeline within a VM inside a mainframe opens the door to real-time anti-fraud analytics. That sounds like innovation to me.
Adding to all this is the new wave of “Edge”. Local devices, from mobile phones to video cameras and radiology machines, are increasingly intelligent and able to process data. See above on technology options bursting forth, requiring new architectures: cloud providers and telcos are still tussling with how this will look, even as they watch it happen in front of their eyes.
Don't get me wrong, there's lots to like about the cloud. But it isn't the one ring to rule them all. Cloud is part of the answer, not the whole answer. But seeing cloud – or cloud-plus – as the core is having a skewing effect on the way we think about it.
The fundamentals of hosted service provision
There are three truths in technology – first, it’s about the abstraction of physical resources; second, it’s about right-sizing the figurative architecture; and third, that it’s about a dynamic market of provisioning. The rest is supply chain management and outsourcing, plus marketing and sales.
The hyperscalers know this, and have done a great job of convincing everyone that the singular vision of cloud is the only show in town. At one point, they were even saying that it was cheaper: AWS CEO Andy Jassy said in 2015*: "AWS has such large scale, that we pass on to our customers in the form of lower prices."
By 2018, AWS was stating, “We never said it was about saving money.” – read into that what you will, but note that many factors are outside the control even of AWS.
“Lower prices” may be true for small hits of variable spending, but it certainly isn’t for major systems or large-scale innovation. Recognizing that pay-per-use couldn’t fly for enterprise spending, AWS, GCP, and Azure have introduced (varyingly named) notions of reserved instances—in which virtual servers can be paid for in advance over a one- or three-year term.
In major part, they’re a recognition that corporate accounting models can’t cope with cloud financing models; also in major part, they’re a rejection of the elasticity principle upon which it was originally sold.
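To put rough numbers on the difference, here is a minimal sketch in Python. The hourly rates are hypothetical placeholders rather than any provider's actual price list, and real bills add storage, egress and support on top; the point is the shape of the comparison, not the figures.

HOURS_PER_YEAR = 24 * 365

# Hypothetical hourly rates for one mid-sized virtual machine (placeholders, not real prices).
ON_DEMAND_RATE = 0.10   # pay-per-use, $/hour, only paid while running
RESERVED_RATE = 0.065   # effective $/hour under a one-year commitment, paid regardless of use

def on_demand_cost(hours_used: int) -> float:
    """Pay-per-use: you only pay for the hours you actually run."""
    return ON_DEMAND_RATE * hours_used

def reserved_cost() -> float:
    """Reserved or committed: the full year is paid for whether or not it is used."""
    return RESERVED_RATE * HOURS_PER_YEAR

# A steady, always-on workload: the commitment is clearly cheaper.
print(round(on_demand_cost(HOURS_PER_YEAR), 2))   # 876.0
print(round(reserved_cost(), 2))                  # 569.4

# A workload needed roughly one day in five: pay-per-use wins,
# because the reservation would be paid for either way.
print(round(on_demand_cost(int(HOURS_PER_YEAR * 0.2)), 2))  # 175.2

Run around the clock, the committed rate wins; used only occasionally, pay-per-use wins. Most enterprise estates contain plenty of both, which is why neither model alone keeps the bill predictable.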
My point is not to rub any provider’s nose in its historical marketing but to return to my opener – that we’re still buying into the notional vision, even as it continues to fragment, and by doing so, the prevarication is costing end-user enterprises money. Certain aspects, painted as different or cheaper, are nothing of the sort – they’re just managed by someone else, and the costs are dictated by what organizations do with what is provided, not its list price.
Shifting the focus from cloud-centricity
So, what to do? We need a view that reflects current reality, not historical rhetoric or a nirvanic future. The present and forward vision of massively distributed, highly abstracted and multi-sourced infrastructure is not what vendor marketing says it is. If you want proof, show me a single picture from a hyperscaler that shows the provider living within some multi-cloud ecosystem.
So, it’s up to us to define it for them. If enterprises can’t do this, they will constantly be pulled off track by those whose answers suit their own goals.
So, what does it look like? In the major part, we already have the answer – a multi-hosted, highly fragmented architecture is, and will remain the norm, even for firms that major on a single cloud provider. But there isn’t currently an easy way to describe it.
I hate to say it, but we’re going to need a new term. I know, I know, industry analysts and their terms, eh? But when Gandalf the Grey became Gandalf the White, it meant something. Labels matter. The current terminology is wrong and driving this skewing effect.
Having played with various ideas, I’m currently majoring in multi-platform architecture – it’s not perfect, I’m happy to change it, but it makes the point.
A journey towards a more optimized, orchestrated multi-platform architecture is a thousand times more achievable and valuable than some figurative journey to the cloud. It embraces and encompasses migration and modernization, core and edge, hybrid and multi-hosting, orchestration and management, security and governance, cost control, and innovation.
But it does so seeing the architecture holistically, rather than (say) seeing cloud security as somehow separate from non-cloud security, or cloud cost management as any different from outsourcing cost optimization.
Of course, we may build things in a cloud-native manner (with containers, Kubernetes and the like), but we can do so without seeing resulting applications as (say, again) needing to run on a hyperscaler rather than a mainframe. In the multi-platform architecture, all elements are first-class citizens, even if some are older than others.
That embraces the breadth of the problem space and isn’t skewed towards an “everything will ultimately be cloud,” nor a “cloud is good, the rest is bad,” nor a “cloud is the norm, edge is the exception” line. It also puts paid to any idea of the distorted size of the cloud market. Cloud economics should not exist as a philosophy, or at the very least, it should be one element of FinOps.
There’s still a huge place for the hyperscalers, whose businesses run on three axes – functionality, engineering, and the aforementioned cost. AWS has always sought to out-function the competition, famous for the number of announcements it would make at re:Invent each year (and this year’s data-driven announcements are no exception). Engineering is another definitive metric of strength for a cloud provider, wrapping scalability, performance and robustness into the thought of: is it built right?
And finally, we have the aforementioned cost. There’s also a place for spending on cloud providers, but cost management should be part of the Enterprise IT strategy, not locking the stable door after the rather expensive and hungry stallion has bolted.
Putting multi-platform IT strategy into the driving seat
Which brings us to the conclusion – that such a strategy should be built on the notion of a multi-platform architecture, not a figurative cloud. With the former, technology becomes a means to an end, with the business in control. With the latter, organizations are essentially handing the keys to their digital kingdoms to a third party (and help yourself to the contents of the fridge while you are there).
If “every company is a software company,” they need to recognize that software decisions can only be made with a firm grip on infrastructure. This boils down to the most fundamental rule of business – which is to add value to stakeholders. Entire volumes have been written about how leaders need to decide where this value is coming from and dispense with the rest (cf Nike and manufacturing vs branding, and so on and so on).
But this model only works if “the rest” can be delivered cost-effectively. Enterprises do not have a tight grip on their infrastructure providers, a fact that hyperscalers are content to leverage and will continue to do so as long as end-user businesses let them.
Ultimately, I don't care what term is adopted. But we need to be able to draw a coherent picture that is centred on enterprise needs, not cloud provider capabilities, and it'll really help everybody if we all agree on what it's called. To stick with current philosophies helps one set of organizations alone, however many times they reel out Blockbuster or Kodak as worst-case examples (see also: we're all still reading books).
Perhaps, we are in the middle of a revolution in service provision. But don’t believe for a minute that providers only offering one part of the answer have either the will or ability to see beyond their own solutions or profit margins. That’s the nature of competition, which is fine. But it means that enterprises need to be more savvy about the models they’re moving towards, as cloud providers aren’t going to do it for them.
To finish on one other analyst trick, yes, we need a paradigm shift. But one which maps onto how things are and will be, with end-user organizations in the driving seat. Otherwise, their destinies will be dictated by others, even as enterprises pick up the check.
*The full quote, from Jassy’s 2015 keynote, is: “There’s 6 reasons that we usually tell people, that we hear most frequently. The first is, if you can turn capital expense to a variable expense, it’s usually very attractive to companies. And then, that variable expense is less than what companies pay on their own – AWS has such large scale, that we pass on to our customers in the form of lower prices.”
December 2022
12-20 – How GitOps is Driving Cloud Native at Scale
How GitOps is Driving Cloud Native at Scale
GitOps is great, isn't it? What's that, I hear you ask. Simply put, in these days when all infrastructure can be virtualized, GitOps is about managing information about what that infrastructure needs to look like (written as a text file), alongside the application that's going to run on it. Hold onto that word 'managing'.
The concept of infrastructure-as-code managed in the same way as software code may be simple, but its consequences are powerful. Thence GitOps, the term coined by Alexis Richardson, CEO and co-founder of Weaveworks: 'git' being the code repository of choice for cloud-native applications, and 'ops' because, well, isn't everything about that these days?
Weaveworks’ own GitOps workflow solution, FluxCD, has just graduated from the incubator factory that is the Cloud Native Computing Foundation (CNCF) – no mean feat given the hoops through which it will have had to jump. “We had security auditors all over the code,” said Alexis when I caught up with him about it.
FluxCD is not the only kid on the block: ArgoCD for example, led by teams at Intuit, Codefresh, and others, has also achieved CNCF graduation. Two competing solutions aren’t a problem – they work in different ways and suit different use cases.
And what of those powerful consequences? Well. Driving GitOps work is the clear and present need to manage configuration data in massively distributed, potentially highly changeable application environments. In the increasingly containerized space of cloud-native applications, this same driver spawned orchestration engines such as Docker Swarm and Kubernetes, as well as the need for cloud observability tooling – a.k.a. how the heck do we identify a problem when we don't even know where our software is running?
In the cloud native space, this generally means that any applications that have achieved their goals of delivering at scale – cue examples that follow the Netflix architecture – need to keep on top of how they deploy their software and then how they manage it at the same scale. Do so and you can achieve great things.
For example, the manifestation of all three is vital to scenarios such as machine to machine communications and driverless cars. In the telecoms space, in which the latest generation of wireless (5G) is cloud-native by design, the ability to deliver software and configuration updates in parallel and at scale only becomes possible by adopting such principles as GitOps. “You can update forty thousand telco towers without touching them. That just wouldn’t be possible otherwise,” remarks Alexis, referring to Weaveworks’ partnership with Deutsche Telekom.
GitOps is neat. However, there's a lot to unpack in the phrase "manage configuration data" from the fifth paragraph above: this isn't all about moving left to right, from application/infrastructure design to deployment and then into operations. Close to my heart, and something I've written about before, is an issue at the heart of all things DevOps – that, in our drive to innovate at speed, we have sacrificed our ability to manage what we have created.
This inability to close the DevOps infinity loop can be likened to a firehose spluttering out trace data, incident reports, user experience metrics and the like, showering the development side of the house with bits and pieces of information without any real prioritization or controls. It’s a mess, often meaning (I am told, anecdotally) that developers don’t know what to work on next in terms of fixes, so they just get on with what they were going to do anyway, such as new functionality.
Elsewhere I’ve talked about the governance gap between innovation strategy (“Let’s build some cloud native stuff”) and delivery. It’s a reason why I latched onto Value Stream Management early on as a way of building visibility across the pipeline; it’s also why I was keen to learn more about Atlassian’s move squarely into the IT service management space.
GitOps solves for the governance gap, not by adding dashboards and controls – at least, not by themselves. Rather, a fundamental principle of GitOps is that configuration information is pushed in the same way as code and then not tampered with post-deployment, unless it can’t be helped.
These two concepts are enshrined in the heart of GitOps tooling, as otherwise it’s just stuff that I bet looks good on a whiteboard. From the Open GitOps site, the full set of principles is as follows:
- Declarative – a system needs to be documented in advance through declared statements, rather than having to discern the system from its runtime configuration.
- Versioned and Immutable – this is the bit about storing these infrastructure declarations alongside application code, in a version-controlled repository such as git.
- Pulled Automatically – now we're talking about how the desired system is always built based on its declared configuration rather than by tinkering.
- Continuously Reconciled – this is the coolest and most important bit: if you do go and tweak the runtime configuration, the tooling should detect the change and trigger a fix (a minimal sketch of this loop follows below).
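To make the reconciliation idea concrete, here is a minimal, illustrative sketch in Python. It is not FluxCD or ArgoCD code; the function names and the way state is represented are assumptions for illustration only. Real tooling reads declared state from the git repository and observed state from the cluster API, but the loop is the same shape.

# Illustrative sketch of a GitOps reconciliation loop - not FluxCD or ArgoCD code.
# Desired state comes from the versioned repository; observed state comes from
# the running environment. Any drift raises an alert and is corrected.

def get_desired_state() -> dict:
    """Read the declared configuration from the git repository (the source of truth)."""
    # Hypothetical example; real tooling parses this from versioned manifests.
    return {"web": {"replicas": 3, "image": "shop:1.4.2"}}

def get_observed_state() -> dict:
    """Query the runtime environment for what is actually deployed."""
    # Hypothetical placeholder; a real agent queries the cluster API for live state.
    return {"web": {"replicas": 2, "image": "shop:1.4.2"}}

def apply(component: str, spec: dict) -> None:
    """Push the declared spec back onto the runtime, overriding manual tweaks."""
    print(f"reconciling {component} -> {spec}")

def reconcile_once() -> None:
    desired = get_desired_state()
    observed = get_observed_state()
    for component, spec in desired.items():
        if observed.get(component) != spec:
            # Drift detected: alert, then converge the runtime back to the declaration.
            print(f"ALERT: drift detected in {component}")
            apply(component, spec)

if __name__ == "__main__":
    reconcile_once()  # real operators run this continuously, on a schedule or on events

The useful property is that a manual tweak to the running system does not survive: the declaration in git remains the single source of truth, and every divergence is surfaced rather than silently accumulating.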
Tools such as FluxCD and ArgoCD enact these principles. Fascinatingly, they work with the fact that engineers aren't going to want to slow down how they build stuff; they just enforce that you can't tamper with it once it's done – and if you do, an alert will be raised. This can cause pushback from people who want to enact changes on the running system rather than changing the source of truth, says Alexis. "People say there's high latency; they often haven't set their system up right."
I'm making this point as clearly and directly as I can, because of the dangers of (can I call it) GitOps-washing. Just delivering on the first two principles above, or simply storing infrastructure-as-code information in git, does not mean GitOps is being done. Either it's a closed loop with alert-driven configuration drift identification and reconciliation, or it's just another pipeline.
Nor is this merely about principles; it is also about benefits. That point earlier about rolling out updates to forty thousand telco towers? That's only possible if the sources of deployment friction are minimized or removed altogether, and if the resulting environment can be operationally managed based on a clear-as-possible understanding of what it looks like. "There's no other operating model that really scales," remarks Alexis, and he's right.
Ultimately this goes to the heart of what it means to be agile in the digital world. Agility is not about controlled chaos or breaking things without ever really creating them: it succeeds with ways of working and accompanying tooling that aligns with the needs of innovation at scale. Yes, GitOps is great, but only if all its facets are adopted wholesale – GitOps lite is no GitOps at all.
12-20 – The Benefits of Participating in GigaOm Radars
The Benefits of Participating in GigaOm Radars
We've spoken to over 800 vendors over the past year so that we can get input into our Radar reports, our evaluations of the industry and the market categories we cover. There is no cost to participate in a Radar report; however, it does take time and effort to brief our analysts, respond to questionnaires, and fact check our materials. So, the valid question is, "what's in it for me?"
Some benefits for vendors:
Technology solutions respond to a given problem or scenario. Often, highly needed solutions are not being considered because "megatrends" take up the oxygen; this is often the case for less cool areas such as IT operations or data governance, or newer areas like value stream management and FinOps. As we cover the solution category, we are essentially making a case for it — and your solution can benefit from the increased visibility this brings.
As well as our subscription base, GigaOm and its analysts have over half a million followers across social media. Readers of our reports number in the thousands — 1500-plus downloads is typical for a report. And our analysts are increasingly quoted in the press. If we review your product or service, you will pick up a public mention for being in the category. See it as free advertising.
Brief one of our analysts, and you can expect an interactive dialogue with a technical and market expert. The more open you can be in this part of the process, the more honest we can also be, driving a conversation about current product, market fit, roadmap and so on. Our evaluations cover strengths and weaknesses, which we will share so you can see where and how to improve.
We’re not looking to mark down a product or a solution, but in some cases, we have no choice – for example if a feature is not clearly described or documented in publicly available information. Most vendors bring something of value to the party, but if this is not clear to us, then potentially, it will not be clear to your prospects. An interactive dialogue cuts through this, giving you the credit you deserve.
A repeating pattern, for us, is when a vendor tells us they are too busy to participate in our research. Then, when we do our own research and send it for a fact check, the vendor requests a briefing at the last minute to run through any revisions. Sure, we may have got some elements wrong (but then, so would an end-user trying to research you). The point is, why not speak to us first, so we can do the job more efficiently?
There we have it. We’re always looking to improve what we do, and we welcome your thoughts on that. Questionnaires can always be a pain, for example, but they should be seen as the means to an end — which is presenting your solution as well as possible. If you want the visibility, feedback and results, then come on in, let’s work together on this. Or, if you’re inundated with requests by firms similar to us, and you can’t cope with them all, let’s talk about that: we want to be the signal, not the noise. Either way, we’re here for any questions you may have.
https://www.youtube.com/watch?v=VQKUH8IGhT8
12-22 – In Software Development, Address Complexity and the Rest Will Follow
In Software Development, Address Complexity and the Rest Will Follow
Where is DevOps going? Is it ‘dead’ as some are suggesting, to be replaced by other disciplines such as platform engineering? I would proffer that while it was never as simple as that, now is as good a moment as any to reflect on approaches such as those discussed in DevOps circles. So, let’s consider what is at their heart, and see how they can be applied to delivering software-based innovation at scale.
A bit of background. My first job, three decades ago, was as a programmer; I later ran software tools and infrastructure for application development groups; I went on to advise some pretty big organizations on how to develop software, and how to manage data centers, servers, storage, networking, security and all that. Over that time I've seen a lot of software delivered successfully, and a not-insignificant amount go off the rails, be superseded or not fit the bill.
Interestingly, even though I have seen much aspiration in terms of better ways of doing things, I can't help feeling we are still working out some of the basics. DevOps itself came into existence in the mid-Noughties, as a way of breaking out of older, slower models. Ten years before that, however, I was already working at the forefront of 'the agile boom', as a Dynamic Systems Development Method (DSDM) consultant.
In the mid-Nineties, older, ponderous approaches to software production, with two-year lead times and no guarantees of success, were being reconsidered in the light of the rapidly growing Internet. And before that, Barry Boehm’s Spiral methods, Rapid Application Development and the like offered alternatives to Waterfall methodologies, in which delivery would be bogged down in over-specified requirements (so-called analysis paralysis) and exhausting test regimes.
No wonder software development gurus such as Barry B, Kent Beck and Martin Fowler looked to return to the source (sic) and adopt the JFDI approach that continues today. The idea was, and remains simple: take too long to deliver something, and the world will have moved on. This remains as aspirationally true as ever — the goal was, is, and continues to be about creating software faster, with all the benefits of improved feedback, more immediate value and so on.
We certainly see examples of success, so why do these feel more akin to delivering a hit record or killer novel, than business as usual? Organizations across the board look hopefully towards two-pizza teams, SAFe Agile principles and DORA metrics, but still struggle to make agile approaches scale across their teams and businesses. Tools should be able to help, but (as I discuss here) can equally become part of the problem, rather than the solution.
So, what’s the answer? In my time as a DSDM consultant, my job was to help the cool kids do things fast, but do things right. Over time I learned one factor that stood out above all others, that could make or break an agile development practice: complexity. The ultimate truth with software is that it is infinitely malleable. Within the bounds of what software can enable, you really can write anything you want, potentially really quickly.
We can thank Alan Turing for recognising this as he devised his eponymous, paper-tape-based machine, upon which he based his theory of computation. Put simply, the Turing Machine can (in principle) run any program that is mathematically possible; not only that, but this includes a program that represents how any other type of computer works.
So you could write a program representing a Cray Computer, say, spin that up on an Apple Mac, and on it, run another that emulates an IBM mainframe. Why you’d want to is unclear, but for a fun example, you can go down a rabbit hole finding out the different platforms the first-person shooter game Doom has been ported to, including itself.
Good times. But the immediacy of infinite possibility needs to be handled with care. In my DSDM days I learned the power of the Pareto principle, or in layperson's terms, "let's separate out the things we absolutely need from the nice-to-haves; they can come later." This eighty-twenty principle is as true and necessary as ever, as the first danger of being able to do everything now is to try to do it all, all at once.
The second danger is not logging things as we go. Imagine you are Theseus, descending to find the minotaur in the maze of caverns beneath. Without pausing for breath, you travel down many passageways before realizing they all look similar, and you no longer know which ones to prioritize for your next build of your cloud-native mapping application.
Okay, I’m stretching the analogy, but you get the point. In a recent online panel I likened developers to the Sorcerer’s Apprentice — it’s one thing to be able to make a broom at will, but how are you going to manage them all? It’s as good an analogy as any, to reflect how simple it is to create a software-based artifact, and to illustrate the issues created if each is not at least given a label.
But here’s the irony: the complexity resulting from doing things fast without controls, slows things down to the extent that it kills the very innovation it was aiming to create. In private discussion, I’ve learned that even the poster children of cloud-native mega-businesses now struggle with the complexity of what they have created — good for them for ignoring it while they established their brand, but you can only put good old-fashioned configuration management off for so long.
I’ve started writing about the ‘governance gap’ between the get-things-done world, and the rest. This works in two ways, first that things are no longer got done; and second, that even when they are, they don’t necessarily align with what the business, or its customers, actually need — call this the third danger of doing things in a rush.
When the term Value Stream Management first started to come into vogue three years ago, I didn’t adopt it because I wanted to jump on yet another bandwagon. Rather, I had been struggling with how to explain the need to address this governance gap, at least in part (DevSecOps and the shift-left movement are also on the guest list at this party). VSM came at the right time, not just for me but for organizations that already realised they couldn’t scale their software efforts.
VSM didn’t come into existence on a whim. It emerged from the DevOps community itself, in response to the challenges caused by its absence. This is really interesting, and offers a hook to any senior decision maker feeling out of their depth when it comes to addressing the lack of productivity from its more leading-edge software teams.
Step aside, enterprise imposter syndrome: it's time to bring some of those older wisdoms, such as configuration management, requirements management and risk management, to bear. It's not that agile approaches were wrong, but they do need such enterprise-y practices from the outset, or any benefits will quickly unravel. While enterprises can't suddenly become carefree startups, they can weave traditional governance into newer ways of delivering software.
This won’t be easy, but it is necessary, and it will be supported by tools vendors as they, too, mature. We’ve seen VSM go from one of several three-letter-acronyms addressing management visibility on the development pipeline, to becoming the one the industry is rallying around. Even as a debate develops between its relationship with Project Portfolio Management (PPM) from top-down (as illustrated by Planview’s acquisition of Tasktop), we are seeing increased interest in software development analytics tools coming from bottom-up.
Over the coming year, I expect to see further simplification and consolidation across the tools and platform space, enabling more policy-driven approaches, better guardrails and improved automation. The goal is that developers can get on and do the thing with minimal encumbrance, even as managers and the business as a whole feel the coordination benefit.
But this will also require enterprise organizations—or more specifically, their development groups—to accept that there is no such thing as a free lunch, not when it comes to software anyway. Any approach to software development (agile or otherwise) requires developers and their management to keep tight hold of the reins on the living entities they are creating, corralling them to deliver value.
Do I think that software should be delivered more slowly, or do I favor a return to old-fashioned methodologies? Absolutely not. But some of the principles they espouse were there for a reason. Of all the truths in software, recognise that complexity will always exist, and it needs to be managed. Ignore this at your peril; you're not being a stuffy old bore by putting software delivery governance back on the table.
12-22 – Why DevOps is failing: It’s Not You, It’s The Tools
Why DevOps is failing: It’s Not You, It’s The Tools
We all know the adage — "Bad workers blame their tools." But here are five reasons why the software tools involved in DevOps in general, and the CI/CD pipeline in particular, have become part of the problem rather than the solution.
1. They take too long to deploy
This, it must be said, is a massive irony. When software development started down the agile path in the last millennium, it was to break with waterfall models – not because these were wrong per se, but because they were too slow. Just as a two-to-three year development cycle is far too long as the (tech) world will have changed, so is a similar period to deploy a tool across an enterprise.
As well as the fundamental timescales, decision making attention spans simply don’t stretch that far. The result is that tools are often half-deployed before the next wave of tools comes in, adding to the complexity rather than reducing it.
2. They are still being sold as magic bullets
In just about every solution-oriented webinar I have ever been on, the “but it’s not a magic bullet” line is used (fair enough, if nobody else says it, I will). Nonetheless, we’re still being told that software tools are the answer – they can transform how you deliver software, they have everything you need, and so on.
It would genuinely be lovely if these marketing statements were true, but they are only partially correct. Unlike financial instruments, however, tools vendors are not required to include disclaimers on their front pages – “Success requires customer due diligence and change management,” for example.
3. New ones appear all the time
This is as ironic as 1. A reasonably standard path is when smart engineers working for a bigger company express their frustration with their own pipelines and end up building a cool solution. Equally smart, they realize they’re on to something, form a startup, find a couple of customers and go get some investment, spending not unreasonable sums on marketing and, indeed, analyst advisory.
In the DevOps space, which is inevitably full of developers, this situation is more common than in (say) networking infrastructure – people don't generally start telecommunications companies as side projects. The trouble is, this falls into the trap of assuming that nobody else out there has ever solved the same problem, creating additional routes up the same mountain to work through.
4. They’re sold without controls
In my experience of deploying software development tools (which isn’t bad), they tend to fall into two camps. Some are tactical – a little performance testing widget, dashboard or integration capability. The rest, almost to a fault, have some kind of strategic expectation on how things are done. Development environment managers need teams to work with the notion of environments. Testing tools need a testing methodology.
Whilst many tools may have an associated framework, way of thinking or approach, they don’t often lead with this. However, let’s be clear, all will require management-level changes in how things are done, or even higher. Exceptions exist – some vendors sell at board level – but see also 5.
5. They're sold to, and bought by, engineers
This does have overlaps with 4, but it is more about the go-to-market, which is often a freemium or open-source-plus model. Essentially tools are presented as tactical, in the direct knowledge that sooner or later, they will have to be perceived as strategic.
In the industry, this sales model is called “land and expand” – just get in any which way, and grow from there. However, the reality is more “land, expand and then start to hit problems” – organizations end up with pockets of tooling deployed in different ways, and a very fragmented environment. Vendors, as well, can claim customer logos but then struggle to turn tactical deployments into more strategic customers.
All of these issues can be summed up as, “Strategic tools, sold and bought tactically.” I’m not going to point the finger exclusively at vendors (though I am reminded of a conversation I had about sales tactics, in a different domain. Me: “Why do you do it that way, it’s not nice.” Them: “Because it works.”).
And then newer capabilities, such as feature flags, are presented as making things even better, when in fact they're the opposite. I think feature flags are great, for the record, but sold and bought without controls, they are a route to even more fragmentation and despair.
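For what it's worth, the mechanism itself is trivial, which is exactly why it spreads so easily. Here is a minimal, hypothetical sketch in Python; the flag names, owners and removal dates are invented, and the stale-flag check stands in for the kind of control that is usually missing when flags are rolled out tactically.

# Minimal, hypothetical feature-flag sketch. Flag names and defaults are invented.
# The ease of adding a flag is the point - and also the governance problem:
# without an owner and a removal date, stale flags accumulate indefinitely.

FLAGS = {
    "new_checkout_flow": {"enabled": False, "owner": "payments-team", "remove_by": "2023-03-31"},
    "dark_mode":         {"enabled": True,  "owner": "web-team",      "remove_by": "2023-01-15"},
}

def is_enabled(name: str) -> bool:
    """Look up a flag; unknown flags default to off."""
    return FLAGS.get(name, {}).get("enabled", False)

def stale_flags(today: str) -> list:
    """Flags past their agreed removal date - the control that is usually missing."""
    return [name for name, meta in FLAGS.items() if meta["remove_by"] < today]

if __name__ == "__main__":
    if is_enabled("new_checkout_flow"):
        print("serving the new checkout flow")
    else:
        print("serving the existing checkout flow")
    print("flags overdue for removal:", stale_flags("2023-02-01"))

Without that owner and removal date, every flag added in the name of faster delivery becomes one more permanent branch in the codebase for someone else to reason about.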
Is there an answer? Given that software tools vendors are hardly going to adopt a “let’s sell less and qualify out any organization that isn’t organized enough to use our stuff” approach, the buck has to stop with end-user organizations, specifically with their engineering teams and how they are managed. Whilst vendors need to be held to account, it takes two to tango.
The clue is with the word “tools” itself. We need to stop thinking about tools like we might see screwdrivers and mallets, and start seeing them as we might manufacturing systems – we’re building a software factory, not a hipster-esque workshop.
Organizations need to see their internal software supply chains in the same way they might a fabrication plant (third irony alert – that's exactly the point made in The Phoenix Project, written a decade ago).
This also directly means restricting developers from making unquestioned tooling decisions. I’m sorry, but I don’t fully buy the “developers are the new kingmakers” line, as I speak to too many operations and infrastructure people who have to shovel up the piles of manure they create if left unchecked.
We all need to be protected against ourselves – and the consequence of an anything-goes-in-the-name-of-innovation culture is a series of fragmented fiefdoms, not great empires. The “prototype/PoC becomes the platform” issue is as true for tooling as it is for bespoke software if time is restricted, which it inevitably is.
From a vendor management perspective, this means focusing on a smaller list of vendors, potentially giving them more money, and working with them to understand the cultural changes required to adopt their capabilities fully.
And enterprise leaders, as long as you allow the situation to persist, you are encouraging inefficiency and waste. Good things only come out of chaos by exception. On the upside, and particularly relevant in these recessionary times, this fragmentation creates major opportunities to reduce waste and increase efficiency, releasing funds for innovation.
Above all, we have created a jungle by not paying attention up front. It’s never too late to start tackling this, but bluntly, if all businesses are software businesses, they need to start acting like it.
12-23 – What do GigaOm analysts see as the big trends in 2023?
What do GigaOm analysts see as the big trends in 2023?
This week I spoke to a number of GigaOm analysts about how they see 2023. Costs are the driver, with consolidation, better architectures and more governance following, they told me – while this will cause a wave of uncertainty, you can be taking steps now to get ahead. Let's tune into what they said, and do get in touch if you have any thoughts.
Cloud cost management is driving strategy
To start, organizations are looking to get on top of IT costs, leading with this rather than assuming cloud architectures will automatically be better value. Says Enrico Signoretti, “Everyone is looking for cost savings and re-assessing their cloud strategy for this – looking at flexibility vs predictability, public Cloud vs on-premises. They’re thinking twice before pushing stuff in the Cloud just for the sake of it, and repatriating data when necessary.”
Agrees Iben Rodriguez, “Everyone is trying to tighten the belt, being more frugal, so budgets are more limited. Instead of just spending money on everything, we have to choose wisely.”
A term that has emerged over the past couple of years in response is FinOps. “This will become a greater focus in IT management to address budget management issues of Cloud spending that has gotten out of control with no accountability,” says Michael Delzer.
This is driving increased interest from vendors that only partially solve for this. “I’m really tired of companies that do some kind of financial management, that are now calling themselves FinOps companies, as they don’t have the first clue as to what FinOps actually is, and where it sits in the organization,” says Howard Holton.
One option is to outsource to managed service providers, thinks Paul Stringfellow: “In the UK and Europe in particular, we’re seeing a real shift towards people wanting managed security services, managed infrastructure.”
But outsourcing is one aspect of better cost management, which is a great deal to do with tooling, products and vendor solutions having got out of control. “Organizations have almost got too much technology, they like to simplify and consolidate,” continues Paul Stringfellow.
Agrees Don MacVittie, "Consolidation is a trend I see across the board, including DevOps pipelines, as budgets tighten and organizations get more picky about what they are willing to pay for."
As Lisa Erickson-Harris illustrates in Healthcare IT and Virtual Care, this makes for interesting times for vendors. “There’s consolidation across vendors, creating ‘the gorilla in the room’ handling virtual care solutions, targeting health insurance companies, as well as care providers. Profit growth has flattened out in these companies. You can see it in what’s happening in their stock movement.”
Consolidation and convergence are particularly evident in security and service management
Maybe counter-intuitively, we see a shift from single-provider cloud centricity to a multi-provider, manageable and orchestrate-able platform. "The goal should always have been hybrid multi-cloud. The right app, the right workload, in the right place, at the right time, for the right output. The problem is, cloud vendors have done their best to make it complex," says Howard.
At the same time, we’re seeing a convergence of infrastructure applications and services, says Kerstin Mende-Stief, “My big prediction is that ‘my’ space will no longer exist. Infrastructure services such as networking, computing power, cybersecurity, and storage will become one and can no longer be considered standalone. This is evidenced not only by the evolution of cyber insurance but also by platform developments of market-leading vendors. Against this background, my prediction for 2023 is the breakthrough of (decentralized) Web3 in enterprise IT services.”
So, where are we seeing particular attention? Specifically, security and service management – not least because of the fragmentation in existing tools. Says Ron, “Endpoint management, asset management, patch management and security have managed to get themselves wrapped around each other’s legs so badly, none of them can walk! Asset management and patch management are coming together, but, not with security management. That’s a big problem.”
And on the service management side, continues Ron, “There’s some consolidation starting to take place between the various observability tools – AIOps, APM, Cloud management, and lastly, FinOps itself. We have too many vendors in AIOps; APM is reasonable; for Cloud management we have scattering.”
As a result, it becomes hard for technology buyers, says Iben: “For example, with the consolidation of applications – do I give my money to my vulnerability system, which is also trying to do inventory and CMDB updates? Or should I focus on Service Now, or something else?”
This is exacerbated by the siloed nature of enterprise IT, says Ron. “We have security and the patch, or the fix, in two different groups. Security keeps living in this ivory tower and not communicating with others as well as they should. That’s true in every company I know of, over 1 billion dollars.”
In consequence of end-user consolidation, we are seeing security and service management vendors increasingly coming together. Illustrates Paul, “A theme that came out of the refresh of my Unified Endpoint Management Radar was the move to unified Endpoint Security, EDR and Endpoint Management. The route vendors were looking at was either to build their own, or to build better integration.”
Concurs Ron, "Patch Management is finally beginning to merge with Endpoint Management." But, he continues, "The guys who are security, are doing more governance than they are security" – which brings us to our next topic.
Governance is emerging from both a project and a compliance perspective
As industry pressures change, so vendors are looking to respond: illustrates Don MacVittie, "Given the US Government's direction to require Software Bill of Materials (SBoMs) for all contractors, there is a rush to get tools implemented in a variety of existing technology markets." And we are seeing similar with data sovereignty – an issue that has kicked off in Europe and is going global: suddenly, everyone is inventing sovereignty.
In a rush to provide solutions, however, vendors can create more challenges, adding to the complexity. The result is to feed the need for tooling consolidation still further, with the dual goals of simplification and alignment with needs. Continues Don, “Customers do not need dozens of tools generating SBoMs! Consistency matters, and having many vendors in several spaces implementing SBoM generation gives users options but can cause confusion. Enterprises should choose one as “the source” and turn all other generation tools off.”
In some cases, tools are extending beyond their historical remit, for better or worse – for example IT service management tools being used for project management. Says Iben, “Tools such as ServiceNow are being used in some organizations for project tracking, and it’s so complicated!”
Not only that but the value proposition for enterprise tools is aimed at larger organizations and doesn’t fit the needs of smaller and medium businesses. Says Dana Hernandez, “I’ve completed some interesting projects on SLA terms. What you would use as a large enterprise, is very different as a SMB. The tools for the SMBs don’t look great when you put them next to some of the enterprise tools, but would be great for the SMBs.”
To flip the model, we are seeing historical project tracking vendors such as Atlassian emerge into the service management space. Explains Ron, “ITSM is focused on ITIL, which is the gold standard. ITIL 4 will become standard in ITSM next year. However, some groups need a lighter version of that, smaller systems that are easier to use and cost less – especially in the SMB market.”
Other areas are moving from lighter to heavier weight, for example we talked about the upper end of Value Stream Management merging in with project portfolio management, to give organizations an overall business dashboard. Says Dana, “Across reports, I am seeing a continued interest in Value Stream Management.”
Siloed security is a blocker to progress
As well as consolidation for cost reasons, organizations are faced with significant security and compliance challenges: “Security attacks are up 3000%, which is an insane amount,” says Howard.
Two specific areas are infrastructure security and API security. Says Enrico, “We’re expecting embedded security in infrastructure for example checking security posture of your storage: tools embedded in backup/other solutions to check if volumes are correctly configured, so you don’t have unassigned files, or the wrong group assignments, as well as having more features that can act quickly on potential threats.”
Meanwhile, says Michael, “The security gap in chaining API calls will become critical for some companies and it will only be detected after a major breach.” Concurs Howard, “This is the thing that I’m hearing most about, the thing organizations are most concerned about. What we’re seeing now is not malicious attacks, but unintentional attacks: valid API calls are returning unintentional results. As in, the developer does not intend for that data to ever be returned.”
Data architectures are maturing, taking the weight off data science
Turning our attention to data, much of our interest is in terms of how environments are maturing, notably from service providers, supporting applications more broadly. "Data as a Service will be the saving feature to allow companies to successfully modernize their applications. This will be seen as Data Lakes, Lakehouses, and Data Brokers that leverage legacy storage repositories as well as newer solutions like cloud-based data warehouses. This will drive more interest in Intelligent Data Processing and Workflow systems. The PaaS value proposition will compete with K8s as less skilled cloud and infrastructure staff will replace high-cost staffing that in the past custom-built every data flow."
Catalyzing this is the need to deliver data to AI models and machine learning algorithms. Says William McKnight, “Historically, the data scientist has done a lot of (what I call) data cultivation, because the internal data environments have not been ready. I think that’s going to shift next year – for example data lakes have matured a bit, that’s where most of the data is coming from for data science. In 2023, we’re going to see data scientists start doing more data science, rather than data cultivation. Architectures are going to be built, adopted, matured, and transformed, to fit machine learning, as it sinks into organizations.”
The use of AI links to new data types and sources, not least the use and management of synthetic data. “Data Scientists don’t have the data to promote to great machine learning algorithms today. A lot of it is pretty basic stuff: understanding different accents, reading faces, natural language processing and so on. There’s synthetic data now for all that, and I see a growing market there, which is going to drive further adoption of machine learning,” says William.
AI is becoming a business enabler
Speaking of AI more broadly, we are seeing ML and AI-based functionality across our reports now. Says Andrei, “I have put AI on pretty much every report over the last 12 months, so I feel that a lot of vendors are rushing to deploy AI in one form or another.”
This is particularly true for security tooling. “We’ve always known Security was behind – security was a bottleneck, security couldn’t get enough staff and so on. Vendors are trying to answer that by using AI to restrict what needs to be done by people. In 2023 I think we will see a lot of forward motion in that direction, using AI not as the solution, but to save man hours,” says Don.
Equally, AI is being applied to real-world use cases. “Some new ideas are trying to emulate empathy using AI in healthcare,” illustrates Lisa, and equally we have the current wave of text-based AI driven by OpenAI and ChatGPT. Says Howard, “If you go to writer groups right now, they are talking about it. It’s very good at middle school level (which is what most writers write at), and there’s concern that these tools are good enough to start replacing their jobs.”
Equally, AI has its challenges, says Ron Williams. “I asked ChatGPT to give me a simple definition of what it was like for the life of a certain person, and it was accurate, really good. And the question is, with the AIs that we have, are we going to focus on the biases that they have? We need to look at this.”
In response, Alan Rodger explains: “The ubiquity of AI brings the need to prove, at the business level of the organization and down in the tech area, for compliance purposes and corporate responsibility, that AI is being done right. This means best practice in building models, plus controls to govern AI during its operation, documenting how the software is making its decisions.”
Agrees Howard, “I’ve spent a lot of time with Canada, US and EU legislation, specifically around the ethical use of AI. A primary focus is explainability, which is important. But when a legislator says that word, they don’t mean what we mean; rather they mean reducing the capability of AI, to that of an Excel formula.”
Environmental, social, and governance concerns are increasing in priority
Another area of interest is sustainability, albeit conflated with energy cost reduction, says Andrei: “I’ve heard a lot of people talking about sustainability and then energy efficiency, packing them into one.” This is fair given the current pressures on energy markets, he continues. “Cooling is going to be impacted by rising energy costs, but then you have regional or authority rebates or energy-caps so pricing could be a bit funny over the next couple of years.”
Concurs Stelios Moschos, “I can see a big trend in GreenOps, and anything that relates to sustainability. A lot of ideas have been shared on this, on LinkedIn and at conferences. I think it will be massive now, and for the next couple of years at least – I have been seeing interest growing for clients.”
Given the cost aspects (plus the fact that much hyperscaler energy is ‘low carbon’), this goes back to balancing between cloud and in-house facilities, says Enrico: “In the short term, it could be easier for them to move to the cloud, just because their data center is old and not that efficient. In other cases, enterprises are making huge investments in things like solar panels thinking about repatriating because it’s quite efficient from a cost perspective.”
Social and governance aspects of ESG are also important. "ESG is huge in Europe, it has been for a while. The US is now catching up, and doubling down. We're seeing more and more conversations, more and more requests for this," says Howard.
Lisa Erickson-Harris also reports increased interest in Diversity & Equity, relating to data collection and data governance. "For the last 2-3 years I'm seeing it everywhere in the non-profit sector and now in state government. Being able to capture data around how well we're addressing diversity is a challenge that has nuances, and I think it will be dominant in 2023 and beyond."
Closing thoughts on uncertainty
We close with wrap-up thoughts from our CTO, Howard Holton. “Uncertainty is the word of 2023. No one is an expert of any of the emerging areas we are seeing. AI, what’s that going to do for me? Those that make money out of AI are going to tell you, it does everything and makes coffee, but how to get value out of it yourself? Nobody really knows what this is going to look like.”
So, all in all, there is a lot going on, but to get ahead you can look for areas where you can turn uncertainty into certainty, not least around costs and governance. By building a picture of each into your strategy, you will be in a stronger position than otherwise; doing nothing is not an option.
2023
Posts from 2023.
February 2023
02-08 – will.i.am: ideas people drive tech-first innovation
will.i.am: ideas people drive tech-first innovation
will.i.am is enthralling, sharp, and, yes, he has technical smarts and a view on the future. At a recent Atlassian ITSM event, at the O2 in London, The Black-Eyed Peas performer had some interesting things to say to Atlassian’s Dom Price about his philanthropy, and what he has learned about teamwork. More than this, he painted a picture about how his musical experiences fed his understanding of tech and how the whole thing came full circle. The TL;DR: it’s about the people.
Interestingly, it was will.i.am’s philanthropic work that first led him to tech. “Music taught me a lot, so now I like solving problems,” he remarks. Over the past 12 years, his foundation has been looking towards education to address inner city challenges – “Teaching computer science and engineering and autonomy and robotics, to kids that are at the intersection of harm’s way.” In doing so, he realized it was more than a passing interest for himself. “If I’m telling kids that they should take an interest in computer science and engineering and mathematics, then I should pursue that path as well.”
Enter: will.i.am the tech entrepreneur, bringing his experience as a producer and musician into the corporate sphere. “Imagine if governments and corporations of the world worked the way an orchestra works, and when the whole premise is to make sure that whatever they’re making is pleasant to the ear?” While some of his work has been very public, with brands like Coca-Cola and Intel, many of his projects are behind closed doors – that’s the nature of innovation. “There’s this one project that I’m doing currently, I can’t name the company. But we’re doing some pretty cool stuff!”
So, what has he learned? First, engineering is what drives so much of the innovation we see today. “I tell my kids that good music is great, but we can’t make it without innovators and engineers. If you’re making music with computers, you need engineers! There’s an abundance of actors and actresses, dancers, football players, musicians, and TikTok-ers. But there’s a shortage of engineers. Imagine you’re starting a company, and somebody is like, we’re going to write this in QT. But QT engineers are invisible – there’s a shortage.”
Not just this, but there’s an absence of role models, exacerbating the problem. He continues, “I can’t wait to see what Melissa Robertson writes when she graduates from High School to go to MIT. I want to see that draft. I want to see when Melissa graduates from MIT to work at Google. The world should see that. Young kids should be, I want to be like Melissa, I want to be like Sundar, I want to be like Sunil.”
But this isn’t just about the talent on individual projects. The companies that have changed the world are the ones that have led with such innovation, not at the team or department level but across the corporation.
The standout example is Apple: “The way they do things is, wow. IBM’s cool: they’re really championing quantum computing. But think about Apple in the early 80s and how dominant IBM was. When Apple said Think Different, they were saying, think different to computing as it was – ‘Computers are meant for mainframe computing and corporations, and regular, normal people will never probably need a computer in their house.’ Apple was like, no, I think everybody should have a computer in their house.”
Apple’s journey from computer company to music provider, then the broadcast network is well charted, as is Amazon’s route to being an everything store and global infrastructure provider (and a broadcast network), and many other examples. But all are characterized by the people that drove their success as portfolio companies rather than one-offs.
“Red Bull… now they have motocross competitions and breakdancing teams, and they just won the freaking F1 championships. Wow, what’s going on with these multi-companies that are collaborating with all different types of talents and disciplines?” It’s all down to the people, top to bottom, he suggests. “If you’re a company of yesteryears and you’re only working with talent as it was, and don’t think it’s smart to bring in other disciplines, then you’re going to be swallowed up. Nokia, BlackBerry, other energy drink companies…”
So, how to address this? First, find the right people. “There’s a lot of risk takers who want to start solving problems, be entrepreneurs. You have to go and meet them out in the world, where they are. For the work I do in tech, I go to Israel, to Turkey, to Bangalore. For folks in Ukraine, go to Kiev. Go to Austin, there are some cool developers there. Brazil is popping right now.”
Next, look for ideas people, not just skills. “These AI tools, where you type in a word and then boom, a picture comes out? That means the folks that will be creating awesome stuff tomorrow are just the ideas people, because now they don’t have to illustrate, or translate their ideas to an illustrator. New idea manifesters are going to be the superstars. In music, it’s producers like Doctor Dre, Kanye types of producers. It’s going to be easy for world builders and storytellers to tell stories and build worlds with these new AI tools. It’s liberating, but it’s also threatening if all you do is illustrate.”
Third, learn how to manage personalities. “If somebody’s awesome they’re going to come with a big ego, but you have to figure out a way to work with that individual because they’re going to deliver. There’s a parallel between business and the arts. With the arts comes a whole lot of ego, especially when they have success and have come from nothing. As a producer, you know that somebody’s coming with some funk, but they’re bringing the goods. Michael Jordan wasn’t known to be the nice guy, but he helped his team win championships. Steve Jobs wasn’t the nice guy but hey, thanks, Steve Jobs. And thank you for everybody that figured out a way to work with that type of personality and tolerated that.”
Building on this, be empathetic to different levels of people skills. “In the very sensitive society that we live in today, who knows if it’s going to stifle the next level of innovation? The folks that work in isolation don’t always have people skills, but dammit, are they really freaking amazing? It’s usually the folks that don’t know how to engage with people that have amazing ideas for people. My concern is, as society gets more sensitive, those folks with that type of mindset are probably not going to feel comfortable unearthing ideas because they don’t know how to engage.”
And finally, invest in future expertise. To close, will.i.am referenced a project with AMG, Mercedes’ performance division (reference: “I invested in Tesla before Elon took over the company. They gave me incomplete cars, then I put my ideas on them, I built two.”). With the AMG, he designed a 2-door saloon based on the Mercedes GT 63; the resulting funds went towards his inner cities project. “That build is going to create a little over 150 robotics teams in the States, young kids from the age of 15 to 18 competing building robots. Why is that important? As we get more technologically advanced and autonomous, so many jobs are going to be rendered obsolete.”
Which brings the whole thing full circle. will.i.am’s recipe for innovation success: find people who aspire to solve real problems with technical solutions, wherever they are; understand how to get the best out of them; and invest in them, now and in the future. It’s all about the people, and will.i.am is above all a people person, linking business and technology, music and creativity, art and production. “I connect the dots,” he says, and long may he continue to do so.
02-17 – GigaOm Research Bulletin #001
GigaOm Research Bulletin #001

Hello!
To kick things off, please let us express our gratitude for vendor participation in our research last year. We are now researching 1,700 vendors across 120 solution categories. Last year we were briefed by a total of 836 vendors, which is staggering. Whether responding to questionnaire requests or fact checks, or just including us on general briefing schedules, we couldn’t do our work without input and support from analyst relations, product leaders and other folks. So, thank you for that!
Where To Meet GigaOm Analysts
The events calendar is just starting to fill, but you can expect to see our analysts at MWC in Barcelona, Kubecon in Amsterdam, RSA in San Francisco, and Tech Show London. Do let us know if you want to fix a meet.
Recent Reports
We have published the following reports in the past month or so:
In Analytics and AI, we have Radar reports for Machine Learning Operations (MLOps), Data Catalogs, and Data Warehouses, as well as Cloud/Operational Databases according to whether they are Relational or NoSQL-based.
In Storage, we have covered Scale-Out File Systems for both High-Performance and Enterprise environments, plus Primary Storage for Large, Midsize and Small Enterprises, alongside Disaster Recovery and Business Continuity as a Service.
In Cloud Infrastructure and Networking, we have separate Radars for Hyperconverged Infrastructure in Enterprise and Edge deployments, and a research report on Software-Defined Wide Area Networks (SD-WAN).
In the Security domain, we have reports on User and Entity Behavior Analysis (UEBA), Managed Detection and Response (MDR), Unified Endpoint Management, Development Security, and Governance, Risk, and Compliance.
And in Software and Applications, we’ve released reports on Regulated Software Lifecycle Management and API Functional Automated Testing, plus Human Resource Information Systems for Small-to-Medium Businesses.
Blogs and Articles
will.i.am: ideas people drive tech-first innovation. Jon here – I was particularly taken by will.i.am’s passion and indeed, technical chops. Here he talks about how to hire and manage the right people.
What do GigaOm analysts see as the big trends in 2023? We cover cloud cost management, security convergence, maturing data architectures and AI as a business enabler.
A Three-Point Plan For Mid-Market Technology Cost Saving. Our CTO and lead analyst Howard Holton goes into detail on how mid-sized organizations can get better bang per buck.
How APM, Observability And AIOps Drive Operational Awareness. Ron Williams, our IT Ops lead analyst, presents the bigger picture on top of multiple operations technology categories.
CXO Insight: Cloud Cost Optimization. Enrico Signoretti, practice lead for Storage and Cloud Infrastructure, shares findings following AWS re:Invent.
Let’s Kill Email! Cybersecurity As A Driver To Better Communication Strategy. Ben Stanford, practice lead for Applications and Infrastructure, offers sage advice about how to put security first to deal with collaboration woes.
Connecting with the Analyst Team
We’d like to ensure you’re connected with analysts covering your areas of interest, whilst keeping things efficient. For news and updates, do add analystconnect@gigaom.com to your lists — this is managed by Claire (below), who curates a weekly internal email for GigaOm analysts. Feel free to communicate with analysts directly, but perhaps copy us so nothing gets lost.
Let us know if you have any questions or feedback, want to go through our research schedule, or just understand our modus operandi better. Please do forward this to anyone you think might find it useful, and watch this space!
Jon Collins, VP of Research
Claire Hale, Engagement Manager

02-21 – Chairs, towels and GPS devices: where’s the line for analyst event swag?
Chairs, towels and GPS devices: where’s the line for analyst event swag?
Cisco gave me a chair once.
It wasn’t just me, you understand – they handed one out to every person that attended their analyst event, at their UK offices at Bedfont Lakes. It was a lovely chair – one of those collapsible ones you might want to take camping, but this one had an extra feature – a detachable footrest, so you could sit and watch the sunset in luxury. It was blue, and the firm, yet comfortable head cushion had the Cisco logo emblazoned across it. I knew, wherever I might travel, that Cisco would have my back, or at least my neck, fully supported.
As said chair was thrust into my arms by a cheery analyst relations professional, I couldn’t help feeling slightly sorry for those analysts who had not arrived at the event by car. Some would be braving public transport to get back to their London-centric bases; others would be on trains or even planes to voyage home. Glancing around me, I saw the faces of several, somewhat flummoxed analysts. It was a really nice chair, but it did come with… ramifications? And might it have been expensive?
It’s likely that some in the room would have been party to the 3Com GPS debacle of a year or so before. 3Com was being re-booted under new management, and its analyst relations program kick-started with a big budget and a desire to impress. Analysts, including myself, were sent a snazzy Garmin GPS device, with a caption along the lines of “we’re finding our direction.” The campaign backfired as many analysts, including myself, felt the expensive gadget crossed the line. We cannot be bought off with a shiny thing! came the feedback. For the record, I gave mine to a local scout troop.
Reflecting on these memories, I can’t help wondering why vendors give things to analysts in the first place. I can think of several very good reasons, not least that branded swag (as it’s called on the tech conference circuit, also variously freebies and gizzits) serves a solid, general purpose – in a market that is a maelstrom of competing brands, how better to maintain a presence than with a pair of brightly coloured, belogoed socks (I am a sucker for these), or a handy bag for all your cables (thank you, Red Hat)?
At analyst events, swag has an additional purpose. People want it, you see – despite millennia of evolution, we can’t escape our inner, bird-brained desire to accumulate shiny things. Seasoned AR pros know the best way to get analysts to fill in a feedback form at the end of an event, is to have a row of white cardboard boxes behind the desk: even the most cynical of industry pundits will be spotted, lined up at the counter with their filled-in forms, then fumbling at the box with all the excitement of a kid at a lucky dip. A USB cup warmer, you say? Cool!
Even with such rational (albeit awkward) reasons to deliver branded paraphernalia into the hands of usually sage industry influencers, there has to be a line. I’ve not been present at AR pro meetings, but I can imagine the “What did you give them?” topic has come up. From the analyst perspective, swag handouts can feel a bit like party bags: anyone with kids will know the angst that goes into what to give the kids that deign to turn up to your little munchkin’s birthday event. What starts with a pencil and a lollypop can turn into an all-out war between over-competitive parents looking to show just how good they are at not ignoring their offspring.
In my experience, every now and then the party bags would see a reset, returning to pencils and lollipops and a collective sigh of relief from all involved (all apart from the kids, who won’t hold back on the “what the heck is this” in the car on the way home – they get used to it). Similarly the golden rule with swag is to always make sure it stays on the side of fun and fitness for purpose – a thank you for attending, a coax for feedback, a simple way to keep front of mind.
Even cost can become less relevant, if the cost has purpose. I have accepted a PDA from Dell, and an Amazon Echo from AWS, for example, knowing that I will then use the things and see whether the vendors were talking sense. Software vendors might struggle with this, but when I think about it, I don’t know why software licenses or service packages aren’t on the figurative table. And for analysts, I’d share another golden rule: if you’re feeling uncomfortable about talking about an expensive vendor ‘gift’ publicly, then you should ask yourself why – and possibly give it back.
All in all, a little bit of thought (including about whether swag merely adds to skips full of broken electronics and plastic mountains) can go a long way. (I am immediately reminded of a towel from Hewlett Packard, emblazoned with the words, “The next e: e-Services.” That was a great towel, so good that it outlasted e-Services, and indeed HP’s entire software division.)
Perhaps, most of all, the one thing we can see in swag is that it is a necessarily small token of gratitude. The analyst industry is built on relationships: less an exclusive club, and more an ecosystem of evaluators. As analysts we recognise that we are enabled to connect with technology vendors so that we can correctly represent the market: this is not a right, but a most welcome facet of the otherwise complex and fast-moving industry within which we work. Similarly I hope vendors, and their analyst relations teams, recognise the challenge to analysts as they look to keep abreast of it all: spending half a day or more at an event is both necessary and challenging, given the sheer volume of change.
So, let’s be glad of what we have – relationships – and if AR teams choose to mark them in some way, that should be fine. Perhaps the point is not to worry about spending too much: it’s the thought that matters far more. The chair didn’t last forever, by the way, but we did get a good few years out of it, taking turns to lift our feet and stare at the sunset. Whilst I can’t honestly say it did anything to change my views on Cisco (which were generally reasonable), I do still have fond memories of the AR team that made it possible.
March 2023
03-13 – How can industry analyst firms work better with early stage startups?
How can industry analyst firms work better with early stage startups?
Jon caught up with Analyst Relations specialists Robin Schaffer and Chris Holscher about their research report, “State of Startups with Industry Analysts,” conducted with the University of Edinburgh.
Jon: Thanks for joining me Robin and Chris, and thank you for sharing your research! Let’s get to it – what does it tell you about the opportunity for start-ups to work with analysts? There’s the obvious stuff – is it as simple as, ‘analysts understand the market’? And do analysts care about what startups are up to, or do they focus on more established firms?
Robin: But we didn’t get any real traction around the concept that analysts are not interested in startups. We didn’t have anybody say, “Well, they’re not relevant to my research”. What we did get (from the analysts) was that firms are keen to tailor their offerings more to this increasingly important segment. Meanwhile, many start-ups don’t know much about working with analysts, and what possibilities exist.
Chris: Analysts told us they want to hear from start-ups much earlier than startups believe they would be relevant to them – months, if not years earlier than the startups would get feedback from reference customers. But the firms, especially the bigger ones that may have startup-specific offerings, target these offerings in a way that only makes sense to startups at a later stage, unless the startup has a fully analyst relations-savvy person on board.
This gap creates an opportunity. We can see what startups really want and need at earlier stages, so that they see the value of investing in, or at least engaging with, the analyst community; and meanwhile, analysts can get in touch with startups at that earlier point in a meaningful way. If firms want to engage with startups earlier, they should better reflect the dynamics of startups’ individual journeys.
Jon: I’m thinking about the value flow from analysts to startups, and back. The analysts get a great deal of value out of understanding what startups are up to. Take Honeycomb, for example. It was formed by people who had worked at Facebook and were just fed up with the fact that they couldn’t work out where operational problems were, so they created a solution to that problem. And that kind of kick-started the Observability space.
So, it’s of massive value to analysts to keep tabs on that sort of thing. The value isn’t always accessible or understood in the other direction, is your point.
Chris: We can see that in the data – what startups know about the role of Industry Analysts, what value they expect, what they are and are not prepared to pay, at which stage, how they organize to bring the benefits into their wheelhouse, and what the analyst packages look like.
We need to develop new kinds of thinking, both on the startup side, (and they’re already on it, that is what they want), but also on the analyst house side. For example, for a company of only 10 people, their total funding is say 200K, so of course they don’t have the funds to spend 50K now. But, they will not always have 200K total funding. They will come into their first million, 2 million, 5 million funding. So, why not give them something now that makes sense for them at this early point in their journey?
Jon: RedMonk popped into my head as an organization. Steve, James and the team. They are already very developer friendly, so that they are having those conversations with the same bunch of people. They were arriving at a very early stage. They’ve then got a retainer model, which is kind of, “Use us, and if you find you are not getting value, then stop.”
Most importantly, it is starting those relationships at a very, very early stage, so that when the company is much bigger, they’ve still got those relationships. They’re not kind of swooping in and saying, well, they were speaking to us now, hey, we’re really cool and yeah, of course you want to be my friend… just because I just won the lottery.
Robin: The interesting thing is, startups see working with analysts as a marketing thing. And the marketing aspect of it is real, and generally it doesn’t cost anything. The real value, which needs some re-education, is that analysts can be part of the development of your company, of your segmentation, of your messaging, of all that inbound stuff, right? And that they need it early.
Chris: The SSIA data shows this very clearly. And it’s no surprise. An industry analyst leads 1,000 – 2,000 interactions with tech buyers, vendors, investors every year. That is not at sales and marketing level, but really nuts and bolts, with reference customers when the vendor is not on the phone, with direct access to pilot products. That’s a breadth and depth of insight that buyers of complex technology really value because it protects them from (let’s say) overly confident marketing.
This is why earlier research has shown that mentions in analyst publications are the #1 shortlisting criterion. You just cannot ignore this if you want to break into a B2B tech market as a startup – especially if you’re innovative, disruptive, category defining, and so on.
Jon: In my area, the whole DevOps space right now, there’s loads of companies going, “you know what, people need a better view over the development process.” Or they’ll say, “They shouldn’t be writing code, they should be using some form of higher-level way of doing it.” But they’re all doing it in their own way, and they don’t realize that 15 other organizations have been solving the same problem.
If you’re a small company, you don’t necessarily see that you’ve discovered another route up the same mountain. And it’s important to see that, because you need to know how to differentiate, but you also need to know what you’re missing that the other companies have already worked out. Because if you want to get acquired, you want to be the perfect jigsaw piece that fits into someone else’s puzzle.
Chris: So, it’s very much about the process, and the analyst is kind of the catalyst to the value generation of that process. There’s also the element of “what’s holding analysts back from liaising with startups?” Most startups don’t even have analyst relations on their radar. Many that do don’t understand how to play it, or they have misconceptions about it. They think it’s a very transactional thing, or they just repurpose their investor pitch, or their sales or marketing pitch.
Then they are frustrated that this didn’t really work, and the analyst is frustrated because he says, “Well, there goes another 30 minutes of my precious time wasted.” So what they learn is: although I really would like to speak to all these innovative companies, I cannot afford the time to do this, because I’m not getting usable information out of the interactions; I’m constantly being sold to. That creates the mismatch from the other end.
It’s such a shame, because whenever that happens it means that a startup has just burned their one golden ticket to getting on the radar of maybe the most trusted market influencer in their segment. And you cannot buy priority on their calendar. So instead of standing on the shoulders of a giant – if the analyst is convinced of their vision and ability to deliver – they must continue fighting an uphill battle against other PR noise.
Jon: I think the analyst industry is both highly necessary, and also a bit broken. If we’re not fixing it, it carries on the way it is. I think it is about people spending time to discover things that they can present as market insights to people that need them, that’s massively valuable to a lot of organizations. It’s about kind of promoting trust, establishing the role of insights.
But too often it’s perceived as enabling the buying cycle, which it is in part, but that isn’t the only thing. We can all buy more, but we all just end up with the same paraphernalia, and that’s what enterprises have ended up with. So, it has to be more than just buying. It has to be about architecture, has to be about delivery. We’re not only in the game of selling people more stuff that they’re not going to use.
We should be in the game of enabling people to understand and then get value out of making the right decisions about technology.
Chris: You mentioned trust. One of the first things that we asked was, how’s your level of trust in analysts? One of the findings was that the more startups actually engage with analysts, the more steeply the trustworthiness curve goes up, and the knowledgeability curve along with it. In our times, where everything is so transactional, that’s a glowing endorsement.
The more you work with someone, the more you see, it’s actually not pay for play. It’s actually pay for time, and of course that time that I spend with the guy will inform his knowledge about my company. One thing that I tend to tell my clients…of course analysts are biased, they’re humans. Mostly they are biased towards the companies that they actually know about.
Jon: We had this discussion as part of building our research library. We’ve had vendors say “Well, yeah, we can’t be bothered to be in your report. We don’t know who you are…” And later they say, “What you’ve written about us is completely wrong!” But we will have evaluated to the best of our ability, based on the same information that is available to the vendor’s own prospects. So, what does that say about the vendor? We go from this kind of disdain-to-agitation cycle reasonably regularly. It’s so much easier if we can build trust from the start.
Chris: You said earlier that this industry is broken in a way, and we’re not sure how to fix it. I have a feeling, if smart analyst firms recognise and understand this enormous avalanche of new technology companies coming into the market every year, and they manage to connect with them in a smart and more flexible way, this might be part of the solution. It’s the smarter firms, the more agile thinkers, who are more likely to be successful with these young companies to whom agility is everything. I’m sure that can balance out the analyst market a little bit, too.
Jon: We have an internal principle around defensibility, which is, you can say anything you like as long as you’ve got evidence to support what you’re saying, and I think it does come back to the startups and keeping us real.
If we are engaged with startups – in blockchain-based distributed storage, for example – we can carry on saying storage is all about the things that storage used to be about, or we can look at blockchain-based storage and change our perspectives, because it has given us new data. Our job is to observe and, as I say, derive insights from data. Therefore we need that data in order to have insights that are balanced towards what’s actually happening.
Chris: That’s the beauty of the entire game, there’s actually no right or wrong, just perspectives in your specific user/corporate context, whatever your strategy is. You’ll have your own perspective on a certain technology, certain architecture, or a certain methodology. Your perspective will be what you figured out for yourself at this very point in time. It may completely change another day, but to be able to make up my mind, I need all the best perspectives that I can get.
Jon: A really good example is a LinkedIn article by Tony Baer, saying, “Datamesh is not a technology, and there is no such thing as a Data Mesh ‘System’ or a ‘Data Mesh Software Company.’” Whilst Tony has broad shoulders, it took a little courage to put that out there, because it questions what has become a belief system around the data mesh concept. And that’s the job. We can all be cynics, but everything has got to go back to the data!
Thank you for your time Robin and Chris – and the research is available here.
03-17 – Thinking Strategically About Software Bills of Materials (SBOMs)
Thinking Strategically About Software Bills of Materials (SBOMs)
Where did SBOMs spring from? As someone who (let’s say) has been around the block a few times, I’ve often felt confronted by something ‘new’, which looks awfully like something I’ve seen before. As a direct answer to the question, I believe it was US.gov wot dunnit, when in 2021 the White House released an executive order on improving cybersecurity. To wit, Section 4(e)(vii): “Such guidance shall include standards, procedures, or criteria regarding… providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website.”
The driver for this particular edict, cybersecurity, is clear enough, in that it can be very difficult to say exactly what’s in a software package these days, what with open-source components, publicly available libraries, web site scripting packages and so on. If you can’t say what’s within, you can’t say for sure that it is secure; and if it turns out it isn’t, you won’t be aware of either the vulnerability, or the fix.
But more than this. Understanding what’s in your app may turn out to be like descending into the mines of Moria: level upon level of tunnels and interconnections, rat runs of data, chasms descending into the void, sinkholes sending plumes of digital steam into the air. If you want to understand the meaning behind the term “attack surface” you only have to recall Peter Jackson’s movie scene in which untold horrors emerge from long-forgotten crevices… yes, it’s highly likely you are running software based on a similar, once glorious but now forsaken architecture.
It is perfectly fair that the US Government saw fit to mandate such an index as the SBOM. Indeed, it could legitimately be asked what took them so long; or indeed, why weren’t other organizations putting such a requirement in their requests for proposals? Note that I’m far from cynical about such a need, even if I remain healthily skeptical about the emergence of such a thing into day-to-day parlance, as though it had always been there.
Let’s go back a few steps. I can remember working with software delivery and library management back in the 1980s. We had some advantages over today: first, all the software, everything above the operating system at least, was hand-crafted, written in Pascal, C and C++, compiled, built and delivered as a singular unit. Oh, those halcyon days! Even a few years later, when I was taking software packages from a development centre in Berlin, the list of what was being delivered was a core element of the delivery.
What changed is simple – the (equally hand-crafted) processes we had were too slow to keep up with the rate of innovation. By the late 1990s, when e-commerce started to take off, best practice was left behind: no prizes existed for doing it right in an age of breaking things and GSD. That’s not a criticism, by the way: it’s all very well working by the book, but not if the bookstore is being closed around you because it is failing to innovate at the same pace as the innovators.
Disrupt or be disrupted, indeed, but the consequences of operating fast and loose are laid out before us today. As an aside, I’m reminded of buying my first ukulele from Forsyths, a 150-year-old music shop in Manchester. “It’s not that cheaper is necessarily worse,” said the chap helping me choose. “It’s more that the quality assurance is less good, so there’s no guarantee that what you buy will be well built.” In this situation, the QA was pushed to the endpoints, that is, the shop assistant and myself, who had to work through several instruments before finding a mid-range one with reasonable build and tone.
Just as ukuleles, so used cars, and indeed, software. The need for quality management is not an absolute, in that things won’t necessarily go wrong if it is not in place. However, its absence increases risk levels, across software delivery, operations, and indeed, security management. Cybersecurity is all about risk, and attempting to secure an application without an SBOM creates a risk in itself – it’s like theft-proofing a building without having a set of architecture plans.
But as we can see, the need for better oversight of software delivery (oversight which would provide the SBOM out of the box) goes beyond cybersecurity. Not that long ago, I was talking to Tracey Regan at DeployHub about service catalogs, i.e. directories of application elements and where each is used. The conversation pretty much aligned with what I’m writing here, that is: as long as software has been modular, the need has existed to list out said modules, and manage that list in some way. This “parts list” notion probably dates back to the Romans, if not before.
The capability (to know an application’s constitution) has a variety of uses. For example, so that an application could, if necessary, be reconstituted from scratch. In software configuration management best practice, you should be able to say, “Let’s spin up the version of the application we were running last August.” In these software-defined times, you can also document the (virtualised) hardware as code, and (to bring in GitOps) compare what is currently running with what you think is running, in case of unmanaged configuration tweaks.
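To make that last point concrete, here is a minimal sketch of such a drift check in Python, assuming the official kubernetes client and PyYAML packages are available; the deployment name, namespace and manifest path are hypothetical placeholders rather than references to any real system.

```python
# Minimal GitOps-style drift check: compare the container image declared in a
# Git-tracked manifest with the image the cluster is actually running.
# Assumes a local kubeconfig; the manifest path, deployment name and
# namespace below are hypothetical.
import yaml
from kubernetes import client, config

MANIFEST_PATH = "deploy/my-app.yaml"   # hypothetical, version-controlled manifest
DEPLOYMENT = "my-app"                  # hypothetical deployment name
NAMESPACE = "production"               # hypothetical namespace

def declared_image(path: str) -> str:
    """Return the image we *think* we are running, as declared in Git."""
    with open(path) as f:
        manifest = yaml.safe_load(f)
    return manifest["spec"]["template"]["spec"]["containers"][0]["image"]

def live_image(name: str, namespace: str) -> str:
    """Return the image the cluster is *actually* running."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    deployment = apps.read_namespaced_deployment(name, namespace)
    return deployment.spec.template.spec.containers[0].image

if __name__ == "__main__":
    want, have = declared_image(MANIFEST_PATH), live_image(DEPLOYMENT, NAMESPACE)
    if want != have:
        print(f"Drift detected: manifest declares {want}, cluster runs {have}")
    else:
        print("No drift: the cluster matches the declared state")
```

Real GitOps tooling does far more than this, of course, but the principle is the same: declared state in version control, observed state in the cluster, and a continuous comparison between the two.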
This clearly isn’t some bureaucratic need to log everything in a ledger. Rather, and not dissimilar to the theories behind (ledger-based) Blockchain, having everything logged enables you to assure provenance and accountability, diagnose problems better, keep on top of changes and, above all, create a level of protection against complexity-based risk. So much of the current technology discussion is about acting on visibility: in operations circles for example, we talk about observability and AIOps; in customer-facing situations, it’s all to do with creating a coherent view.
It was ever thus that we needed to keep tabs on what we deliver; the fundamental difference is that the agenda has moved from a need for speed (which dominated the last couple of decades) to the challenge of dealing with the consequences of doing things fast. Whilst complexity existed back in the early days of software delivery—Yourdon and Constantine’s 1975 paper on Structured Design existed to address it—today’s complexity is different, requiring a different kind of response.
Back in the day, it was about understanding and delivering on business needs. Understanding requirements was a challenge in itself, with the inevitable cries of scope creep as organisations tried to build every possible feature into their proprietary systems. The debate was around how to deliver more – in general users didn’t trust software teams to build what was needed, and everything ran slower than hoped. Projects were completist, built to last and as solid as a plum pudding.
Today, it’s more about operations, management and indeed, security. The need for SBOMs was always there: the need to know what has been delivered, and to roll it back if it is wrong, remains the same. But the problems caused by not knowing are an order of magnitude greater (or more). This is what organisations are discovering as they free themselves from legacy approaches and head into the cloud-native unknown.
So many of today’s conversations are about addressing the problems we have caused. We can talk about shift-left testing, or security by design, each of which is about gaining a better understanding earlier in the process, looking before we leap. We’ve moved from scope creep to delivery sprawl, as everything is delivered whether it is wanted or not. The funnel has flipped around, or rather, it has become a fire hose.
Rather than requiring ourselves to lock down the needs, we now need to lock down the outputs. Which is why SBOMs are so important—not because everybody likes a list, but rather, because our ability to create an SBOM efficiently is as good a litmus test as any, for the state of our software delivery practices, and consequent levels of risk.
So, let’s create SBOMs. In doing so, let’s also understand just how deep the rabbit hole goes in terms of our software stack and the vulnerabilities that lie within, and let’s use that understanding as a lever, to convince senior decision makers that the status quo needs to change. Let’s assess our software architectures, open our eyes to how we are using external libraries, open-source modules and scripting languages. Let’s not see anything as bad, other than our inability to know what we have, and what we are building it upon.
Any organization requested to provide an SBOM could see it as a dull distraction from getting things done, or as a tactical way of responding to a request. But taking this attitude creates a missed opportunity, alongside the risk: I can’t offer concrete numbers, but chances are the effort required in creating an SBOM as a one-off won’t be much different from instigating processes that enable it to be created repeatably, with all the ancillary benefits that brings.
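By way of a small, hedged illustration of what “repeatable” can look like in practice, the sketch below reads an SBOM in the CycloneDX JSON format and flags components that lack a version or package URL; the file name is hypothetical, and tools such as Syft or the CycloneDX generators can produce documents of this shape.

```python
# Basic completeness check over a CycloneDX JSON SBOM: list the declared
# components and flag any without a version or package URL (purl).
# The file name below is hypothetical.
import json

SBOM_PATH = "sbom.cyclonedx.json"

with open(SBOM_PATH) as f:
    sbom = json.load(f)

components = sbom.get("components", [])
print(f"{len(components)} components declared")

incomplete = []
for comp in components:
    name = comp.get("name", "<unnamed>")
    version = comp.get("version")
    purl = comp.get("purl")
    if version and purl:
        print(f"  {name} {version} ({purl})")
    else:
        incomplete.append(name)

if incomplete:
    print("Missing version or purl for: " + ", ".join(incomplete))
```

If a check even this simple is hard to run against what you ship, that in itself says something about the state of the delivery pipeline.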
This isn’t a “let’s go back to how things used to be” plea, but a simple observation. Software quality processes exist to increase efficiency and reduce risk, both of which have costs attached. Get the processes right, and the SBOM becomes a spin-off benefit. Get them wrong, and the business as a whole will face the consequences.
03-28 – GigaOm Research Bulletin #002
GigaOm Research Bulletin #002

Welcome to GigaOm’s research bulletin for March 2023
Hello again, plenty to report but first, a few words about our Chief Analyst, Michael Delzer, who passed away earlier this year. Michael’s expertise, wisdom, energy, patience, and good humour inspired all of those who worked with him. He was above all a people person, as one work colleague noted: “While Michael is undoubtedly one of the smartest people I have ever met, I will remember him first and foremost for his kind and open heart.” Rest in peace, Michael.

Research Highlights
See below for our most recent reports, blogs and articles, current press quotes, and where to meet our analysts in the next few months. Any questions, reply directly to this email and we will respond.

Trending: Analyst Ivan McPhee’s Radar for Software-Defined Wide Area Networks, released in January, is our top Radar read right now. The keyword is convergence, in this thoroughly researched report evaluating the offerings of 20 vendors.
We are currently taking briefings on: Agile Planning and Portfolio Management, AIOps, API Management, Edge Platforms (CDN+), FinOps, Hybrid Cloud Data Protection, Password Management, Policy as Code, Process and Task Mining, SaaS Management, SSA (SASE), UCaaS, and Value Stream Management.
Warming up are: Application Security Testing, Anti-Phishing, SIEM and DDoS Protection. In the next bulletin, we should have information on our 2023 Q3 reports schedule, which is being finalised as we speak. So watch this space for both.
Recent Reports
We’ve released 19 reports in the month since the last bulletin. Quick stat: We have 73 reports already published or scheduled so far for 2023, on track to hit our goal of covering 120 technology categories.
In Analytics and AI, we have released reports on Data Lakes and Lakehouses and Data Science.
In Cloud Infrastructure, we have published reports on Integration Platform as a Service (IPaaS), Cloud Observability, Alternatives to Amazon S3, Kubernetes for Edge Computing and Managed Kubernetes. And in Storage, we have covered Unstructured Data Management for both Business-Focused Solutions and Infrastructure-Focused Solutions.
In the Security domain, we have released reports on Cybersecurity Incident Response, Domain Name System (DNS) Security, Next-Generation Firewalls and Security Awareness and Training. And in Networking, Network Access Control, Edge Colocation and Cloud Networking.
In DevOps, we have a report on GitOps and CI/CD for Kubernetes. And in Software and Applications, we have a report on E-Signature Solutions.
Blogs and Articles
We’ve published several technical blogs including:
To UEBA or not to UEBA? – that is the question. Our Security analyst Chris Ray looks at how UEBA is more than a security monitoring tool.
CXO Insight: Do we really need Kubernetes at the Edge? Category lead for Storage and Kubernetes Enrico Signoretti shares his thoughts on edge computing solutions.
And a couple on the analyst industry and nature of being an analyst:
How can Industry Analyst firms work better with Early Stage Startups? Jon caught up with AR specialists about their research on the state of startups with industry analysts.
Chairs, towels and GPS devices: where’s the line for analyst event swag? On a lighter note, Jon addresses event swag and its ramifications.
And finally, a chance to read last month’s bulletin if you missed it.
Press Quotes
GigaOm analysts have been quoted in a variety of publications across the past month. If you have any needs, let us know.
GigaOm HCI Reports | Blocks & Files – Alistair Cooke
Primary Enterprise Storage Radar | Blocks & Files – Max Mortillaro and Arjan Timmerman
Cyber Security Training | Computer Weekly – Jamal Bihya
Sustainability | ZDNET – Geoff Uyleman
Cyber Leaders of the World | centraleyes – Chris Grundemann
Microsoft and ChatGPT | The Stack – Jon Collins
Where To Meet GigaOm Analysts
In the near future, you can expect to see our analysts at Kubecon/Cloud Native Con in Amsterdam, and RSA Conference in San Francisco. Do let us know if you want to fix a meet.
As ever, for news and updates, add analystconnect@gigaom.com to your lists, and get in touch with any questions. And thank you for your feedback so far on this bulletin. We’re making improvements as we go, as we understand your needs better.
All the best and speak soon!
Jon Collins, VP of Research
Claire Hale, Engagement Manager

May 2023
05-03 – Touchpoints, coalescence and multi-platform engineering — thoughts from Kubecon 2023
Touchpoints, coalescence and multi-platform engineering — thoughts from Kubecon 2023
Kubecon, held at Amsterdam’s RAI conference centre this year, was bigger than in 2022. Nothing untoward there you might say, but I mean Bigger. Double the attendees. By my visual estimate, the expo area was three times the size. It felt like the conference was growing up, a point I will come back to. But meanwhile, I thank the organisers for maintaining a smaller-stand format, which kept the step count under control.
Over the three days I met dozens of companies, large and small, and most had a similar icebreaker — “What are you seeing this year?” Questions like this are the mainstay of being an analyst, like you’re able to maintain a complete and comprehensive overview of everything that’s going on in a complex and dynamic field, then map it onto a randomly placed set of brightly coloured stands and people movements, and pop out some pithy conclusion.
Spoiler alert: I can’t, because nobody could, and besides, the artist currently known as cloud native is still on a journey. Nonetheless I used the frequent positing of the same question to test some ideas and build a picture. Call it crowdsourcing if you like, though I am more minded to quote Arthur Conan Doyle, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
So, what did I see this year? I would use the following keywords: touchpoints, coalescence, platforms. Two are vague, yet specific; and one is specific, yet (it turns out) vague. I will start with touchpoints as this was the first, shimmering image that reflected the themes of the conference. But first, some more about the nature of the conference itself.
Imagine large halls full of (often younger, generally male) attendees, watching one or two people standing on a stage in front of multiple, superbright, wall-to-wall screens. In and out of the halls move these folks, shuffling between the keynote rooms and smaller sessions. At one end of the RAI, the curved, glass-ceilinged greenhouse of its expo hall was the closest thing to natural light anybody would experience for four days.
Within the hall, stands stretch. At one end, booked-out massage chairs and a rather forlorn creative corner. To the side, dining spaces mapped out by round tables and longer benches, any risk of the austere broken up by clusters of beanbags, green plastic plants and a large tree-like structure topped with a pink lampshade. Everywhere, people are walking, talking, clustered around screens, eating, drinking coffee, taking the occasional nap.
And what about said stands? Apart from a few, likely expensive exceptions these were mostly square, each no more than a blank wall and a standing desk for a sponsoring company to customise. Nonetheless each was different if familiar, not least as cloud native has its own colour palette, purple and dark green, black and red, ironically clashing against the more pastel-like themes of the conference itself.
Backdrops boldly state purpose — solving challenges of container security, or automated deployment, or visibility on performance. Together, these make a picture, of solutions to an emerging set of problems, caused by an agreed alignment towards massively distributed, cloud-based, microservices architectures with Kubernetes as the common orchestration tool and control plane.
There’s the rub. Whilst some very large applications have been built, deployed and used in this way, for many, the main challenge is one of solving for what is still a work in progress. Single solution providers offer multiple, overlapping approaches to solve the same problem — API management versus service mesh, for example. Should you build an application using an all-encompassing environment, or piece together multiple tools to deliver something more custom?
As per one discussion, which happened to take place over a table covered in lego bricks, this is less about that old decision point between all-in-one solutions vs best of breed; rather, this is more like the lego, where all options are possible. We’re working at the component level rather than the application level, with all potential configurations catered for: customisation is no longer a differentiator, and more a cause of potential discomfort.
But, touchpoints. Just as each vendor (and, to one side, each CNCF project) does its own thing, so it develops, matures. Individual solutions are growing, covering more space, solving the higher-order challenges that come with scale. Just as a city might form around a river, with shops appearing at corners of roads, with common paths being discovered, so are providers part of a bigger, growing system that is maturing as a whole.
This was reflected in the interfaces between deployment and management tools, or the extensions to OpenTelemetry to cover logs (it won’t stop there) as part of its broader adoption, or the integration of data management within monitoring solutions. Increasingly, such extensions have come from customer demand as vendors find how their unique solution needs to respond to scenarios outside the lab, or as they hit their own glass ceilings of adoption.
Just as touchpoints stem from multiple ways to achieve the same goal, so there was a palpable feeling of coalescence, of the coming together of solution sets, or packages that built on top of others. Don’t want to have to configure everything on AWS yourself, all those namespaces and security policies? How about you use our management overlay, it’ll take care of all of that. Looking for a way to replicate cloud functionality on-prem? We have you covered.
By building in, building on top of, replicating functionality for different deployment types, we’re seeing the formation of what could (loosely, at this stage) be called a common architecture. Some pieces were already in place, like the aforementioned service mesh, or the newly CNCF-graduated capabilities for GitOps. The bigger theme, however, is that both organisations and vendors have something to build to, which will inevitably result in an acceleration of progress.
An inevitable, yet flawed conclusion is that everything else ends up as one platform. Platform engineering was the topic of several conversations but, don’t be fooled into thinking this means all the stands are going to pack up and we’ll be left with a handful of big providers. Some companies may choose to back a single horse — indeed, smaller companies may have no choice. But we’ve already seen the cost management issues caused by putting all eggs into a single hypervisor’s basket.
Meanwhile, the very nature of technological change means that a single, simple, all-things-for-all-people platform will always be challenging. Such things exist, and serve a clear purpose, but there’s a trade-off between using a standardised software infrastructure that does most things pretty well, or making use of more innovative features from smaller providers. Indeed, this dilemma is directly reflected as one axis of our radar reports.
Another counterpoint is the association between Kubernetes-plus-containers and the emerging popularity of WebAssembly, that re-imagining of Java virtual machines and byte code approaches for the microservices world. Both will exist, and both have their strengths; and, frankly, both are on a journey towards maturity. Who knows what is round the corner, but the chances are it will build in some core ML capability, across build, deploy and operate.
Rather, the skill will lie in what we might call multi-platform engineering (can I say MPE?). Platform engineering already exists in many organisations, as the group putting together frameworks upon which others can build their applications. I would extend the role of this group to cover understanding all options, past, present, and future, to deliver a coherent set of managed services so others could benefit.
That is, the job isn’t just to understand and deliver a platform, but to enable applications to work across multiple clouds, multiple stacks, multiple CI/CD toolchains, operations and security capabilities. Whether or not that sounds like a big ask, that’s still the job. And yes, it can encompass decisions across on-premise systems and legacy applications, all of which make up the overall estate.
The MPE group may find that a single provider, or a small number of them, can meet the majority of needs. In which case, hurrah for that — but don’t get complacent. A strong risk stems from the old adage, “When all you have is a hammer…” — whilst the primary goal is to deliver stability within a world of constant change, the group needs to ensure its recommendations remain fresh and appropriate.
Equally, whilst the resulting end-to-end environment may be well-defined, the MPE group needs to acknowledge its role as empowering and enabling first. Based on experience, the danger of such a group is that, over time, it might become inwardly facing, focused on its own goals rather than those of the people it serves. As one panel speaker said, it is up to the MPE group to act as a product group, at the behest of its users — not just developers, but the business as a whole. I’m not particularly proud to have coined the term silo-isation, but the point stands.
As a final point, a challenge for analysts is the self-fulfilling prophecy of having conversations about your own opinions — I could easily have responded to “What are you seeing this year?” by rambling on (heaven forbid) about the need for better-governed applications, policy-based design, shift-left, business value and so on.
Nonetheless I will proffer that these are all aspects of a more mature approach, one which the cloud-native world is moving towards (see also: SBOMs, FinOps et al). A multi-platform architecture will by consequence build in better manageability, and indeed, many of the touchpoints between tools and platforms talk to this need.
So, to a pithy conclusion, written even as an aeroplane carried me, and my fried brain, away from Amsterdam. Even as one person said, as we stood on the balcony looking over the expo hall, “it’s the Wild West down there,” another looked across the stands, the people, the flyers, socks and other paraphernalia, and remarked on the clear signs of ‘adulting’ across the piece.
The cloud-native world is growing up and filling out, breaking through its youthful vim even as it delivers on its promise. There’s lots of work still to do, and potholes on the road ahead. But by taking a multi-platform engineering perspective, organisations will be putting the building blocks in place for the future.
05-09 – Delivering on the Goals of Sovereign Cloud
Delivering on the Goals of Sovereign Cloud
Every now and then, an emerging technology trend thrusts itself onto the boardroom agenda and becomes a strategic issue seemingly out of nowhere. So it is with sovereign cloud: when we speak to digital leaders at enterprise organizations, they name it as one of their biggest headaches.
Sovereign cloud has emerged from decades-old data protection legislation, centering on how personal and other sensitive data is managed and processed. While such regulations are rooted in the last millennium, over recent years, they have broadened in scope and impact, responding to the significantly increased potential for data misuse and other security risks from the proliferation of cloud adoption.
Personal data regulations like the pan-EU GDPR, national laws across 36 African countries, and statewide regulations such as California’s CCPA have been supplemented with industrial data and cloud security legislation, for example, the US Cloud Act, or the Cloud Infrastructure Services Providers in Europe (CISPE) code of conduct.
The resulting complex web creates tangible challenges for international business, as organizations must establish which regional and local laws have to be upheld to do business. While the differences between regulations may be small, for example, on what information can be stored, the task of working out where data can be situated and how it needs to be managed falls on technical leadership.
A further exacerbating factor is the more dynamic geopolitical landscape of recent years. Changes to governments and increased global conflicts have undermined trust in international data treaties. Whereas before, it might have been acceptable to host certain data in another country, enterprises now want to store data locally as a buffer against future changes.
These and other factors have driven a comprehensive and urgent need to manage locally sourced data in a way that satisfies local laws and creates confidence that future risks will be mitigated. Straightforward in theory, perhaps, but any response almost immediately hits reality. Organizations already store vast quantities of data across multiple cloud and software-as-a-service providers, which offer only partial visibility and transparency on how and where that data is stored.
Meanwhile, of course, enterprises see data as a major pillar of innovation. Simply repatriating (or indeed deleting) data is not always an option: this could run counter to business goals, even if it were practical. Instead, data needs to be stored, managed, and processed by cloud and software service providers in such a way that addresses the challenges described here.
The term ‘sovereign cloud’ has emerged to describe this response. However, technology vendors, cloud providers, legislative unions, governments, and standards bodies are grappling with the shape of the problem space, which continues to expand to data types other than customer data – for example, operational telemetry and security information.
Business leaders face challenges today and need answers now; they cannot wait for sovereign cloud frameworks and solutions to be defined. So how do they address the dilemma between innovation and control in their business strategies, protecting information as a strategic asset and remaining compliant while still being able to innovate? And what do they need to consider in practical terms?
Sovereignty Challenges and Opportunities
To answer this question, we can consider what sovereign cloud means in real terms. We would first advise separating:
- The principle of data sovereignty, also referred to as digital sovereignty, which impacts how various platforms and data types are stored, kept compliant, accessed, and securely managed.
- Data-related problems facing the organization, including challenges created by going cloud-first before sovereign cloud rose up the agenda.
Considering the principle first, we can identify multiple generic challenges to a business; it helps to frame these as a review of an organization’s data assets. While many organizations do not have a good handle on how their data is stored, classified, and managed, it is never too late to review what good should look like in terms of a new or revised enterprise architecture.
An organization’s business models will largely dictate its data needs. For example, enterprises work with suppliers and deliver goods and services to customers, with multiple stakeholders (partners, regulated governmental organizations) involved en route. Specific personal data (e.g., healthcare data or governmental metadata, which must be stored on sovereign soil) may be subject to more rigorous controls than other data types.
In reviewing business models and data requirements, the additional step (brought by data sovereignty) is to consider local jurisdictional needs, treating each as a first-class citizen and as table stakes for doing business.
Geographic or localized policies and constraints on data storage, processing, and management can contribute to an overall data sovereignty picture. This will reveal multiple business-related challenges that go to the heart of the sovereignty dilemma. Consider how users, customers, and businesses want access to their own data wherever they are, which will dictate the kinds of security controls required: it is convenient to use online banking when on holiday, for example, and global companies need to view information about clients wherever they are situated.
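To make this concrete, the kind of policy logic implied here can be expressed as a small rule set. Below is a minimal Python sketch of a jurisdiction-aware residency and access check; the policy map, data classes, and region names are illustrative assumptions rather than a reading of any particular regulation.

```python
from dataclasses import dataclass

# Illustrative policy map: where each data class may be stored, and from
# which jurisdictions it may be accessed remotely. These rules are
# assumptions for the sketch, not legal guidance.
POLICY = {
    "health_record": {"store_in": {"EU"}, "remote_access_from": set()},
    "customer_pii":  {"store_in": {"EU", "UK"}, "remote_access_from": {"EU", "UK"}},
    "telemetry":     {"store_in": {"EU", "UK", "US"}, "remote_access_from": {"EU", "UK", "US"}},
}

@dataclass
class DataAsset:
    name: str
    data_class: str
    stored_in: str  # region where the data currently resides

def residency_ok(asset: DataAsset) -> bool:
    """Is the asset stored in a permitted region for its class?"""
    return asset.stored_in in POLICY[asset.data_class]["store_in"]

def access_ok(asset: DataAsset, requester_region: str) -> bool:
    """May a user in requester_region access this asset?"""
    rule = POLICY[asset.data_class]
    return (requester_region == asset.stored_in
            or requester_region in rule["remote_access_from"])

if __name__ == "__main__":
    asset = DataAsset("patient-scan-001", "health_record", stored_in="EU")
    print(residency_ok(asset))     # True: stored on permitted soil
    print(access_ok(asset, "US"))  # False: no remote access allowed
    print(access_ok(asset, "EU"))  # True: local access
```

Real-world rules are far messier, but even a simple model like this turns sovereignty exposure into something that can be queried rather than guessed at.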
If it doesn’t already, the role of the chief digital officer can expand to cover this, so the organization can answer the question, “What is our sovereignty exposure?” Having collated, debated, and understood what data sovereignty should look like for the organization (the first challenge), attention can turn to how data is currently stored, processed, and managed.
We can consider this in terms of the following:
- The differences and gaps between principle and practice
- What the architecture must look like in practice across cloud providers and on-premise
- How data can be managed according to these needs, based on existing and planned solutions
- What operational considerations should be taken into account
This, too, will reveal several challenges. The foremost issue is that gaps will seem onerous to the point of appearing insurmountable. Simply put, “We’re supposed to manage data for clients in countries X, Y, and Z in a certain way, and we’re not—but we can’t see how we can.”
Rest assured that answers do exist, and multiple benefits come from addressing the challenges. A smart sovereign cloud strategy puts the organization back in control of its data assets, reducing cloud provider lock-in and, indeed, unlocking innovation. In addition, addressing this now will put many enterprises ahead of the competition. There may also be reduced operational costs, as the well-managed data architecture required to address sovereignty is less expensive to run than an unmanaged one.
Alongside the benefits, it is worth reporting on the costs of inaction. Addressing sovereignty is a legal requirement, not an option: the risks of inaction can be measured in terms of jurisdictional fines and restrictions on doing business. Meanwhile, the costs of adopting a piecemeal approach are likely greater than those of thinking strategically, due to initial duplication of effort and the subsequent need to align multiple smaller strategies and deployments.
The Solution Approach
So, how to address these challenges and define a way forward? We have already highlighted the need to conduct a strategic review of the organization’s business models, data architectures, and current classifications, incorporating sovereignty. This review will highlight discrepancies in existing architectures and practices and should also offer a set of strategic priorities – these form the backbone of the cloud sovereignty strategy, with benefits set out to support the business case.
With this in place, the organization can move from strategy to action. While the cloud sovereignty strategy addresses a business problem, the response will initially be delivered through technology solutions. As we have already said, there isn’t a one-size-fits-all solution. Providers are still building their capabilities, and in the hybrid/multi-cloud world, enterprises need to look for capabilities that run across providers and jurisdictions.
However, a clear need exists for platforms that offer data sovereignty, residency, and access, including the ability to, for example:
- Classify data by type, importance, policy, and locality
- Apply controls to data centrally and locally
- Customize platforms and associated data policies
- Deliver reports necessary to show compliance
- Move data and workloads between providers without lock-in
- Integrate with other reporting capabilities, e.g., ESG
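As a rough illustration of the first and fourth capabilities in the list above (classification and compliance reporting), the Python sketch below tags records with a type and jurisdiction, then summarizes where they fall foul of an assumed “personal data stays in its own jurisdiction” rule. The labels and the rule are placeholders for illustration, not a description of what any platform actually does.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Record:
    record_id: str
    data_type: str      # e.g., "personal", "operational", "security"
    jurisdiction: str   # where the data subject or source is located
    region: str         # where the record is actually stored

# Assumed rule: personal data must stay in its own jurisdiction.
def is_compliant(rec: Record) -> bool:
    if rec.data_type == "personal":
        return rec.region == rec.jurisdiction
    return True

def compliance_report(records: list[Record]) -> dict:
    """Summarize sovereignty exposure by jurisdiction."""
    exposure = Counter(
        rec.jurisdiction for rec in records if not is_compliant(rec)
    )
    return {
        "total_records": len(records),
        "non_compliant": sum(exposure.values()),
        "by_jurisdiction": dict(exposure),
    }

if __name__ == "__main__":
    sample = [
        Record("r1", "personal", "DE", "DE"),
        Record("r2", "personal", "FR", "US"),   # personal data stored abroad
        Record("r3", "operational", "FR", "US"),
    ]
    print(compliance_report(sample))
    # {'total_records': 3, 'non_compliant': 1, 'by_jurisdiction': {'FR': 1}}
```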
Given how this space is evolving, technology leadership should look at existing features and roadmaps, whether within the platforms offered by hyperscalers, as cloud-agnostic software stacks, or as third-party capabilities. You will likely need a combination of all three, ensuring they meet your own goals, such as jurisdictional control, local deployment, data portability, and overall cost of ownership.
By assessing the market landscape this way, decision-makers can identify which parts of the strategy can be addressed with existing providers versus where the organization needs to augment or change its provision. This assessment should account for the costs and overheads of migrating application and data architectures that are already deployed and in use.
With this information in place, you can create a delivery plan. Be in no doubt that the impact of the cloud sovereignty strategy will be felt both broadly and deeply. You can expect to see:
- Business process change to incorporate sovereignty aspects
- Organizational change, for example, in-country technical staff
- Technical deployments, including instantiation of data stores and backup systems
- New automations, for example, around data classification and policy management
- Operational improvements, including keeping metadata and telemetry in-country
- Skills requirements across both technical and business teams
- Supplier management adaptations to work with local partners
Cloud sovereignty touches everybody in the organization, particularly those in a global role. So, fundamentally, delivering on the strategy requires a change management approach with the usual elements of communication and engagement.
Conclusion
As we have seen in this short exposé, sovereign cloud must be addressed both strategically and holistically, as it impacts the entire organization. Unfortunately, the technology industry is still playing catch-up on solutions provision; nonetheless, enterprises cannot afford to wait, as this leaves them legislatively exposed.
Of course, a cloud sovereignty strategy done correctly cannot be put together overnight. However, there is no time like the present. Sovereignty is a necessity, not an option, and the market landscape will become more clearly mapped out in the coming months.
Equally, after attempts to make the journey to a single cloud provider, most organizations today still operate a hybrid, multicloud environment. By aligning data sovereignty with this broader cloud strategy, organizations can deliver the architectures they need to drive innovation, whether elements of these run with local providers, hyperscalers, or, indeed, on-premises.
Fundamentally, a well-controlled data architecture becomes an asset to the business rather than a liability. Let nobody underestimate the scale of the challenge, but unlocking sovereignty also provides the keys to the digital enterprise.
05-25 – GigaOm Research Bulletin #003
GigaOm Research Bulletin #003

Welcome to GigaOm’s research bulletin for May 2023

Hi and welcome back. We’re delighted to share our July-September syndicated research schedule for Radar and Sonar reports. Thank you for your input on potential resource conflicts! Keep watching this space as we update to the end of the year and beyond, and see below for specifics.
Jon joined our illustrious CTO Howard Holton in our inaugural episode of The Good, The Bad, and The Techy podcast, this one covering Low Code, RPA and all things platform. Do tune in!
Research Highlights
See below for our most recent reports, blogs and articles, current press quotes, and where to meet our analysts in the next few months.

Trending: Attack Surface Management, released in February, is our top Radar read right now. “An organization’s attack surface is dynamic; it can change daily, if not more often, and tracking these changes in an automated fashion is crucial for an ASM solution,” says author Chris Ray.
We are currently taking briefings on: Enterprise & Cloud Native Data Storage, Service Mesh, CSPM, Cloud Native Data Protection, Ransomware Solutions, Data Loss Prevention, and API Security.
Warming up are: Data Observability, DDI (DNS, DHCP and IPAM), and Vulnerability Management.
Recent Reports
We’ve released 16 reports in the period since the last bulletin.
In Analytics and AI, we have released reports on Data Pipelines and Streaming Data Platforms.
For Cloud Infrastructure and Operations, we have Cloud Resource Optimization and IT Service Management (ITSM), and in Storage, we have covered Object Storage for both High Performance and Enterprise.
In the Security domain, we have released reports on Deception Technology, Extended Detection & Response (XDR), Zero-Trust Network Access (ZTNA), Operational Technology (OT) Security and Identity as a Service (IDaaS). And in Networking, we have covered Cloud & Managed Service Providers (CSPs & MSPs), Large Enterprises & SMBs, Network Service Providers (NSPs) and Network Observability.
And in Software and Applications, we have a report on Robotic Process Automation.
Blogs and Articles
We’ve published several blogs including:
Thinking Strategically about Software Bills of Materials (SBOMs) – Jon Collins looks at where SBOMs have sprung from.
CXO Insight: Delivering on Edge Infrastructure – Enrico Signoretti delivers another insight into the challenges and variables that come with deploying Edge.
Not Your Father’s Primary Storage – Max Mortillaro dives into the new era of storage.
Andrew Green details how to differentiate between Edge, Cloud & 5G.
… and finally, Jon provides his take on Touchpoints, Coalescence and Multi-Platform Engineering following his visit to KubeCon in Amsterdam.
Press Quotes
GigaOm analysts are quoted in a variety of publications.
IFS Platforms | Forbes – Jon Collins
Cybersecurity | Tanium – Jon Collins
Unstructured Data Management | Blocks & Files – a review of Arjan Timmerman & Max Mortillaro’s Radar report.
If you need comment for your publication, let us know.
Where To Meet GigaOm Analysts
In the near future, you can expect to see our analysts at Infosecurity in London, and Black Hat USA. Do let us know if you want to fix a meet.
For news and updates, add analystconnect@gigaom.com to your lists, and get in touch with any questions.
All the best and speak soon!
Jon Collins, VP of Research
Claire Hale, Engagement Manager
P.S. Here is last month’s bulletin if you missed it.

June 2023
06-21 – The Fundamentals of Buying a Security Solution Today
The Fundamentals of Buying a Security Solution Today
Cybersecurity incidents aren’t something that happens to other people. Organizations today accept that it’s not about whether a breach will occur but when. But while board members have moved from disinterest in security to acceptance that it must be addressed, that doesn’t mean the cybersecurity problem is solved. The landscape continues to change, and new threats emerge daily. Plus, and here’s the big one, the cloud redefines the threat surface, shifting from a “keep the bad guys out” philosophy to one that must provide end-to-end protection across a massively distributed architecture.
Decision-makers planning to build or migrate workloads to the cloud require a range of (potentially new) security capabilities, operating in tandem to maximize protection and enable early detection and response to security threats. Next-generation cloud firewalls and software-defined wide area networks (SD-WANs) need to be defined, deployed, and managed alongside existing solutions such as virtual private networking (VPN) and endpoint protection. So, where to start and what to consider?
Three alternatives exist: rely on the security capabilities built into existing cloud platforms; buy direct from the hyperscalers (AWS, Azure, or GCP) and take up what these platforms offer in terms of value-add or “premium” features; or bring in a third-party solution designed for the job. Many organizations look at the second (value-add) approach as a way of upgrading from the lowest common denominator. These offerings are often considered the easy button, as they can be deployed at the flick of a switch (or the check of a box).
Assessing Need
But do they provide the security that you need? As a company that evaluates products according to defined criteria, we would advocate for starting with a list of requirements rather than just assuming hyperscaler capabilities will deliver. While they cover a good proportion of what you want, remember that security breaches happen by exploiting the gaps in your architecture. So it’s up to you and your organization to review these gaps and mitigate the risks they cause. Assumptions are dangerous at the best of times and even more so when it comes to cybersecurity.
So, yes, review your use cases, check the problem considerations on your desk, and put together a checklist for what security solutions need to deliver. For example, do you need the application awareness offered by next-generation firewalls or the selective routing to different SaaS services according to role? Do you need inbound traffic scanning or botnet detection? Does your staff need mobile access to services, and if so, what endpoint capabilities? You should also consider future proofing—using security tools with the features you need today and anticipate needing in the future. For example, many firms plan to move to a zero-trust approach to application access, so they should consider security solutions that offer zero-trust enforcement. No two organizations are the same, so answering these questions will give you a solid starting point for decisions.
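One way to make such a checklist actionable is to turn it into a simple weighted scoring exercise during the proof-of-concept stage. The Python sketch below is a generic illustration; the criteria, weights, vendor names, and scores are hypothetical placeholders rather than recommendations.

```python
# Hypothetical requirements checklist: criterion -> weight (importance).
REQUIREMENTS = {
    "application_awareness": 5,
    "inbound_traffic_scanning": 4,
    "botnet_detection": 3,
    "zero_trust_enforcement": 5,
    "mobile_endpoint_support": 2,
}

# Hypothetical PoC findings: 0 = missing, 1 = partial, 2 = full support.
POC_RESULTS = {
    "Vendor A": {"application_awareness": 2, "inbound_traffic_scanning": 2,
                 "botnet_detection": 1, "zero_trust_enforcement": 0,
                 "mobile_endpoint_support": 2},
    "Vendor B": {"application_awareness": 1, "inbound_traffic_scanning": 2,
                 "botnet_detection": 2, "zero_trust_enforcement": 2,
                 "mobile_endpoint_support": 1},
}

def score(results: dict) -> float:
    """Weighted score as a percentage of the maximum possible."""
    max_score = sum(2 * weight for weight in REQUIREMENTS.values())
    actual = sum(results.get(c, 0) * w for c, w in REQUIREMENTS.items())
    return round(100 * actual / max_score, 1)

for vendor, results in POC_RESULTS.items():
    print(vendor, score(results))
```

The numbers matter less than the discipline: the same checklist used to shortlist vendors becomes the yardstick for the PoC and, later, for holding the chosen vendor to account.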
The next issue many may already be familiar with is skills and staffing. A big challenge for organizations today—which I, too, have faced as a buyer—is hiring for security. I simply can’t hire unless I’m in Silicon Valley because nobody is open for work. While this may look like a more temporary challenge, it has a bearing on the security solution and approach I may take (again, security is about risk mitigation: a lack of expertise is a significant source of exploitable risk).
For example, suppose I can assemble a team (or have one already) with skills associated with a particular vendor; I should stick with that vendor and deploy its tooling across my cloud workloads and data centers rather than bringing in yet another one-off security tool. Alternatively, I could outsource my security capabilities to a specialist third party and work with the tooling they recommend.
Note that the checklist of capabilities your business needs remains yours. The process starts with defining the problem, then the criteria, and how to test against them. What are your mission-critical workloads in the cloud, and what protection do they need? How is the threat landscape evolving, and what do potential solution vendors need on their roadmaps to mitigate against new threats? If you’re unable to get this far, it is a sign that you lack sufficient expertise. As a first step, talk to your existing vendors about what problem you’re looking to solve, given your architecture.
In terms of options, you can start with the “easy button”—you don’t want to rule out what may be the most straightforward solution. Even if you don’t go for what the hyperscalers provide, you’ll still be looking for a first-party cloud offering, delivered as a service and operating seamlessly with your provider platform. Whatever you have on your shortlist, the next step is to run a proof of concept (PoC). To repeat, you create unnecessary risk if you just assume a solution delivers. Plus, running a PoC enables you to assess real costs across the service, staffing, configuration, and operations.
Elements of Trust
Closing off these considerations, you should think about who you trust, not only to deliver but to partner with along the next stages of your journey. As a CISO, I don’t buy features. If you managed to get into my office, I assume that you meet the feature/function requirements of my engineering teams. By the time we speak directly, the question is: can I trust you? While I may have a picture of costs, I don’t want to reduce this to a marketplace decision. More importantly, if things go wrong, if I have a zero-day breach or a bug that causes the service to go down, will you be there, working alongside my engineers to solve it? I’ve had that experience with vendors, including Fortinet, and that builds a lot of trust.
Elements of trust can be configured and encapsulated in policies, but trust also includes credibility and reputation, based on partner knowledge, analyst reports and, even more so, personal experience. A level of trust might be offered to a new provider, but full trust requires time, and it is tough to get back if it’s lost. Trust is a fundamental thing we’ve had since living in caves.
A final thought to remember is that you will constantly be iterating. The technology landscape isn’t done evolving, creating new threats that must be continuously considered and addressed. That checklist of requirements you created? You’ll need to keep that under constant review, using it to hold your vendors to account for their existing features and product roadmaps. It’s not your job to accept the status quo; rather, you’ll need to adopt a proactive approach to how you partner, with an open, collaborative, honest vendor dialog based on trust. Only on this path forward can you continue protecting your data assets, your stakeholders, and your business.
July 2023
07-17 – Can Open Source save AI?
Can Open Source save AI?
Do excuse the clickbait headline, but isn’t everything we write these days done in order to drive some algorithm, somewhere? As it happens, I did just attend a very interesting event; and it was, topically enough, about open source and AI. But am I writing about it just because it was interesting, and I wanted to share some thoughts? Or is it all about the SEO, plus some behavioural psychology tricks I need to apply to guarantee measurable clicks, thus pushing it up the rankings of social sites and indeed, looking good on internal, aggregated dashboards? It’s like our robot overlords have already won, and all we have left to do is welcome them.
But I digress. Returning to our sheep (as they say in French, and I will return to the question), there was much to learn from the launch of OpenUK’s latest research on the economic impact of open source software (OSS) on UK industry, and more broadly, its GVA – Gross Value Add. OpenUK is a relatively recent national industry body, formed directly to “move open technologies – not only OSS but open data, open standards and open innovation – onto the UK radar,” according to its CEO and opening speaker, Amanda Brock.
OpenUK’s public purpose is to develop UK leadership and global collaboration in open technology, which essentially means stimulating the symbiosis between UK organisations and open technology. Power to OpenUK’s elbow, that’s what I say — I recommend interested parties take a look at the research (led by chief research officer, Dr Jennifer Barth) and act on its findings. In a nutshell, OSS brings over £13 billion of value to the UK, some 27% of the UK tech sector’s contribution, against planned investment of £327 million. By my reckoning, that’s roughly a 41x planned return on investment.
I know it’s not as simple as that, in that the spend is into a global pool of developers, innovators, providers and others. But nonetheless — and Amanda made this point — many of the solutions built on top of OSS end up being US-based, including UK-founded companies such as Weaveworks (for GitOps) and Snyk (development security). UK investors are traditionally more reticent than those in the Bay Area, and as a result need a clearer understanding of what OSS brings. And conversely, OSS creates more opportunities for skills development and the creation of new enterprises, furthering the goals of our multi-island nation on the global stage.
The Jeff Goldblum-sized fly in the ointment is AI, which has come out of seemingly nowhere to be this year’s hot topic. Not quite true — we’ve heard a lot about AI in recent times — but it did look like it was going the same way as 3D televisions, before Midjourney and ChatGPT came along. Not ironically, this landed right in the middle of both the OpenUK research cycle (which had to spawn a second research report mid-way) and UK legislation on AI (which has had to be rewritten in flight to take large-scale models into account).
AI is a significant area for the open technology world, first in terms of software (the most used AI platform, TensorFlow, is open source), but then also for data. Wikipedia was founded on open principles, both using open source and releasing its open data on an open content platform, so it was no coincidence that its founder Jimmy Wales was in attendance. The recent developments in generative AI directly relate to the availability of open data sources — “50% of ChatGPT input is Wikipedia,” says Jimmy, who is cool with this. “That’s what it’s for.”
So, to the question, can Openness save AI? The answer is no, not by itself, but it can go some way to providing the tools we need to deliver it, in a way that will benefit society in general (and therefore the UK in particular), moving the technology into the hands of the many. One reason is that, like OSS, the AI genie is out of the bottle. “We can’t assume there are six companies we can regulate,” says Jimmy, pointing to the millions of hobbyist developers that are already playing with Midjourney via Discord, or writing their own versions of generative AI software. AI can learn from the OSS world, the power of individual responsibility — we can’t blame the tools, but we can legislate against what people are creating, he suggests. “You could always use Photoshop to create an image; it just wouldn’t look very real – it’s now going to look more real.”
That’s not to say that we should do without general legislation at a corporate and national level, but this needs to be aimed at the consequences of AI, rather than its inevitable, more general use. “The one thing that’s inevitable is that governments are going to regulate – if that’s too top-down, it’s going to be too hard. But the opposite approach, individual responsibility with the right level of governance, bottom-up and principles-based, that’s the better approach,” says Amanda. As highlighted by Chris Yiu, Director of Public Policy at Meta, this goes with the transparency and openness that are (the clue’s in the name) mainstays of OSS. If the AI genie has spawned lots of little genies, we can use them as a network of peers to create a more solid result.
I can agree, as long as the responsibility and openness is applied at all stages of the delivery cycle — there’s a lot to unpack about “the right level of governance” across data collection and management, cybersecurity and access management, process best practices and jurisdictional questions (what’s legal in one country may not be in another, and may be unethical in both). For example, if I could use data from the Strava open API to build a picture of people likely to suffer medical issues and then I publish it, who would be responsible? Or if I created the code and left it lying around?
It does strike me that post-Brexit Britain is in a unique position to set a different agenda from either the EU, which is looking at top-down regulation, or the US, which has a habit of playing a bit faster and looser with privacy than we might like. At which point, organisations such as OpenUK might find themselves with their work cut out — it’s one thing to advocate for more acceptance of OSS, but quite another structurally to find yourselves as the most important people in a newly created, yet critical space. That’s a good problem to have, but not one to be taken lightly.
We have time to get this right. Nobody in the room felt AI was a runaway train: even though examples exist of AI-driven challenges, they remain the exception rather than the norm (said Chris Yiu, “We are a long way off anything approaching super-intelligence.”) Nonetheless, we already need independent organisations who get this stuff to advise on the best way forward, working with policymakers. Perhaps open source models, and the open method of creating new ones, can indeed counter the worst potential vagaries of AI; and right now, we need all the help we can get as we work out a new understanding of the impact of the information age, both in the UK and beyond.
At which point, we can keep our robots where they need to be, to a sigh of relief for even the most fearful of the AI-embracing future.
07-18 – Reviewing Architectural and Data Aspects of Microservices Applications
Reviewing Architectural and Data Aspects of Microservices Applications
Microservices architectures have reached the mainstream. A 2016 Cloud Native Computing Foundation (CNCF) survey found that 23% of respondents had deployed containers in production; by 2020, this figure had jumped to 92%. In last year’s survey, attention turned to the proportion of applications using containers, and 44% of respondents reported that containers were used for most or all production applications.
To date, much of the focus around microservices has been on the applications being built, rather than on broader aspects like data management. To navigate a path toward better adoption, let’s start by exploring the terminology behind microservices. We can then consider their challenges and how to respond from an architecture and data management perspective.
The Terminology Behind Microservices
Microservices are built on the twin pillars of architectural principle and web-based practice. We can trace these pillars back through service-oriented architectures (SOA) and object orientation, each serving as a foundation for scalable, adaptable, and resilient applications.
The advent of the web, together with protocols such as HTTP and SOAP, catalyzed the creation of highly distributed applications that use the internet as a communications backbone and cloud providers as a processing platform. It was from this foundation that microservices emerged as the basis for cloud-native applications.
The final pieces of the microservices puzzle were put in place when engineering teams rallied around the Docker container format, giving microservices a standard shape and interface. Adoption of Kubernetes as an approach to container deployment and orchestration soon followed.
Today, container-based, Kubernetes-orchestrated microservices are the standard for building cloud-native applications, and adoption has been swift. Just three years ago, 20% of enterprise organizations had deployed Kubernetes-based microservices. At the time of writing, a majority of enterprises have deployed Kubernetes.
The advantages of microservices are legion, not only because of the modular approach to application building, but because so many elements of the stack are available. A broad range of data management platforms, application libraries, security capabilities, user interfaces, development tools, and operational and third-party capabilities are built on (or support) microservices.
As a result, developers need only care about the core of their application. Teams can build and deploy new functionality much faster, responding to the urgency that organizations face around digital transformation. The drive to transform has also fanned the cloud-native flames, creating yet more impetus toward microservices.
Data Challenges of Microservices Adoption
Certain aspects of microservices create new opportunities and reset the context for application builders:
Microservices design best practice: Careful design is needed to ensure each microservice is self-contained and minimizes dependencies with others. Otherwise, microservices can end up too large and monolithic or too small and complex.
Stateless and stateful communication: Microservices-based applications work best when communication of state is minimized—that is, one microservice knows very little about the condition of another. State must be stored somewhere—directly or derived from other data.
Third-party API and library management: A huge advantage of the microservices model is that applications can build on third-party libraries, stacks, and services. These integrations are generally enabled by an application programming interface (API).
Use of existing applications and data stores: A microservices application may depend on an existing application or data store, which cannot be changed for regulatory or cost reasons. Even if accessible via API, it may not have been designed for a distributed architecture.
Rate of change: Microservices applications tend to be developed according to Agile and DevOps approaches, which set expectations for fast results and continuous delivery of new features and updates.
Each of these brings its share of challenges for engineering teams, along with data management ramifications. These include:
Performance, Scalability, and Availability: As a microservices application becomes more complex, managing the network of states across it becomes increasingly difficult, creating communication overheads. The relationship between microservices and how data is stored and managed can become the greatest bottleneck due to data distribution and synchronization challenges across the architecture. Wait states at API gateways can also reduce performance, impacting scalability and causing availability risks. Legacy data stores may lack cloud-native features required for microservices, creating additional overheads in terms of interfacing.
Maintainability and Fragility: The inherent complexity of microservices applications can make them harder to maintain, particularly if microservices are too large, too small, or if data pathways are sub-optimal. Maintenance overheads can conflict with DevOps approaches; simply put, the troubleshooting and resolution of issues can slow down development and creation of new features across data management and other parts of the architecture.
Manageability and Security: The above sets of challenges can manifest in terms of operational overheads. Day-two operations for microservices applications require a detailed grasp of the application architecture, what is running where, and the sources of issues. Particular issues can arise in the relationship with runtime APIs and legacy data stores. Meanwhile, application complexity and use of third-party libraries expands the attack surface for the application, increasing security risk.
Addressing Challenges – Review and Improve
Here are a few things to consider to ensure you start building microservices applications the right way. First, if the problem is architectural, think about the solution architecturally by taking data management and other aspects into account. Second, understand that no organization operates in a greenfield environment.
A strength of the microservices approach is its notion of right-sizing, or what we might call the Goldilocks principle; microservices can be too big or too small, but they can also be just right. This means they operate standalone, contain the right elements to function, and are developed and maintained by domain experts.
Usefully, you can apply this principle to a new design or an existing application. While it is not straightforward to get a microservices architecture right, analysis of the problem space and creation of an ideal microservices model can take place relatively quickly. This model can then be mapped to the existing architecture as a review process.
This exercise identifies areas of weakness and offers opportunities to resolve performance bottlenecks and other issues—not least of which is how data is stored and managed. It may be that a part of the application needs to be refactored, which is a decision for engineering management. For example, you could look toward architectural patterns, such as caching and aggregators for data movement, strangler and façade for legacy systems, and Command and Query Responsibility Segregation (CQRS) for scalability, performance, and security.
We give an example of CQRS below, representing a payment application and using Redis as the data store. In this example, two microservices are deployed: one manages payment approvals and makes updates; the other enables queries on payment history based on a cached version of the data store such that performance impact is minimized. The Redis Data Integration capability tracks updates and updates the cached version in real time, further reducing the load on the microservices.
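For readers who want to see the shape of the pattern in code, here is a minimal Python sketch of the command/query split using the redis-py client. The key names and payment fields are assumptions for illustration, and the direct cache update stands in for the Redis Data Integration pipeline described above.

```python
import time
import redis  # redis-py client

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# --- Command side: the payment-approval microservice writes here ---
def approve_payment(payment_id: str, customer: str, amount: float) -> None:
    """Record an approved payment (the write model)."""
    r.hset(f"payment:{payment_id}", mapping={
        "customer": customer,
        "amount": amount,
        "approved_at": time.time(),
    })
    # In the example above, Redis Data Integration would propagate this
    # change to the read model; here we update the cached index directly.
    r.zadd(f"history:{customer}", {payment_id: time.time()})

# --- Query side: the history microservice reads only the cached view ---
def payment_history(customer: str, limit: int = 10) -> list[dict]:
    """Return recent payments for a customer (the read model)."""
    ids = r.zrevrange(f"history:{customer}", 0, limit - 1)
    return [r.hgetall(f"payment:{pid}") for pid in ids]

if __name__ == "__main__":
    approve_payment("p-100", "alice", 42.50)
    approve_payment("p-101", "alice", 17.99)
    print(payment_history("alice"))
```

The point is the separation of responsibilities: approvals never wait on history queries, and the read path can be cached and scaled independently of the write path.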

A second facet of the architecture review is to consider the data management, API gateways, third-party services, development, and operational tooling already in use. As the example shows, a data management platform such as Redis may already have features that can support the application’s architectural needs. The same point applies across other platforms and tooling. We advise working with existing suppliers and reviewing their solutions to understand how to meet the needs of the application under review without having to deploy additional capabilities.
Conclusion
In summary, adopting microservices is not about rewriting your application in its entirety. By reviewing your architecture alongside existing third-party capabilities, you can establish a roadmap for the application that addresses existing challenges around scaling, performance, security, and more, preventing these aspects from impairing application effectiveness in the future.
Ultimately, no organization can assume that microservices will “just work.” However, the modularity of microservices means that it is never too late to start applying architectural best practice principles to existing applications and benefit from improved scalability, resiliency, and maintainability as a result. Plus, you may find out that you can do so much more with what you already have.
August 2023
08-10 – What’s the future for WebAssembly?
What’s the future for WebAssembly?
I was fortunate to sit down with Matt Butcher, CEO of Fermyon, and discuss all things application infrastructure, cloud native architectures, serverless, containers and all that.
Jon: Okay Matt, good to speak to you today. I’ve been fascinated by the WebAssembly phenomenon and how it seems to be still on the periphery even as it looks like a pretty core way of delivering applications. We can dig into that dichotomy, but first, let’s learn a bit more about you – what’s the Matt Butcher origin story, as far as technology is concerned?
Matt: It started when I got involved in cloud computing at HP, back when the cloud unit formed in the early 2010s. Once I understood what was going on, I saw it fundamentally changed the assumptions about how we build and operate data centers. I fell hook, line and sinker for it. “This is what I want to do for the rest of my career!”
I finagled my way into the OpenStack development side of the organization and ran a couple of projects there, including building a PaaS on top of OpenStack – that got everyone enthusiastic. However, it started becoming evident that HP was not going to make it into the top three public clouds. I got discouraged and moved out to Boulder to join an IoT startup, Revolve.
After a year, we were acquired and rolled into the Nest division inside Google. Eventually, I missed startup life, so I joined a company called Deis, which was also building a PaaS. Finally, I thought, I would get a shot at finishing the PaaS that I had started at HP – there were some people there I had worked with at HP!
We were going to build a container-based PaaS based on Docker containers, which were clearly on the ascent at that point, but hadn’t come anywhere near their pinnacle. Six months in, Google released Kubernetes 1.0, and I thought, “Oh, I know how this thing works; we need to look at building the PaaS on top of Kubernetes.” So, we re-platformed onto Kubernetes.
Around the same time, Brendan Burns (who co-created Kubernetes) left Google and went to Microsoft to build a world-class Kubernetes team. He just acquired Deis, all of us. Half of Deis went and built AKS, which is their hosted Kubernetes offering.
For my team, Brendan said, “Go talk to customers, to internal teams. Find out what things you can build, and build them.” It felt like the best job at Microsoft. Part of that job was to travel out to customers – big stores, real estate companies, small businesses and so on. Another part was to talk to Microsoft teams – Hololens, .Net, Azure compute, to collect information about what they wanted, and build stuff to match that.
Along the way, we started to collect the list of things that we couldn’t figure out how to solve with virtual machines or containers. One of the most profound ones was the whole “scale to zero” problem. This is where you’re running a ton of copies of things, a ton of replicas of these services, for two reasons – to handle peak load when it comes in, and to handle outages when they happen.
We are always over-provisioning, planning for the max capacity. That is hard on the customer because they’re paying for processor resources that are essentially sitting idle. It’s also hard on the compute team, which is continually racking more servers, largely to sit idle in the data center. It’s frustrating for the compute team to say, we’re at 50% utilization on servers, but we still have to rack them as quickly as we can go.
Okay, this gets us to the problem statement – “scale to zero” – is this the nub of the matter? And you’ve pretty much nailed a TCO analysis of why current models aren’t working so well – 50% utilization means double the infrastructure cost and a significant increase in ops costs as well, even if it’s cloud-based.
Yeah, we took a major challenge from that. We tried to solve that with containers, but we couldn’t figure out how to scale down and back up fast enough. Scaling down is easy with containers, right? The traffic’s dropped and the system looks fine; let’s scale down. But scaling back up takes a dozen or so seconds. You end up with lag, which bubbles all the way up to the user.
So we tried it with VMs, with the same kind of result. We tried microkernels, even unikernels, but we were not solving the problem. We realized that as serverless platforms continue to evolve, the fundamental compute layer can’t support them. We’re doing a lot of contortions to make virtual machines and containers work for serverless.
For example, the lag time on Lambda is about 200ms for smaller functions, then up to a second and a half for larger functions. Meanwhile, the architecture behind Azure functions is that it prewarms the VM, and then it just sits there waiting, and then in the last second, it drops on the workload and executes it and then tears down the VM and pops another one on the end of the queue. That’s why functions are expensive.
We concluded that if VMs are the heavyweight workforce of the cloud, and containers are the middleweight cloud engine, we’ve never considered a third kind of cloud computing, designed to be very fast to start up and shut down and to scale up and back. So we thought, let’s research that. Let’s throw out that it must do the same stuff as containers or VMs. We set our internal goal as 100ms – according to research, that’s how long a user will wait.
Lambda was designed more for when you don’t know when you want to run something, but it’s going to be pretty big when you do. It’s for that big, bulky, sporadic use case. But if you take away the lag time, then you open up another bunch of use cases. In the IoT space, for example, you can work down closer and closer to the edge in terms of just responding to an alert rather than responding to a stream.
Absolutely, and this is when we turned to WebAssembly. For most of the top 20 languages, you can compile to it. We figured out how to ship the WebAssembly code directly into a service and have it function like a Lambda function, except the time to start it up. To get from zero to the execution of the first user instruction is under a millisecond. That means instant from the perspective of the user.
On top of that, the architecture that we built is designed with that model in mind. You can run WebAssembly in a multi-tenant mode, just like you could virtual machines on hypervisor or containers on Kubernetes. It’s actually a little more secure than the container ecosystem.
We realized if you take a typical extra large node in AWS, you can execute about 30 containers, maybe 40 if you’re tuning carefully. With WebAssembly, we’ve been able to push that up. For our first release, we could do 900. We’re at about 1000 now, and we’ve figured out how to run about 10,000 applications on a single node.
The density is just orders of magnitude higher because we don’t have to keep anything running! We can run a giant WebAssembly sandbox that can start and stop things in a millisecond, run them to completion, clean up the memory and start another one up. Consequently, instead of having to over-provision for peak load, we can create a relatively small cluster, 8 nodes instead of a couple of 100, and manage tens of thousands of WebAssembly applications inside it.
When we amortize applications efficiently across virtual machines, this drives the cost of operation down. So, speed ends up being a nice selling point.
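As an editorial aside, the startup behavior Matt describes is easy to experiment with yourself. The snippet below uses the wasmtime Python bindings (a generic runtime, not Fermyon’s stack) to time how quickly a trivial, pre-compiled WebAssembly module can be instantiated and called; it is a rough illustration assuming a recent version of the wasmtime package, not a benchmark.

```python
import time
from wasmtime import Engine, Store, Module, Instance

# A trivial module with a single exported function.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
module = Module(engine, WAT)  # compile once, up front

start = time.perf_counter()
store = Store(engine)                   # fresh, isolated sandbox
instance = Instance(store, module, [])  # instantiate the compiled module
add = instance.exports(store)["add"]
result = add(store, 2, 3)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"add(2, 3) = {result}, instantiation + call took {elapsed_ms:.3f} ms")
```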
So, is this where Fermyon comes in? From a programming perspective, ultimately, all of that is just the stuff we stand on top of. I’ll club you in with the serverless world—the whole kind of standing on the shoulders of giants model vs the Kubernetes model. If you’re delving into the weeds, then you are doing something wrong. You should never be building something that already exists.
Yes, indeed, we’ve built a hosted service, Fermyon Cloud, a massively multi-tenant, essentially serverless FaaS.
Last year, we were kind of waiting for the world to blink. Cost control wasn’t the driver, but it’s shifted to the most important thing in the world.
The way the macroeconomic environment was, cost wasn’t the most compelling factor for an enterprise to choose a solution, so we were focused on speed, the amount of work you’ve got to achieve. We think we can drive the cost way down because of the higher density, and that’s becoming a real selling point. But you still have to remember, speed and the amount of work you can achieve will play a major role. If you can’t solve those, then low cost is not going to do anything.
So the problem isn’t the cost per se. The problem is, where are we spending money? This is where companies like Harness have done so well as a CD platform that builds cost management into it. And that’s where suddenly FinOps is massive. Anyone with a spreadsheet is now a FinOps provider. That’s absolutely exploding because cloud cost management is a massive thing. It’s less about everyone trying to save money. Right now, it’s about people suddenly realizing that they cannot save money. And that’s scary.
Yeah, everybody is on the back foot. It’s a reactive view of “How did the cloud bill get this big?” Is there anything we can do about it?
I’m wary of asking this question in the wrong way… because you’re a generic platform provider, people could build anything on top of it. When I’ve asked the question, “What are you aiming at?” people have said, “Oh, everything!” and I’m like, oh, that’s going to take a while! So are you aiming at any specific industries or use cases?
The serverless FaaS market is about 4.2 million developers, so we actually thought, that’s a big bucket, so how do we refine it? Who do we want to go after first? We know we are on the early end of the adoption curve for WebAssembly, so we’ve approached it like the Geoffrey Moore model, asking, who are the first people who are going to become “tyre kicker users”, pre-early adopters?
We hear all the time (since Microsoft days) that developers love the WebAssembly programming model, because they don’t have to worry about infrastructure or process management. They can dive into the business logic and start solving the problem at hand.
So we said, who are the developers that really want to push the envelope? They tend to be web backend developers and microservice developers. Right now, that group happens to be champing at the bit for something other than Kubernetes to run these kinds of workloads. Kubernetes has done a ton for platform engineers and for DevOps, but it has not simplified the developer experience.
So, this has been our target. We built out some open-source tools and built a developer-oriented client that helps people build applications like this. We refer to it as the ‘Docker Command Line’ but for WebAssembly. We built a reference platform that shows how to run a fairly modest-sized WebAssembly run time. Not the one I described to you, but a basic version of that, inside of your own tenancy.
We launched a beta-free tier in October 2022. This will solidify into production-grade in the second quarter of 2023. The third quarter will launch the first of our paid services. We’ll launch a team tier oriented around collaboration in the third quarter of 2023.
This will be the beginning of the enterprise offerings, and then we’ll have an on-prem offering like the OpenShift model, where we can install it into your tenancy and then charge you per-instance hours. But that won’t be until 2024, so the 2023 focus will all be on this SaaS-style model targeting individuals to mid-size developer teams.
So what do you think about PaaS platforms now? They had a heyday 6 or 7 years ago, and then Kubernetes seemed to rise rapidly enough that none of the PaaS’s seemed applicable. Do you think we’ll see a resurgence of PaaS?
I see where you are going there, and actually, I think that’s got to be right. I think we can’t go back to the simple definition of PaaS that was offered 5 years ago, for example, because, as you’ve said before, we’re 3 years behind where a developer really wants to be today, or even 5 years behind.
The joy of software – that everything is possible – is also its nemesis. We have to restrict the possibilities, but restrict them to “the right ones for now.” I’m not saying everyone has to go back to Algol 68 or Fortran! But in this world of multiple languages, how do we keep on top?
I like the fan out, fan in thing. When you think about it, most of the major shifts in our industry have followed that kind of pattern. I talked about Java before. Java was a good example where it kind of exploded out into hundreds of companies, hundreds of different ways of writing things, and then it sort of solidified and moved back toward kind of best practices. I saw the same with web development, web applications. It’s fascinating how that works.
One of my favorite pieces of research back in my academic career was by a psychologist using a jelly stand, who was testing what people do if you offer them 30 different kinds of jams and jellies versus 7. When they returned, she offered them a survey to ask how satisfied they were with the purchases they had made. Those that were given fewer options to choose from reported higher levels of satisfaction than those that had 20 or 30.
She reflected that a certain kind of tyranny comes with having too many ways of doing something. You’re constantly fixated on: could I have done it better? Was there a different route to achieve something more desirable?
Development model-wise, what you’re saying resonates with me – you end up architecting yourself into uncertainty, where you’re going, well, I tried all these different things, and this one is sort of working. It ends up causing more stress for developers and operations teams because you’re trying everything, but you’re never quite satisfied.
In this hyper distributed environment, a place of interest to me is configuration management. Just being able to push a button and say, let’s go back to last Thursday at 3.15pm, all the software, the data, the infrastructure as code, because everything was working then. We can’t do that very easily right now, which is an issue.
I had built the system inside of Helm that did the rollbacks inside of Kubernetes, and it was a fascinating exercise because you realize how limited one really is in rolling back to a previous state in certain environments, because too many things in the periphery have changed as well. If you rolled back to last Thursday and somebody else had released a different version of the certificate manager, then you might roll back to a known good software state with completely invalid certificates.
It’s almost like you need to architect the system from the beginning to be able to roll back. We spent a lot of time doing that with Fermyon Cloud because we wanted to make sure that each chunk is sort of isolated enough that you could meaningfully roll back the application to the place where the code is known to be good and the environment is still in the right configuration for today. Things like SSL certificates do not roll back with the deployment of the application.
There’s all these little nuances: the developer’s needs, the ops team and platform engineer’s needs. We’ve realized over the past couple of years that we’ve had to build sort of haphazard chunks of the solution, and now it’s time to fan back in and say, we’re just going to solve this really well, in a particular way. Yes, you won’t have as many options, but trust us, that will be better for you.
The more things change, the more they stay the same! We are limiting ourselves to more powerful options, which is great. I see a bright future for WebAssembly-based approaches in general, particularly in how they unlock innovation at scale, breaking the bottleneck between platforms and infrastructure. Thank you, Matt, all the best of luck and let’s see how far this rabbit hole goes!
08-11 – GigaOm Research Bulletin #004
GigaOm Research Bulletin #004

Welcome to GigaOm’s research bulletin for August 2023

Hi, and welcome back!
Our CEO Ben Book has taken GigaOm from a boutique analyst company to what is now recognized as a leading analyst firm, redefining the nature of analysis in the process. Here he looks at how the market for research has evolved and how GigaOm has shifted to align with the needs of end-user organizations as they prepare for a data-driven, digital future.
Our latest podcast discusses how organizations could implement technical strategies instead of spending most of their time putting out fires. Give it a listen!

Research Highlights
See below for our most recent reports, blogs and articles, and where to meet our analysts in the next few months. Any questions, reply directly to this email and we will respond.

Trending: Cloud Observability, released in March, is one of our top Radar reads right now. “Monitoring and observability are crucial IT functions that help organizations keep systems up and running and performance levels high,” say authors Ron Williams and Sue Clarke.
We are currently taking briefings on: Cloud File Storage, eDiscovery, Ransomware Protection, PTaaS, RSLM, DAM, DPUs, and Cloud-based Data Protection.
Warming up are: Autonomous SOCs, Container Security, Data Warehouse, Scale Out File Storage, DPUs, Cloud Network Security, Incident Response Platforms.
Recent Reports
We’ve released 18 reports in the period since the last bulletin.
In Analytics and AI, we have released a report on AIOps.
For Cloud Infrastructure and Operations, we have SaaS Management Platforms (SMPs), Value Stream Management (VSM), and Cloud FinOps, and we have covered Hybrid Cloud Data Protection for both Large Enterprise and Small & Medium Sized Businesses (SMBs). In Storage, we have covered Kubernetes Data Storage for both Cloud-Native and Enterprise.
In the Security domain, we have released reports on Secure Service Access, Security Policy & Code, Password Management, Security Information and Event Management (SIEM), Anti-Phishing, and Distributed Denial of Service (DDoS) Protection. And in Networking, we have covered Edge & Core Routing and Edge Platforms.
And in Software and Applications, we have a report on Agile Planning & Portfolio Management (PPM), and Unified Communications as a Service.
Blogs and Articles
We’ve published several additional blogs over the past couple of months, including:
- Andrew Green asks, How Would a Distributed SIEM Look? and discusses The True Value-Add of Container Networking Solutions.
- Jamal Bihya explains how we need to strengthen the human firewall using Security Awareness Training.
- Howard Holton gives his take on the Themes & Trends at RSA 2023
- Kerstin Mende-Stief says that, For Sustainability, Buildings & Energy are Strategic Resources.
- Paul Stringfellow explains how the MOVEit Transfer hack is right on trend, provides the Top Trends from InfoSec Europe, and why Microsoft takes Entra to the edge.
… and finally, our VP of Finance & HR, Elizabeth Kittner, asks the question, How Can CPA’s Ethically Interact with ChatGPT?, and was also internationally named as one of the Top 50 Women in Accounting in 2023!
Where To Meet GigaOm Analysts
You can expect to see our analysts at Black Hat USA this week, and Open Source Summit Europe in September. Do let us know if you want to fix a meet.
For news and updates, add analystconnect@gigaom.com to your lists, and get in touch with any questions.
Thanks and speak soon!
Jon Collins, VP of Research
Claire Hale, Engagement Manager
P.S. Here is the last bulletin if you missed it!

September 2023
09-14 – DevOps efficiency needs to put business goals first
DevOps efficiency needs to put business goals first
Here’s a quick story. As a business analyst, I worked at a publisher in Oxford. We were interviewing, process diagramming, and so on – that’s not the interesting bit. Meanwhile, in parallel, they brought in consultants from a small company in Birmingham, UK. Two or three folks.
The building we were working in was long and thin, and had an empty upper floor. These guys came in with a massive roll of brown paper, 7 feet high. They shot this roll all the way down the empty floor. They then went and spoke to people and asked them what systems they used, what forms they filled and so on.
Then they stuck literally everything along that sheet of paper. They took printouts of their screens or the forms they’d fill in, stuck them on and joined them up with black tape so you could see the linkages.
When they’d finished, after a few weeks, they got everyone in the company upstairs and said, “There you go!” Everyone was just wowed, “Oh my goodness, I fill that in, then it goes over there, and then nothing happens to it?” or, “That bit is exactly the same as that bit, but done by two different people?” and so on.
It was the best gift the consultants could have given the firm. Was it worth $20,000 (or whatever it cost), just to stick some bits of paper on a big sheet of brown paper? Yes, absolutely. 100%.
The power of visualization is fantastic, and I’ve seen it many times over the years. I recently spoke to a start-up that enables DevOps process mapping and dashboards. They asked me what I thought of what they were doing, particularly given (so they told me) that I was a bit of a skeptic.
So, I told them the story above. In my experience, software development processes are notoriously hard to lock down despite all the efforts to define methodologies and structures. We can go into the reasons over a beverage, but the result is a continued lack of visibility. As the adage goes, if you can’t measure, you can’t manage. And software development is notoriously difficult to measure.
So, what to say to solutions vendors attempting to crack the code of process visibility in the DevOps space? The question is less about the need, or the epiphanies that can be achieved with a software package (or post-its on a whiteboard, or a roll of brown paper), and more about how to succeed when, historically, many have tried and become, with the best will in the world, a point-in-time fix.
The challenge is twofold. First, nobody in the (non-technical) organization cares enough about software processes to allocate a budget to such tools. For some reason, the business still thinks that software runs itself – it can’t be that hard to write if you have good people, right? Everyone just assumes it’s smart people creating things.
However, anyone who has built software at any scale knows what a knotty mess we can get into without the right controls. As a strange kind of good news, we’re in a period of belt-tightening, where CIOs are being asked to justify how much IT is costing – the adage extends to, “If you can’t measure, you can’t have any more money,” which certainly focuses the mind.
So, yes, the demand for efficiency can be met with spend-to-save initiatives, which in turn fuels interest in software process tooling, categorized as value stream management, software development analytics, or similar. When looking at multiple providers trying to solve a complex problem in similar ways, I often reach for the analogy of multiple paths up the same mountain – and this space is no different.
To run with this analogy a little, I see multiple vendors, at various stages of development, going up different routes on that mountain. This brings us to the second challenge – that no organization has yet found a repeatable path to the top.
Everyone gets exhausted after a while and starts slowing down. In the VSM report, we have leaders and challengers, incumbents and new entrants all addressing the problem in their own way. Start-ups arrive in the space often through some personal epiphany, their “brown paper roll” moment, if you will.
They arrive, and up the mountain they head; they’re running and running, and then they start to slow down… and eventually, over time, they just become a feature in someone else’s platform. I’m reminded of the exploits of Spanish world champion mountain runner Kilian Jornet – while we may all aspire to be like such an athlete, he is the exception, not the norm.
What to do about this? One answer, of course, is not to mind too much. A vendor can acknowledge that it will always be a tactical tool, to be brought in when things aren’t going so well. For the vendor, that results in a certain route to market – to switch analogies, a tool for mechanics servicing the plane, rather than the cockpit centerpiece.
A second option is to scope according to what is feasible, vertically rather than horizontally – if you’re going to be in the cockpit, then do one thing well rather than looking to control the whole aircraft. That way, the audience can be defined more precisely and, by extension, the value brought to stakeholders.
Which brings us to the third option: to consider the use cases in which the tool can deliver value. It’s all very well that a smaller group recognises the scale of a problem – in this case, that software development is complex and tends to chaos without the right checks and balances. But it’s a big assumption that others – budget holders up to board level – will reach the same conclusion without a giant roll of brown paper to guide their thinking.
So, if the challenge is that the majority don’t want to solve the problem, however clear it is to the minority, then what are the scenarios in which the majority see it as important? Is there another need that the majority are willing to put their wallets behind? Start from there, rather than from the development process and cost efficiency.
Right back at DevOps’ manifesto, the Phoenix Project, itself based on Eli Goldratt’s novelisations of project management best practice, the point was always about business challenges – getting customers onto the website, increasing sales, improving supply chains and the like. However, too many tools and approaches are introspective and focused on improving the means, not the end.
We all know just how hard it is to build software, but see point one: nobody outside of IT cares that much. The problem absolutely exists, and struggling to get the money to make necessary improvements is one thing; assuming that people will magically ‘get’ that it needs to be solved is quite another.
So, if people can’t be bothered to solve it, what scenario would cause them to want to solve it? Too often in this trade, we’re forced to wait for situations where the problem becomes a crisis – just after a security breach, when a compliance audit is approaching, or when a software package is reaching end of life.
Alternatively, I would take a design thinking-driven approach, which maps out and prioritizes business results – this is as applicable to end-user businesses as solution providers. For companies, which specific business needles can be shifted through software process improvements, and by how much? And for vendors, what does the composite picture look like across the customer base?
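To make that idea concrete, here is a minimal sketch, in Python, of how such a prioritization could be expressed: invented business targets are given weights, candidate process improvements are scored against them, and the weighted result ranks where to start. All names and numbers are hypothetical placeholders, not data from any client or report.

```python
# Minimal sketch: rank candidate software process improvements by how much
# they are expected to shift specific business targets. All names and
# numbers below are hypothetical placeholders.

business_targets = {           # target -> relative priority weight
    "increase_online_sales": 0.5,
    "reduce_customer_churn": 0.3,
    "cut_cost_per_release": 0.2,
}

# Estimated impact of each candidate improvement on each target (0-1 scale).
candidate_improvements = {
    "value_stream_mapping":  {"increase_online_sales": 0.2, "reduce_customer_churn": 0.1, "cut_cost_per_release": 0.6},
    "automated_testing":     {"increase_online_sales": 0.3, "reduce_customer_churn": 0.4, "cut_cost_per_release": 0.4},
    "deployment_dashboards": {"increase_online_sales": 0.1, "reduce_customer_churn": 0.2, "cut_cost_per_release": 0.3},
}

def business_score(impacts: dict) -> float:
    """Weighted sum of estimated impact against business target priorities."""
    return sum(business_targets[t] * impacts.get(t, 0.0) for t in business_targets)

ranked = sorted(candidate_improvements.items(), key=lambda kv: business_score(kv[1]), reverse=True)
for name, impacts in ranked:
    print(f"{name}: {business_score(impacts):.2f}")
```

The point of the exercise is less the arithmetic than the conversation it forces: the weights have to come from the business, not from the engineering team.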
Note, however, that whilst these scenarios might be events worth putting money behind, they are not the end game – which can be described by a company’s vision, mission and strategy. The business itself is the mountain – your own, or that of organizations you serve as a provider. If you are heading up, ask yourself first, what does the top look like? What will you have achieved when you get there?
If the answer can’t be described in business terms, it becomes far less likely that you will arrive. Bluntly, only hold the software process improvement kick-off meeting with a clear picture of your company’s top three targets for the coming period, and how process improvement, tools or anything else will directly make them happen.
And, as a vendor, if you’re only looking to raise investor capital and for a quick exit, I’d argue that’s a false summit. In these digitally transforming times, misalignment between technology delivery and business goals is possibly the biggest cause of bottom-line inefficiency. Be prepared to kick off the journey with a map to get to the top, if you want to get anywhere at all.
09-25 – The Cloud-only Era is Done
The Cloud-only Era is Done
As ever in this job, it’s by putting multiple conversations together that I start building a picture. Over recent months, I’ve spoken to big enterprises and small businesses, global providers and small startups, new faces and old colleagues. It’s a privilege to spend time making sense of it all.
Even better is that it affords the opportunity to test hypotheses. As I was listening to the radio last week, Young Ones comedian Ade Edmondson was talking about his days on the live circuit. Above all, he said, it allowed him the opportunity to find out what was funny, what his audience hooked into.
Perhaps being at The Comedy Club isn’t that different to being an analyst, even if the jokes aren’t so funny. From the vantage point of being a panel guest at a recent CIO event, I found a room abuzz with today’s challenges and realities — money is no longer free, complexity is rife, and, oh, cloud is no longer the thing.
These concerns are 100% linked. My hypothesis was, and remains, that the biggest problem with cloud is nothing to do with technology per se; it’s more (just like a real cloud) about its lack of hard edges. Anyone who remembers the advent of VM sprawl (and previous incarnations) will recognise the similarity.
Cloud versus on-prem is a bit like self-publishing. The publishing industry may have been old, stuffy, and hard to get into, but it acted as a filtering mechanism for readers. Take away the boundaries and, of course, you open the door to a great deal more creativity – and a great deal more of everything else besides.
There’s a place for hard boundaries: just as necessity is the mother of invention, so constraint begets thinking harder about prioritization, controls, quality and so on. Honestly, there’s a place for both — I have self-published, and it’s a great model. But it, like cloud, comes with consequences that can’t just be brushed under the carpet.
To be clear, this isn’t an anti-cloud rant. I deliberately said, “Cloud is no longer the thing,” but it is absolutely a thing. A very important thing. The sea change probably started a year or so ago but was rubber-stamped by the cost of money. No particular bone to pick, more an observation that I have tested and honed across dialogues.
I’m more interested in the consequences. A few days ago, I spent time with NetApp and some of their customers, talking about all things tech. What became obvious was just how profound this shift from “the thing” to “a thing” actually is, not just for providers and customers but the entire ecosystem of tech supply, deployment and operations.
We can (and do, with abandon) throw in terms like “hybrid multi-cloud,” but what’s becoming clear is the impact this has on the bit in the middle. If you will be running AWS, Azure and GCP alongside your on-premise or hosted applications, who manages all that? How do you architect, procure, secure, run, break/fix/replace?
The answer, frankly, is unclear right now. We moved from the early days of more open cloud and SaaS architectures to more walled garden approaches (remember when you could cross-post across nascent social media sites?). The same applies to the hyperscalers, who are no longer seeing the meteoric growth they once were.
If I could put my money anywhere, it would be on tech providers that work horizontally across both cloud and on-prem. This is a curate’s egg — consider the concerns from enterprises around Broadcom’s planned acquisition of VMware, or the open-source controversy about changes to Red Hat’s licensing models.
There’s definitely going to be money to be made from helping organizations control costs across what has become a highly distributed, unconstrained technology architecture. In my view, the harvest is not yet ripe — now’s the time to weave the baskets, not pick the still-sharp fruit. But there you go.
This also plays into systems integrators, technology resellers, managed service providers and so on. The big item that we have lost is responsibility. Back in the day, end-user companies used to look for a limited number of tech vendors to work with, limiting to a handful of (“throat to choke” – not the nicest analogy) strategic partners.
Today, they no longer have that. AWS may be a strategic partner, but then so will everyone else. If not at the corporate level, the other players will likely be embedded within departments or geographies. That, to me, is the new vendor opportunity, but it can only be achieved by actually supporting multi-cloud and on-prem models.
And meanwhile, the opportunities are rich for those skilled at navigating this newly accepted reality. For some workloads, microservices models will be best; for others, it’ll be (shock, horror) monolithic stacks. Sometimes, a migration may be cost-effective, and other applications might be best left to run inefficiently but still effectively.
Historically, if I’ve had a soapbox, it’s been about bringing back old governance skills – risk management, quality management, configuration management, process management and so on. The good news is that you get these out of the box when you think architecturally, building to last rather than breaking things as a business strategy.
So, I’m thinking I can put that soapbox away for the time being, as well as the hybrid one I first stood upon in 2007, not because I disliked cloud models, but because I could never see that any one ring would rule them all (sorry, Sauron). We’re entering a new phase of tech, requiring new skills and realigning with the old.
This is true for all stakeholders across end-user organizations and the companies that serve them. We’re back to the old business model formula of what you want to keep in-house against what you want to get from your suppliers. Enterprises large and small, the floor is yours.
09-26 – Planning for Data Sovereignty in a Multicloud World
Planning for Data Sovereignty in a Multicloud World
With its roots in privacy law emerging from the UK and Europe, data sovereignty has become a global concern. Increasingly, international trade requires compliance with local data laws—covering nations, states, or other jurisdictions such as the EU. Governments, rather than corporations or hyperscalers, are setting the rules: Not only can they require data to be stored in-country, but some go further, stipulating that local providers deploy and operate systems storing or processing data.
As data sovereignty drivers largely concern cloud-based infrastructure, we see sovereign cloud solutions emerge to address these requirements. However, we also recognize that most, if not all, enterprises will have deployed infrastructure from multiple cloud providers as well as running hosted and on-premises systems. Which is to say, they will be operating a multicloud architecture.
While cloud providers can say they offer sovereign cloud, they currently only do so for their own offerings. Organizations that leverage sovereign cloud features and management tooling from each provider face the inefficiencies of configuring and operating each environment as independent “sovereign silos.” The model can also be costly because a hyperscaler’s solutions must be defined, priced, and deployed for each jurisdiction (for example, using AWS Outposts where a local zone is unavailable).
Organizations therefore need to look to multicloud options for implementing sovereign cloud across all the data storage and processing capabilities they use. A viable option to consider is platform-agnostic, multicloud-aware tooling, platforms, and services, which can operate across a combination of cloud providers and hosting types.
In our CxO Decision Brief for sovereign cloud in a multicloud architecture, we consider the needs and benefits of data sovereignty and the multicloud solutions that enable it. In particular, we explore Broadcom and VMware’s offerings in this area. We recognize that:
- Broadcom brings a platform-agnostic enterprise software portfolio, covering operations, security, and governance across all platforms and providers. It is therefore suited to enterprise hybrid IT environments, including multicloud architectures.
- VMware brings significant platform-agnostic infrastructure capability as well as tools enabling local development and deployment of infrastructure to meet sovereignty goals. The company works with local integrators to deliver services, keeping them in-country.
Solutions from Broadcom and VMware enable customers to manage data and applications across cloud and on-premises infrastructure, enabling interoperability and portability and reducing the risks and operational overheads of sovereignty. Broadcom’s broader enterprise partnership and research-led approach enables the company to work with its customer organizations to deliver on their evolving sovereignty goals. In further support, the companies offer capabilities for data protection, compliance, and security.
One way or another, organizations must get on top of data sovereignty challenges—it’s the law and can be an increasing cost to business. Nonetheless, we recognize that many current applications and services were not created with sovereignty in mind. Sovereignty represents a new, yet compulsory, non-functional requirement to be applied across the multicloud and on-premises IT estate, including retrofit onto existing applications.
Our CxO Decision Brief advocates starting on the right foot with sovereign cloud, defining an overall strategy for data sovereignty that works across multiple cloud providers and the existing application portfolio. This lets an organization consider its overall needs based on the countries in which it operates and on the applications that must be prioritized.
On this basis, technical leaders can determine which applications and repositories can be left in situ and/or refactored, migrated, modernized, or even decommissioned. If this sounds like an application rationalization initiative, it shares several facets—the main difference is the legislative driver. Similarly, it should be considered a change program in terms of gaining stakeholder buy-in at all levels, assuring measurable outcomes, deploying in a controlled manner, and so on.
On the positive side, data sovereignty managed within a multicloud environment enables better management of cloud-based services, reinforces data protection across the architecture, allows for better portability between cloud providers, and creates cost reduction opportunities compared to running individual clouds as sovereign silos. It also offers a firmer basis for innovation (for example, enabling services to be delivered in a broader set of jurisdictions, responding to local service needs, or allowing a broader view of environmental, sustainability, and governance (ESG) reporting and cloud cost management goals).
Multicloud tools in a sovereign cloud environment can also catalyze a move toward more distributed architectures, such as shared microservices and infrastructure-as-code approaches – defining in advance what can run where according to policy. For example, specific elements of an application can be deployed and run in-country, keeping data sovereign while enabling management from another jurisdiction. This facilitates a move away from the blunt-instrument, “journey to the cloud” use of a limited set of global cloud providers, and toward an increased level of discernment about what should run where.
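As an illustration of the “define in advance what can run where” principle, the sketch below expresses a residency policy as code. It is not any vendor’s API, just a hypothetical placement table and lookup; the jurisdictions, providers, regions, and component names are invented for the example.

```python
# Illustrative sketch only: a policy table that decides, in advance, which
# provider/region a workload component may run in, based on data residency.
# All jurisdictions, providers, and components here are hypothetical.

RESIDENCY_POLICY = {
    # data jurisdiction -> providers/regions allowed to store or process it
    "EU": [("aws", "eu-central-1"), ("azure", "germanywestcentral"), ("onprem", "frankfurt-dc")],
    "UK": [("aws", "eu-west-2"), ("onprem", "london-dc")],
    "US": [("gcp", "us-central1"), ("aws", "us-east-1")],
}

def placement_options(component: str, data_jurisdiction: str, handles_personal_data: bool):
    """Return allowed deployment targets; non-personal data is unconstrained here."""
    if not handles_personal_data:
        return [target for targets in RESIDENCY_POLICY.values() for target in targets]
    allowed = RESIDENCY_POLICY.get(data_jurisdiction)
    if not allowed:
        raise ValueError(f"No sovereign placement defined for {data_jurisdiction}: {component}")
    return allowed

# Example: a records store must stay in the EU; a stateless front end need not.
print(placement_options("records-store", "EU", handles_personal_data=True))
print(placement_options("web-frontend", "EU", handles_personal_data=False))
```

The same policy table can then be consumed by deployment pipelines, so sovereignty becomes a property of the architecture rather than a per-project decision.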
Overall, enterprises with international operations cannot treat data sovereignty lightly. Whether seen as strategic or not, it will impact technical departments in terms of application architecture, operations, delivery, supplier management, and other areas. Nor can it be seen as something to happen down the line—organizations already need to comply if they wish to trade or deliver services, so it’s better to solve the challenge now rather than wait for the consequences of non-compliance.
With appropriate, multicloud-oriented data management and application tools, data sovereignty can nonetheless be seen as a catalyst for organizations to architect better for the future. In our CxO Decision Brief, we suggest steps to consider across planning, testing, deployment, and operation. Over the next one to three years, organizations will enjoy a window of flexibility before they face getting squeezed out of certain markets by other organizations that already have their houses in order. This time should be spent planning strategically and building partnerships and skills that position data sovereignty as an inherent element of the architecture.
October 2023
10-13 – Case Study: Ingram Micro
Case Study: Ingram Micro
“GigaOm and Ingram Micro work with partners to drive strategic growth and deliver more value to technology buyers. GigaOm goes beyond technology to enable partners to better connect technology solutions with customers’ business operating models, people and process organizational maturity, and transformational aspirations.
“Before GigaOm, partners were challenged to access this level of go-to-market positioning and messaging, strategic roadmap advice, and sales enablement. Ingram Micro and partners have benefited tremendously from the great strategic advice, research, and sales enablement produced by GigaOm.”
–Karl Connolly, Technologist and Field CTO at Ingram Micro
Context setting
Ingram Micro is one of the world’s largest distributors of IT systems and services, with operations in 61 countries and reaching nearly 90% of the world’s population. Ingram Micro works with 1,500 original equipment manufacturer (OEM) vendor partners and 170,000 technology solution provider customers across the cloud, AI, data, infrastructure, security, storage, and networking. Ingram Micro’s mission is to enable technology channel partners to accelerate growth, run better and more profitably, and deliver more value for their customers—business and technology decision-makers.
Ingram Micro turned to GigaOm to enable its reseller and OEM partners to drive strategic growth by connecting their technology solutions to the strategic operational business models, organizational maturity, and transformation goals of customer organizations.
Karl Connolly is technologist and field CTO at Ingram Micro. He says the relationship with GigaOm provides unique value to Ingram Micro’s channel-focused business.
“GigaOm understands customers’ motivations so they can help partners better connect their technology solutions to how customers run their business,” says Connolly. “Ingram Micro is a big brand that is recognized, but is one step removed from the end customer by nature of its channel-focused business model. Its capabilities are not necessarily understood or known to end customers and resellers—gaining the voice and trust of end customers always takes place via an Ingram Micro reseller.”

Figure 1. Karl Connolly, Technologist and Field CTO, Ingram Micro
The goal was to help end-user organizations make better-informed technology choices to enable their businesses, explains Connolly. “The list of choices of technology solutions available to a customer can be daunting, and with each vendor competing for said customer, informed decisions can be clouded by misrepresentation, ambiguity, and partner preference.”
Ingram Micro understood that getting a clearer picture of a customer’s business and operational model requires understanding its people, its process maturity, and its transformational aspirations. A better understanding of how to engage with end-user organizations would positively impact Ingram Micro, vendor, and partner sales revenue.
Why GigaOm?
The company has worked with analyst firms before, including IDC, Gartner, and Forrester. However, as Ingram Micro developed its go-to-market approaches and channel sales strategies, it saw GigaOm’s business and technical practitioner-led approach, covering C-level leaders, architects, and engineers, as a major asset.
“GigaOm’s advisors are experienced strategists, practitioners, and engineers who have been IT buyers and consumers,” explains Connolly. “This unique perspective provides us with the ‘voice of the customer,’ allowing us to connect technology solutions to C-level, line of business, and architect business value-centric strategy based on customer’s operational business models, organizational maturity, and transformation goals, which has proven instrumental in shaping our go-to-market strategy.”
GigaOm’s unique position as a C-level, line of business, architect, and engineer practitioner-led advisory, research, and enablement company, along with its voice-of-the-customer understanding, enables it to present technology in a way end-user organizations can connect with.
“GigaOm helps us and our partners make the case to end users for adoption of a technology area based on brand or specific product by being the voice of the customer,” says Connolly. “The research and insights enable informed decisions based on experience, testing, and unbiased assessment from the end user perspective. We are afforded opinions, perspectives, and facts that aren’t attainable from other firms, or from end users directly.”
GigaOm sees research as a tool to enable stakeholders on all sides: Vendors and partners understand how to talk to customers better, and end-users become better equipped to decide between complex offerings. It was this flexible approach that brought Ingram Micro and GigaOm together.

Figure 2. GigaOm and Ingram Micro Engagement Model
Aspects of GigaOm’s offering align with Ingram Micro’s vision, strategy, and approach, not least GigaOm’s brand value and partnership approach. “GigaOm has built a good solid brand and has credibility, and the DNA of the company fits with ours on channel partners,” says Connolly.
“We particularly appreciate GigaOm’s strong connections with many of the OEMs supported by Ingram Micro. This synergy has further enhanced the value of their insights for our business,” says Connolly. “We and our partners can also generate revenue with GigaOm by reselling GigaOm research and services to end-user customers; that is a unique capability.”
Solution and Approach
The partnership between Ingram Micro and GigaOm has been directly targeted at relationship building and enabling the company to engage in more strategic conversations about technology solutions. To kick things off, GigaOm CTO Howard Holton presented to Ingram Micro solution architects, covering strategic areas such as the shift from CAPEX to OPEX.
These customer-led perspectives helped solutions teams better understand how to connect with OEM vendors based on a balanced perspective of their offerings. “Having an unbiased expert in Howard is invaluable,” says Connolly. “Often, teams are informed by the vendor, which can be limiting.”
As a result of the engagement, the Ingram Micro team gained a firmer foundation for discussing solutions with vendors and solution providers, enabling them to better drive customer conversations.
In addition, GigaOm participated in several conversations across multiple service providers and other partner firms, including T-Mobile, Betacom, and Otava. The goal was to develop effective business and technical enablement and sales strategies with customers based on their industry vertical, with positioning, messaging, benchmarking, market evaluation, and cost analysis services spanning C-level, line of business, architect, and engineer audiences. “GigaOm’s openness and willingness to build custom engagements for the partner was very well received, with the right set of assets delivered,” says Connolly.
Benefits
Overall, GigaOm’s engagement with Ingram Micro enabled the company to hone its strategies and sales plays based on the language of the customer. “GigaOm’s service of advisory and validation is a step above what its peers provide. The value of the advisory, coming from the vantage point of one who has done it, is more compelling than that of one who has studied or read up on a subject,” says Connolly.
Not only this, but the interaction helped Ingram Micro identify new opportunities within its partner portfolio. A specific example is Betacom: “GigaOm was the reason Ingram Micro became aware of Betacom, which is becoming a strategic partner for Ingram Micro in the 5G and industrial manufacturing space,” says Connolly. Manufacturing and Operational Technology (OT) is undergoing rapid digital transformation and is a relatively new industry vertical for Ingram Micro—an unexpected benefit came from GigaOm’s Holton, who had previously worked and consulted at major industrial companies.
“Our advanced solutions team leveraged Howard’s operational business insight and strategic expertise to gain understanding of the manufacturing and OT buyer across real world insights, considerations, and buying process advice. This helped the team get informed and prepared for partner meetings and our MxD partnership.”
GigaOm’s research and insights have directly impacted growth for Ingram Micro and its partners, says Connolly. “GigaOm services enable driving increased share of wallet by promoting more of what Ingram Micro offers to its partners in product, services, and solutions.”
Connolly says this direct benefit emanates from several factors:
- Thought leadership: “An advisory company like GigaOm that has a consumer community consisting of tech and business influencers is a good way to be seen as thought leaders. GigaOm can potentially help buyers understand their options as it pertains to a new concept, program, or technology to support their operating model and transformation goals.”
- Credibility & people cost savings: “GigaOm can help less mature or cost-conscious partners gain consulting capabilities without the need to staff CIOs, field CTOs, architects, and engineers in-house that could cost upward of $1 million, instantly providing credibility across a broad domain of markets, services, and solutions.”
- Independence: “GigaOm can act as a symbiotic extension to a partner, as it has no desire or aspiration to become a VAR and is purely there to help the partner uncover more opportunity.”
- Opportunity: “GigaOm insights enable partners and Ingram Micro to stay current on macro themes and the OEMs filling those spaces, which has tangential and heretofore unmeasured value.”
- Content: “Candid feedback on some of the materials and presentations we gave has been helpful in shaping how future content can be crafted and delivered to better connect with customers’ business operating models, organizational maturity, and transformation goals.”
Next Moves
Ingram Micro and GigaOm will continue to build on the current success with individual partners and customers, and Ingram Micro will use the GigaOm partnership to further its position with service providers. “GigaOm will aid our partners to better connect with customers, gain visibility and mindshare in the market, specifically as we engage with our MNO and private network provider partners who can use enablement research and advisory services from GigaOm to become better known as leaders in a specific area, such as private 5G or connected workers,” says Connolly.
And what about Ingram Micro? “Beyond additional sales opportunities and new partnerships, our relationship with GigaOm can inform our portfolio and the solutions we offer our partners, elevating us beyond the traditional role that distribution plays.”
10-27 – What’s the Score?
What’s the Score?
Why have we been making changes to the GigaOm Key Criteria and Radar reports?
We are committed to a rigorous, defensible, consistent, coherent framework for assessing and evaluating enterprise technology solutions and vendors. The scoring and framework changes we’ve made are directed toward this effort to make our assessments verifiable, ground them in agreed concepts, and ensure that scoring is articulated, inspectable, and repeatable.
This adjustment is designed to make our evaluations more consistent and coherent, which makes it easier for vendors to participate in research and results in clearer reports for end-user subscribers.
What are the key changes to scoring?
The biggest change is to the feature and criteria scoring in the tables of GigaOm Radar reports. Scoring elements are weighted as they have been in the past, but we do so in a more consistent and standardized fashion between reports. The goal is to focus our assessment scope on the specific key features, emerging features, and business criteria identified as decision drivers by our analysts.
Scoring of these features and criteria determines the plotted distance from the center for vendors in the Radar chart. We are extending our scoring range from a four-point system (0, 1, 2, or 3) to a six-point scoring system (0 through 5). This enables us to recognize truly exceptional products against those that are just very good. It affords us greater nuance in scoring and better informs the positioning of vendors on the Radar chart.
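To illustrate the shape of this calculation (not our actual rubric or weights), a simple Python sketch might look like the following: each category of 0-5 scores is averaged, weighted, and combined, and a higher overall result plots closer to the center of the chart. The weights and vendor scores shown are invented for the example.

```python
# Rough illustration only: 0-5 scores against key features, emerging
# features, and business criteria combine into a weighted average that
# drives how close to the center (the bullseye) a solution plots.
# Weights and scores below are invented, not GigaOm's actual rubric.

weights = {"key_features": 0.5, "business_criteria": 0.35, "emerging_features": 0.15}

def overall_score(scores: dict) -> float:
    """Weighted average of the per-category mean scores (each 0-5)."""
    return sum(weights[c] * (sum(v) / len(v)) for c, v in scores.items())

vendor_a = {
    "key_features":      [5, 4, 4, 3],
    "business_criteria": [4, 4, 5],
    "emerging_features": [2, 3],
}

score = overall_score(vendor_a)      # 0 (outer edge) .. 5 (bullseye)
distance_from_center = 5 - score     # higher score -> closer to the center
print(round(score, 2), round(distance_from_center, 2))
```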
Determining vendor position along the arc of the Radar chart has been refined as well. Analysts previously were asked to determine where they believed solutions should be positioned on the radar—first, to determine if they should occupy the upper (Maturity) or lower (Innovation) hemisphere, then to identify position left-to-right, from Feature Play to Platform Play. Similar to how we’ve extended our feature and criteria scoring, the scheme for determining quadrant position is now more granular and grounded. Analysts must think about each aspect individually—Innovation, Maturity, Feature Play, Platform Play—and score each vendor solution’s alignment accordingly.
We have now adapted how we plot solutions along the arc in our Radar charts, ensuring that the data we’re processing is relevant to the purchase decision within the context of our reports. Our scoring focuses primarily on key differentiating features and business criteria (non-functional requirements), then, to a lesser extent, on emerging features that we expect to shape the sector going forward.

For example, when you look at Feature Play and Platform Play, a feature-oriented solution is typically focused on going deeper, perhaps on functionality, or on specific use cases or market segments. However, this same solution could also have very strong platform aspects, addressing the full scope of the challenge. Rather than deciding one or the other, our system now asks you to provide an independent score for each.
Keep in mind, these aspects exist in pairs. Maturity and Innovation are one pair, and Feature and Platform Play the other. One constraint is that paired scores cannot be identical—one “side” must be higher than the other to determine a dominant score that dictates quadrant residence. The paired scores are then blended using a weighted scheme to reflect the relative balance (say, scores of 8 and 9) or imbalance (like scores of 7 and 2) of the feature and platform aspects. Strong balanced scores for both feature and platform aspects will yield plots that tend toward the y-axis, signifying an ideal balance between the aspects.
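Again purely to show the mechanics rather than the precise weighting scheme we use, the sketch below blends a pair of aspect scores into a position: a large gap pushes the plot toward the dominant aspect, while a narrow gap keeps it near the dividing axis. The numbers are the illustrative ones mentioned above.

```python
# Sketch of how paired aspect scores might blend into a chart position.
# This is not the published GigaOm weighting scheme; it just shows the
# shape of the calculation described above, with invented numbers.

def axis_position(dominant: float, other: float) -> float:
    """Return 0..1: 0 = balanced (near the dividing axis), 1 = fully dominant."""
    if dominant <= other:
        raise ValueError("Paired scores must be unique; the dominant score must be higher.")
    return (dominant - other) / dominant   # small gap -> plot near the axis

# Feature Play 7 vs Platform Play 2: strongly feature-oriented, far from the y-axis.
print(round(axis_position(7, 2), 2))   # 0.71
# Maturity 9 vs Innovation 8: nearly balanced, so the plot sits close to the axis.
print(round(axis_position(9, 8), 2))   # 0.11
```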
But you have to make a choice, right?
Yes, paired scores must be unique; the analysts must choose a winner. It’s tough, but in those situations, they will be giving scores like 6 and 5 or 8 and 7, which will typically land them close to the middle between the two aspects. You can’t have a tie, and you can’t be right on the line.
Is Platform Play better than Feature Play?
We talk about this misconception a lot! The word “platform” carries a lot of weight and is a loaded term. Many companies market their solutions as platforms even when they lack aspects we judge necessary for a platform. We actually considered using a term other than Platform Play but ultimately found that platform is the best expression of the aspects we are discussing. So, we’re sticking with it!
One way to get clarity around the Platform and Feature Play concepts is to think in terms of breadth and depth. A platform-focused offering will feature a breadth of functionality, use-case engagement, and customer base. A feature-focused offering, meanwhile, will provide depth in these same areas, drilling down on specific features, use cases, and customer profiles. This can help reason through the characterization process. In our assessments, we ask, “Is a vendor deepening its offering on the feature side, or are there areas it intentionally doesn’t cover and instead relies on third-party integrations?” Ultimately, think of breadth and depth as subtitles for Platform Play and Feature Play.
The challenge is helping vendors understand the concept of platform and feature and how it is applied in scoring and evaluating products in GigaOm Radar reports. These are not expressions of quality but character. Ultimately, quality is expressed by how far each plot is from the center—the closer you are to that bullseye, the better. The rest is about character.
Vendors will want to know: How can we get the best scores?
That’s super easy—participate! When you get an invite to be in our research, respond, fill out the questionnaire, and be complete about it. Set up a briefing and, in that briefing, be there to inform the analyst and not just make a marketing spiel. Get your message in early. That will enable us to give your product the full attention and assessment it needs.
We cannot force people to the table, but companies that show up will have a leg up in this process. The analysts are informed, they become familiar with the product, and that gives you the best chance to do well in these reports. Our desk research process is robust, but it relies on the quality of your external marketing and whatever information we uncover in our research. That creates a potential risk that our analysis will miss elements of your product.
The other key aspect is the fact-check process. Respect it and try to stay in scope. We see companies inserting marketing language into assessments or trying to change the rules of what we are scoring against. Those things will draw focus away from your product. If issues need to be addressed, we’ll work together to resolve them. But try to stay in scope and, again, participate, as it’s your best opportunity to appeal before publication.
Any final thoughts or plans?
We’re undergirding our scoring with structured decision tools and checklists—for example, to help analysts determine where a solution fits on the Radar—that will further drive consistency across reports. It also means that when we update a report, we can assess against the same rubric and determine what changes are needed.
Note that we aim to update Key Criteria and Radar reports based on what has changed in the market. We’re not rewriting the report from scratch every year; we’d rather put our effort into evaluating changes in the market and with vendors and their solutions. As we take things forward, we will seek more opportunities for efficiency so we can focus our attention on where innovation comes from.
November 2023
11-03 – 5 questions for Simon Pilar, Clario: The Business Case for Observability
5 questions for Simon Pilar, Clario: The Business Case for Observability
Simon Pilar has quite the title – Director DevOps & DLC Toolchain R&D Platform Engineering for healthcare technology provider, Clario. We spoke at a recent Dynatrace event, Innovate EMEA, to learn more about how management tooling, Observability and AIOps are helping achieve Clario’s automation goals.
Thank you for joining me, Simon. Perhaps let’s start with – What is Clario looking to deliver to its customers through software?
Clario is an equipment and software provider working across clinical trials and associated data and device management. This means pulling together clinical evidence – data, images, scans, and other information – from a wide variety of equipment types and delivering this to the people running the trials, all within a stringent regulatory framework.
We were founded in 1972, and we now operate across 120 countries. A major part of our business is software – we have about a thousand developers. Like most organizations, we have been modernizing our platforms to deliver leading-edge services to our pharmaceutical and medical customers, such as Bring Your Own Device and AI-driven insights.
So, why does Observability matter to Clario?
In a clinical trial, the most important person in the room is the patient – this drives our innovation. We want to be known for providing the very best customer experience, which means making things as easy as possible for patients and clinicians.
As a result, our systems need to store, secure and process clinical data quickly, with the results delivered fully and at quality. When a patient gets invited to a trial, they likely have a disease, and maybe the drug in the trial could improve their life. But if a doctor needs feedback from the analysis and it is delayed, the patient could have to wait or could even get excluded from the trial.
Things like that can happen, so you really do influence people’s lives with the system. That’s what I tell my team – what you do is really important because you can influence not only the results but also whether people can participate in a clinical trial.
This is where observability fits – it is mandatory for modern architectures. Our management tooling monitors application processes and ensures data flows are working. But with microservices and everything around it, it’s tough to analyze all the application data. It’s like looking for a needle in the haystack. Software like Dynatrace has become essential.
If there is an incident, first, you need to invest time to find the root cause and describe it to others so you can fix it. Then, you need to be able to tell your key stakeholders what happened and why it happened. We need to sit in front of customers, such as pharmaceutical companies, when they ask about incidents. We don’t want to be in a situation where we can’t explain what is happening.
New operations capabilities and tools are appearing, such as AIOps. Do they make your life easier?
Better tools are appearing all the time, but that’s not the issue. For example, AIOps is pretty easy to deploy, but the harder thing is building your process around it. That’s all about the transformation of IT. It’s not so easy to go from traditional IT structures to how it will be in the future. You need to create new processes around creating technology, including things like infrastructure as code. Everything is code.
The new tooling and the new processes need to work together: get the right automation in place, and things like AIOps become more straightforward as they support what you are automating.
How do you approach this and instigate change?
In my 23-year career, I learned you can’t over-communicate with change. It takes time, and you need team influence to implement it and show the benefits. To drive this, I choose people who are techies and love to try new things. Let them play with the new stuff, and it spreads out automatically.
Kubernetes was an example of that. We played with it in IT, then showed it to some developers, and it started to spread, and now the technology is in use. That’s the cool thing with nerds (I consider myself one!) – you can use the curiosity of people to start genuine, transformational change. You need to find an influencer on the team, get them on your side, and then they influence others.
I sometimes worry that we’re behind, but when I look at other sessions at the Dynatrace conference, I think we’re making good progress. Plus, I have ten things written down to do or to look at – ideas for how we can do things better.
How did you package the business case for management tools so that the C-level would get it?
I’d say two things – efficiency and capability. Agility wasn’t the main driver: in our industry, we don’t have the business model to release software quickly. It’s not like that because a clinical trial and the software around it needs to be validated. Instead, a lot was on efficiency – you need to invest money to get savings out of it. Plus, you need the capability to work with your customers.
Platforms like Dynatrace create the opportunity to measure more business events – as I said earlier, when you sit in front of a Pharma customer, you need to explain why something happened. You don’t want to find yourself in a situation where you don’t know why something failed or how they were affected.
That’s true also for the C-level – it helps to have a solution that can tell you, “These ten clients were affected, and this function caused the problem.” Clearly, if you can fix the problems in advance, you don’t need to have these conversations because it’s just running!
What magic wand would you wave over the technology industry, particularly the management space?
That would involve AI, which Dynatrace does now, but even more – for example, such that not only techies can get information out of the software. Then, the next big thing we are already planning is to increase automation to resolve issues, for example, automatically increasing resources. If there’s a problem, fix it automatically and email me to say it’s been done.
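As a rough illustration of that “fix it automatically and tell me afterwards” pattern (and emphatically not Dynatrace’s API), a remediation handler could take a shape like the sketch below; the alert format, scaling call, recipient, and mail transport are all placeholders you would wire to real systems.

```python
# Generic shape of automated remediation plus notification -- a sketch,
# not any vendor's API. The scaling call and mail transport are stubs.

import smtplib
from email.message import EmailMessage

def scale_up(service: str, extra_instances: int) -> None:
    print(f"[placeholder] scaling {service} by +{extra_instances} instances")

def notify(recipient: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["To"], msg["Subject"] = recipient, subject
    msg.set_content(body)
    # with smtplib.SMTP("mail.example.com") as s:   # enable with a real SMTP host
    #     s.send_message(msg)
    print(f"[placeholder] would email {recipient}: {subject}")

def handle_alert(alert: dict) -> None:
    """Auto-remediate a resource-exhaustion alert, then tell a human it's done."""
    if alert["type"] == "resource_exhaustion":
        scale_up(alert["service"], extra_instances=2)
        notify("ops-lead@example.com",
               f"Auto-remediated: {alert['service']}",
               f"Scaled up {alert['service']} after alert {alert['id']}; no action needed.")

handle_alert({"id": "INC-1234", "type": "resource_exhaustion", "service": "results-api"})
```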
We’re nearly there. The more we can automate, the less we have to train people, and the faster we can transform. Automation is driving this transformation of IT to DevOps – Shift Left, development teams taking responsibility for automating infrastructure. This is so crucial. How often have we had a situation where something failed, and someone said, “That’s not my responsibility; it’s the cloud team. Oh, and it worked in my test environment!”
So, with more automation, we can level up our culture and mentality to deliver better, take responsibility and move such discussions into the past. You can use technology to its full potential, but only with the right culture and mentality.
11-03 – GigaOm Research Bulletin #005
GigaOm Research Bulletin #005

Welcome to GigaOm’s research bulletin for November 2023

Hi, and welcome back!
GigaOm’s partnership with Ingram Micro clearly shows how we bring something new and unique to the industry. Read more here from Karl Connolly, Field CTO at Ingram Micro, about how our practitioner-led research and advisory approach enables partner, product and sales teams at Ingram and beyond.
Plus, we’ve listened to your feedback on our research process, report formats and radar scoring and have implemented a raft of changes as a result. You can find specifics of scoring changes here, and you will already start to see differences in research and briefing requests. As always, we welcome any comments you may have!
Finally, our latest podcast discusses the importance and role of observability in today’s market landscape from both an enterprise and an end-user standpoint. Give it a listen!

Research Highlights
See below for our most recent reports, blogs and articles, and where to meet our analysts in the next few months. Any questions, reply directly to this email and we will respond.
Trending: Patch Management, released in September, is one of our top Radar reads right now. “Monitoring and observability are crucial IT functions that help organizations keep systems up and running and performance levels high. Knowing that a problem is likely to occur means that it can be rectified before it impacts systems,” says author Ron Williams.
We are currently taking briefings on: Primary Storage, K8 Data Protection and SOAP.
Warming up are: Vector Database, CCaaS, DNS Security, App & API Security, K8 Resource Management, Data Governance, Data Lake, Data Center Switching, eSignature, Threat Hunting Solutions, SASE, ASM, ITAM and CIEM.
Recent Reports
We’ve released 24 reports in the period since the last bulletin.
In Analytics and AI, we have released reports on Data Observability and Data Catalogs.
For Cloud Infrastructure and Operations, we have Application Performance Management (APM), Cloud Management Platforms (CMPs) and Patch Management, and in Storage, we have covered Cloud Based Data Protection.
In the Security domain, we have released reports on Cloud Security Posture Management (CSPM), Continuous Vulnerability Management, API Security, Data Loss Prevention, Application Security Testing, Security Orchestration, Automation & Response (SOAR), Endpoint Detection & Response (EDR), Multifactor Authentication (MFA) and Ransomware Detection.
And in Networking, we have covered Network Detection & Response (NDR), DDI, Service Mesh, Network Validation and Network as a Service (NaaS).
In Software and Applications, we have reports on Digital Experience Platforms (DXPs), E-Discovery, and Intelligent Document Processing (IDP).
Quoted in the Press
GigaOm analysts are quoted in a variety of publications.
- Generative AI | CIO.com – Jon Collins
- Management in the modern workplace | TechFinitive – Ben Stanford
- Is AI the solution to acquiring accounting talent? – Elizabeth Kittner
- Zero Trust Security | TechRepublic – Howard Holton
- Renewable Energy | upsite – Ben Stanford
…and Jon was recently interviewed for an episode of ‘The State of Startups with Industry Analysts’.
Blogs and Articles
We’ve published several additional blogs over the past couple of months, including:
- Let’s introduce GigaOm’s new Field CTO, Darrel Kent, whose first blog for GigaOm covers the Business Value of AI. With his systems engineering background, Darrel examines how to translate IT solutions into business value for acquisition and implementation.
Other blogs include:
- Andrew Green explains why a Standalone SOAR is Alive and Kicking, looks at NaaS – enterprise networking as an innovation platform, tells us why Enterprise MFA is actually really cool, and asks, SIEM or SOAR, will they or won’t they?
- Paul Stringfellow asks, Is there a case for Microsoft as your only enterprise Security Partner?, and, as Microsoft joins the party, is it time to try MDR?
- Ivan McPhee talks about NDR: a simple acronym describing a complex and dynamic area.
- Jon questions, What’s the future for WebAssembly?, plus why he thinks DevOps efficiency needs to put business goals first and the Cloud-only Era is Done.
Where To Meet GigaOm Analysts
In the next few months you can expect to see our analysts at VMWare Explore Europe, Black Hat Europe and Mobile World Congress Barcelona. Do let us know if you want to fix a meet.
Jon recently interviewed our VP of Sales, Adrian Escarcega about his role at GigaOm. Take a listen here!
For news and updates, add analystconnect@gigaom.com to your lists, and please get in touch with any questions. Thanks and speak soon!
Jon Collins, VP of Engagement
Claire Hale, Engagement Manager
P.S. Here is last month’s bulletin if you missed it.

11-17 – 5 questions for Koby Avital: on business-first transformation and strategic vendor partnerships
5 questions for Koby Avital: on business-first transformation and strategic vendor partnerships
Koby Avital is a strategic advisor at enterprise software development company NearForm and a board member of the Linux Foundation – he has served as EVP of Technology Platforms for Walmart and held technology leadership roles at Fitbit, Priceline.com, and PayPal. I sat down with him to understand how to deliver on digital transformation goals, based on his ten years’ experience in this field.
What do you see as the central problem with digital transformation?
Most large companies have huge budgets and thousands of engineers working on digital and organizational transformation. In the last ten years, I have spent most of my time working on making these transformations successful. I have watched organizations try to deliver on it, but fail to satisfy the business and make the transition in their first attempt.
With digital transformation, many organizations look out at the promised land in front of them but face serious obstacles getting there. Why? Because it is not just a tech play; it is about culture, people, focus, and constant tradeoffs around what ‘good enough’ means. When CTOs and CIOs attempt to transform their tech ecosystem, they find it is likely the most complex initiative of their professional careers.
As they look to shift their on-premise/private instances to the cloud, the transformation outcome is pretty much a lateral move of their tech stack and applications with, hopefully, some improvements for a better path to the future once they arrive in a cloud-based state.
The trouble is, to do that, they find the need to allocate the company’s best people to do the job, which means taking them out of their day-to-day sustaining role of supporting the business. As these people help make the shift, they’re not working on the business functions, and the outcome is some level of business stagnation and unhappiness.
As a consequence, technology is not enabling the business fast enough. A transformation could take two or three years, even when you just perform lift and shift with the necessary native cloud adjustments. There’s always the lingering question: where is that money coming from?
Companies moving to the cloud cannot wipe the table and start from square one. Just because everyone says to do it, that doesn’t make it right. Leaders try to do this but don’t know how, so they go to the big consultancy firms and give them the ‘keys to the kingdom’ to do the transformation for them. As good as these may be, they cannot work magic to reach the desired state for the business and, too frequently, get half a job done.
So, what’s the alternative?
These experiences taught me that digital transformation is about business first, then people, then technology. Whatever technology you use, make sure it is right for the business, and innovate on what you already have invested in and not on something you don’t have. I call this “innovation in place”, growing where you have already planted.
Companies have a huge amount of legacy, which runs the business and brings in revenue. You should not ‘throw out the baby with the bathwater’ while attempting to follow a tech dream! The people managing, operating, and maintaining this legacy have invaluable business, tribal, and operational wisdom, so monetizing this (or at least figuring out its worth) is very important before you even consider replacing or augmenting it.
Cloud migration isn’t a silver bullet; it needs to respond to the needs of the business in increments and evolve side by side with what you have. You want to embark on a strategy you can deliver on without wishful thinking or cloud myths. This has become my passion, building a multi-cloud environment in the right way, as an evolution and not a revolution.
Don’t get me wrong. Cloud is an important enabler and provides you with capabilities that are almost impossible to develop yourself. Public cloud offers excellent ‘best of breed’ services for workloads to use and consume – services that are difficult and expensive to develop in-house.
However, saying, “We have to move toward microservices, serverless, etc.,” is not always a good idea. The hope is that these technologies will solve the problem, when they won’t. They’re good for specific workloads and models, but are not an answer in themselves. You need to make them work for your organization, not the other way around.
Cloud infrastructure for compute and storage workloads should be considered and dealt with like a utility – like with electricity, I connect to the outlet and can use resources how I want. Doing so (and some abstraction is needed) gives the organization freedom of choice in terms of what cloud to use (including private) and where to use it while optimizing on location, cost and operability.
How does this affect relationships with the cloud providers?
Tech modernization and transformation are expensive and should be seen as a big, hairy, audacious goal – a BHAG. You cannot underestimate or delegate it to a vendor or an external consulting firm with complete confidence, as that can lead to an undesirable outcome.
You cannot deliver transformation in a vacuum; it must be done mostly from within. But as we said, cloud migration is a once-in-a-lifetime event for most technical staff, who are not trained or experienced in it and may have no resources or capacity to experiment with the technologies involved. Key factors beyond tech transformation are people skills and capabilities, and the business appetite to handle the costs of transition.
Cloud providers do a good job offering discounts to cover the ‘double bubble’ of resource utilization during migration/modernization, while running old next to new to validate point solutions or apps. However, this never covers the end-to-end impact on the organization: not least, the opportunity cost due to talent re-allocation to support the transformation, whilst losing focus on business progress.
I have not seen hyperscalers or consultants cover this less tangible spending. It is good business for them, reflected in a theme I’ve noticed around the headquarters of organizations, private and governmental: the size of the vendor and supplier buildings compared to those of the organizations concerned. The optics aren’t good – it appears they are making more money from the companies than the companies are making for themselves from the cloud transformation.
In previous jobs, when I ran the numbers, I found that a company could execute workloads on a private cloud a lot cheaper. This model becomes particularly cost-effective when you have static workloads with little deviation in the usage pattern. You still need the public cloud for bursting, and to react to unpredictable usage patterns or seasonality. It’s not simply about the cost per unit of processing, but the cost models inherent in the architecture.
Cloud providers design their environments like a parking lot for small cars – a car in and a car out: they are load-balanced and tuned for that pattern to achieve relatively high resource utilization, because the cars are all defined to be the same size (think memory management). But what if the new tenants are big organizations that scale in much bigger chunks? What if you drive 18-wheelers? It’s a different model for scale, utilization, and cost.
It’s not about deciding one or the other, but getting the best out of both. Architecturally, this means creating an abstraction layer, not only for all the Cloud providers but also for private cloud, legacy systems and edge infrastructure. Each hyperscaler, Google, Microsoft, and AWS, looks like a fort, but with an abstraction layer and the right connectivity, you enable symmetrical deployment across all of them.
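To make the abstraction idea concrete, here is a minimal sketch in Python. The provider names, methods, and rates are hypothetical and intended only to illustrate how a thin abstraction layer can place workloads by policy (cost, location, operability) rather than by vendor-specific APIs.

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction layer: each provider implements the same minimal
# contract, so workloads can be placed by policy rather than vendor lock-in.
class CloudProvider(ABC):
    @abstractmethod
    def provision(self, workload: str, cpus: int, memory_gb: int) -> str:
        """Provision compute for a workload; returns a deployment id."""

    @abstractmethod
    def cost_per_hour(self, cpus: int, memory_gb: int) -> float:
        """Estimated hourly cost for the requested resources."""

class PublicCloud(CloudProvider):
    def __init__(self, name: str, rate: float):
        self.name, self.rate = name, rate

    def provision(self, workload, cpus, memory_gb):
        return f"{self.name}:{workload}"

    def cost_per_hour(self, cpus, memory_gb):
        return self.rate * (cpus + memory_gb / 4)

class PrivateCloud(CloudProvider):
    def provision(self, workload, cpus, memory_gb):
        return f"private:{workload}"

    def cost_per_hour(self, cpus, memory_gb):
        return 0.02 * (cpus + memory_gb / 4)  # illustrative amortized rate

def place(workload, cpus, memory_gb, providers):
    # Symmetrical deployment: the cheapest suitable provider wins this workload.
    cheapest = min(providers, key=lambda p: p.cost_per_hour(cpus, memory_gb))
    return cheapest.provision(workload, cpus, memory_gb)

providers = [PublicCloud("aws", 0.05), PublicCloud("gcp", 0.045), PrivateCloud()]
print(place("billing-batch", cpus=8, memory_gb=32, providers=providers))
```

The point of the sketch is the interface, not the arithmetic: once every target looks the same to the placement logic, adding or removing a provider becomes a configuration decision rather than a re-architecture.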
My main project at Walmart was to build its modern multi-cloud architecture. When this strategy was presented to the public cloud leaders, they recognized immediately that we were practically commoditizing the public cloud while asking them to continue developing and improving their best-of-breed services for us to use. This changed the nature of our relationship. I said, “Guys, you should develop platforms and services that will dovetail with us, be cross-cloud, and support our ecosystem. The multi-cloud is a conglomerate of technologies that must interoperate during dev and runtime. It’s no longer a vendor-customer relationship. If you want us to win, we are in this together.”
How does this apply to other vendors?
This brings us to another principle – there’s no room for emotion in technology decision-making. Please avoid it. You hear statements like, “Oh, we don’t like IBM, we don’t like Mainframe, we don’t like this vendor or that vendor.” I say, why? Does it do the job or not? If it doesn’t, talk to the vendor or try to make it work first. They don’t want to lose you, so bring them into the game. Make it a win-win, or live and let live. CTIOs, please don’t forget that you are always short on time – your priority is to buy time, so you can focus on the right things and work one slice at a time.
So, start building partnership relationships with your vendors, favoring those that help you get the job done. Strategic vendor relationships are critical to your success, but many companies let an account manager and procurement drive the partnership and its strategy.
At an organization where you spend millions of dollars on vendors, you need to elevate partnerships to the executive level – don’t talk unless you are talking to the vendor CxO, and build mutual understanding, give and take. Bring them on board to gain from success through more adoption and consumption of their services, and let them feel the pain when it is not working. You are the customer, so expect to be treated with respect, not held in a choke hold. The vendor lock-in model puts enterprises against the wall, but it can become a successful partnership if they provide you with what you need.
At the start, you said it’s business first, then people, then technology. Where do people fit?
My heart is in technology, but you cannot separate it from people. We are in a people business that produces technology. You’ve got to look after them first. Think about how and with whom you are doing it. The tech world today belongs to the grinders first. I’m a ‘grinder’, grinding the wheel from day to day, a little here and there, oiling the squeaky wheels. That’s what moves the world forward.
Only when you have that can you pause, think, and innovate. Innovation without the discipline to take it forward, integrate it and build it into what you have will have little impact. It is like building a Galapagos island that does not interact with the rest of the continent but is great for a demo.
I always tell my team that it is much more beneficial to use the power of water, one wave at a time and without fatigue, than the power of a bazooka. The bazooka gives a big splash that fades fast, with little lasting damage or benefit. Water fills the cracks and gaps that we must close to make things consistent and predictable.
I work with the grinders first and less with the innovators. Grinders get the organization going and buy time, through incremental improvements. They need to have the confidence that transformation is possible and good for them, not just an extra load to make the leaders look good. They know where the issues are and, in many cases, how to fix them, but leaders often fail to face the brutal facts and want the problems to disappear. The brutal facts have to be communicated up the chain of command, and fast.
Tell me a funny story!
Do you know the story about the three envelopes? Today’s executives leave a company after two or three years, so they avoid the reality of their decisions.
So, an executive comes to a company. Things are complex, lots of problems. On their desk, the predecessor has left three envelopes. On them is a note – “Any time you have a serious problem, open the envelopes in this order.”
A year passes, and nothing has changed. Things are still very complex. So they think, OK, I’ll open the first envelope, and inside it says, “Blame your predecessor.” They blame the predecessor, which works for a while until reality sinks in: nothing has changed.
Then comes the time to open the next one, and it says, “Do a reorganization.” As you know, when you do a reorg, everything slows down. It takes time to resurface. So, another year passes. The executive opens the last one, which says, “Prepare three envelopes.”
11-22 – Heads up: New Research Calendar for GigaOm Radar Reports
Heads up: New Research Calendar for GigaOm Radar Reports
We’re excited to announce the release of our online research calendar, covering the scheduling of GigaOm Radar reports. These are practitioner-led technology evaluations based on a process akin to an end-user procurement decision—the end summary is a two-dimensional chart that will feel familiar to anyone who has worked with analysts before; but the content is targeted more at technologists looking to set a workable strategy, or make a practical deployment decision.
To deliver on this, our analysts kick off with a research outline to set the scope of the evaluation—this is a hugely important part of the analytical process, particularly as technology categories don’t stand still. They evolve, split, merge and converge, and sometimes defy expectations. You can see a good example of this in GigaOm analyst Dana Hernandez’ blog, “Intelligent Automation vs. Hyperautomation.”
At the same time, we set the table stakes and differentiating criteria for the technology category. These we send to qualifying vendors, requesting a briefing so we can run through how well their solution meets our criteria. In parallel, we draft what we call the Key Criteria report—this maps directly to the request for proposals (RFP) document an end-user organization might produce as part of its procurement process.
Once we get information from vendors, we score each solution and send the results back to the vendors for fact checking. If briefings were not available, we evaluate based on our own experience and publicly available information—reflecting what a buyer might do. We’ll send this for fact checking as well, though it should be noted that a vendor’s window to discuss the nuances of their product will be largely closed at this point!
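Purely as an illustration (this is not GigaOm’s actual rubric), a scoring pass of this kind might look like the following Python sketch; the criteria, weights, and vendor scores are all invented.

```python
# Hypothetical sketch of scoring solutions against differentiating criteria.
# Criteria names, weights, and vendor scores are invented for illustration.
criteria_weights = {"scalability": 3, "ease_of_use": 2, "ecosystem": 1}

vendor_scores = {
    "VendorA": {"scalability": 4, "ease_of_use": 3, "ecosystem": 5},
    "VendorB": {"scalability": 5, "ease_of_use": 2, "ecosystem": 3},
}

def weighted_score(scores: dict) -> float:
    # Weighted average across the differentiating criteria.
    total_weight = sum(criteria_weights.values())
    return sum(scores[c] * w for c, w in criteria_weights.items()) / total_weight

for vendor, scores in vendor_scores.items():
    print(vendor, round(weighted_score(scores), 2))
```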
Finally, we send the report to vendors for a five-day courtesy preview period before we publish. You can see the timings of these processes in the research calendar: kick-off, briefings, fact check, delivery, and publication. The report itself is available to our subscribers (at $14.95/month, it’s less than a gym membership), and vendors can also license the report.
That’s it for the process. As you can imagine, there are a lot of moving parts behind the scenes, across over 100 technology categories! In scheduling, we look to avoid major public holidays, big events and conferences, and overlaps with schedules from other analyst firms. Our new research calendar covers the next six months, up to May 2024, and we expect to extend that in the coming months. For now, you can extrapolate current dates but do get in touch if you have any queries.
The new online schedule is just one part of GigaOm’s ongoing improvement efforts. Watch this space for the report enhancements and other process improvements we’re rolling out, based on GigaOm initiatives, and your feedback—always greatly appreciated and acted upon where possible. Hat tip to our own internal team who made this possible, and to our readers’ ongoing support. Onward and upward!
11-28 – Seven reasons why generative AI will fall short in 2024
Seven reasons why generative AI will fall short in 2024
Generative AI is a thing. Let’s go further and say it’s a big thing, with lots of promise. But that doesn’t mean it will deliver out of the gate. We asked some of our analysts what will get in the way of generative AI in the short term. “The mark for 2024 is how badly the early and rampant adoption of not fully understood AI models is going to affect longer-term adoption,” says our CTO, Howard Holton. Senior analyst Ron Williams agrees: “Some CIOs may rush to say that AI is going to change the world instantaneously. It won’t.”
Why not, you may ask. Read on – forewarned is forearmed!
- Badly formed answers will not reflect the business at hand, even if they appear to
Howard: Companies are absolutely going to ask badly formed questions about their business. They’re going to get a response that sounds reasonable, but will likely be wrong because they don’t know what the hell they’re doing.
Ron: AIs can hallucinate. Unless you have the background to understand that something is completely insane, you will believe it. Only because you have the knowledge can you evaluate the answers.
- Model and algorithm selection will need more effort than perceived
Howard: Setting these models up is not trivial. Businesses are going to make some missteps, from small to huge.
Ron: Many in the press and the AI community have made it seem like training a model is something you do before breakfast, but it’s not. When you train a model, you have to address:
- Which algorithm is going to be best for a particular question?
- What bias is inherent in the way the learning model was created?
- Is there a way to explain the answer that you’re getting?
The bias problem is huge. For example, in IT Ops, if you initially train all of your large language models on a lot of desktop information, when you ask it questions, it will be biased towards desktop. If you train it on, let’s say, infrastructure, it will be biased towards that.
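A toy sketch in Python, with entirely invented data, shows how this kind of skew arises: if the examples a model learns from are dominated by one domain, its answers drift toward that domain regardless of what is asked.

```python
from collections import Counter
import random

# Invented toy data: a 'model' that answers by sampling from what it was
# trained on. If 90% of training examples are desktop tickets, most answers
# will reference desktops, whatever the question is.
random.seed(0)
training_corpus = ["desktop"] * 900 + ["infrastructure"] * 100

def toy_answer(question: str, corpus: list[str]) -> str:
    # Grossly simplified: the answer simply reflects the training distribution.
    return random.choice(corpus)

answers = Counter(toy_answer("why is the network slow?", training_corpus)
                  for _ in range(1000))
print(answers)  # roughly 9 in 10 answers mention 'desktop'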
- Model training won’t take the business into account
Howard: Businesses will feed models tremendous amounts of business data and ask questions about the business itself and will get it wrong. We will have companies that think they’re training because they’re using one of the private GPTs that ChatGPT enables on the marketplace. This isn’t training at all; it’s manipulating a model. Early results are going to get them excited.
Ron: Consider the business data they’re going to feed this with, whether it’s coming from their salesforce or wherever; they’ve never done this type of thing before. Some of the answers will be massively wrong, and making decisions on those will be difficult to impossible.
- Organizations will look to change their structures even before they are on top of it
Howard: 2024 will see companies grossly restrict their operations and hiring, thinking generative AI will help solve the problem. I don’t think we’ll see layoffs, but I think we will see like, hey, I don’t think we need to hire somebody for this. We can fill this role with AI or get enough of an offset with AI. And I think it’s going to go spectacularly, horribly wrong.
- Organizations will go for low-hanging fruit but underestimate the higher branches
Ben Stanford, Head of Research: AI can enable teams to shortcut the menial stuff to add more value. But it feels like it might be a little bit like, oh, it made me write these emails a lot faster, and I could do these things really quickly, and then they start running out of steam a little bit because you have to be reasonably sophisticated to use it in a meaningful way and trust it.
There’s low-hanging fruit, but you must consider how you can implement it in a business to yield value. The question is, do businesses see it that way or say, we can cut headcount? Managers in many structures are rewarded by how many people they can fire, and this looks like one of the perfect excuses to do that.
- Organizational structures will not be set up to benefit
Jon Collins, VP of Engagement: It’s not about whether AI will be useful, but whether people will be able to drive it properly. Will people be able to put the right data into it properly? Will organizations be organized such that an output from some generative thing changes behaviors? If you get that kind of insight and automatically set up that new business line, that’s fair enough. But if you go, “That’s interesting. Now we need to have ten committee meetings,” then things go no further.
Howard: Knowledge is not information; information is not knowledge. Giving the information to a junior analyst doesn’t suddenly provide them with knowledge.
Ron: There is an assumption that junior people will be able to use the answers, and that AI will provide them with the knowledge and abilities of a senior person. No, not exactly: not if you don’t understand the answer or can’t ask the right question.
- Vendors will focus on short-term gain
Howard: We can absolutely blame the big vendors for what they’re doing ‘selling’ their products. They don’t care if executives misinterpret the marketing, then turn around and buy solutions but find out later that, “Oops, we’re now in a three-year contract on something that doesn’t have the value they said it did.”
So, what to do about it?
As a result, say our analysts, business leaders will hit a trough of confusion when they try to deal with the consequences of getting things not quite right. So, what to do? We would say:
- Start anyway, but don’t assume everything is working well already. 2024 is a great year to experiment, build skills and learn lessons without giving away the farm.
- Workshop which parts of the business can benefit, potentially bringing in outside expertise to really think outside the box: beyond insight, productivity, and experience gains, and into product design and process improvement, for example.
- Rather than hoping you can trust models and data sources outside your control, think about the models and data that can be trusted today – for example, smaller data sets with clearer provenance.
Overall, be excited, but be careful and, above all, be pragmatic. There may be a first-mover advantage to generative AI, but beyond this point, there are also dragons, so keep your eyes open and your sword sharp. Even with AI, the first thing to train is yourself.
December 2023
12-08 – Five security, networking and management predictions for 2024
Five security, networking and management predictions for 2024
So, how are the more engineering levels of tech evolving – security, networking, and management? We spoke to some of our experts in these areas – lead analysts Andrew Green and Paul Stringfellow, operations lead Ron Williams, head of research Ben Stanford, and CTO Howard Holton. Let’s see what they had to say.
1. Security categories are evolving, but this is causing more confusion than clarity
Andrew: Security vendors no longer know which products are which and how to position themselves – and it will get worse before it gets better. We’re seeing this across security market categories. There’s a kind of mishmash – like network access control (NAC) and extended detection and response (XDR), or security information and event management (SIEM) versus security orchestration, automation, and response (SOAR). Zero-trust network access (ZTNA) is a genuinely confusing term right now, because too many vendors are using it to mean different things.
2. Networking vendors with their own infrastructure can pivot
Andrew: I’m seeing how networking vendors with their own infrastructure are becoming better positioned to deliver new products and services. If you have your own network backbone and operate stuff like data centers or points of presence, you can pivot into different products more easily. For example, consider the NaaS vendors doing multicloud networking – it’s all because they have the pipes. This is also about capacity – if you have your own hardware deployed already, you can just use the overhead capacity or reallocate some of that capacity to develop new products.
3. Vendors are bolting AI chat interfaces onto their security and management software tools
Paul: It’s a risk that people assume that having a chat interface will make security problems disappear or give junior analysts the resource and information that senior analysts might have. The reality is you’re still going to need people who absolutely understand the space, security or otherwise.
Ron: Management tools vendors will want to say they’ve done something with generative AI. However, the process of going from monitoring to observability, to where you have predictive AI, then to intelligence, where you have generative AI and can ask questions about the entire business or the way in which the business operates, is two years out.
4. AI is having an impact on social engineering attacks
Ron: AI is empowering the hacking community to get more creative in how they can disrupt the business. For example, one of the casinos in Vegas was hacked with the help of voice cloning. Someone heard the voice, thought they knew who it was, acted on it, and it was wrong.
Ben: Phishing attacks are getting way better. Already, bad actors can make them sound much more plausible in multiple languages, at scale. They can nuance and refine them much quicker. A/B testing of their effectiveness has gone up as well.
Ron: We’ll see vendors coming out with tools that use AI to counter AI-driven attacks. I know, this sounds weird. We could have an AI attack hallucinate, and then defensive tools deliver hallucinatory responses.
5. Businesses need to rethink their communications in the light of Spam
Ben: If I get a message now, I don’t trust it; I assume it’s spam. Businesses need to think about a strategy for how to communicate if end users are mistrustful of corporate communications.
Howard: We have to start restricting the information that gets put into an e-mail and shift it to something else. Chat is the logical place, especially as it’s become increasingly valuable to organizations.
12-11 – 2024 predictions redux: data, storage, infrastructure, and services platforms revisited
2024 predictions redux: data, storage, infrastructure, and services platforms revisited
In this, the third in our series of predictions blogs, we turn to the increased focus on platforms – that is, coherent and managed layers of technology, rather than point solutions or best-of-breed deployments. Let’s see what our expert analysts, Iben Rodriguez, Andrew Green, Ron Williams, Paul Stringfellow, and Ben Stanford, have to say about platforms in all their guises. TL;DR – With a still-moving tech landscape, enterprises can make decisions based on cost-effectiveness, building skills, and keeping it simple at the core.
Iben: Two opposing things are happening. Solution providers that came out with a point solution to solve a specific problem are now trying to become platform players. Big vendors are absorbing point solutions and building out their platforms. At the same time, the opposite is happening, where CIOs want to provide microservices or be more dynamic.
Ron: Sure, platforms are becoming more sophisticated and diverse in what they can do, but that doesn’t mean every platform is an expert at everything. When you look at platform X, it may do very well in three or four categories. Platform Y does well in two or three others. Yet the marketing is that each platform now takes care of everything. That is a risk to enterprises because they may not get the best they need for any particular purpose.
Jon: Perhaps organizations have had enough of marketing. I’m seeing enough around cloud repatriation back to on-premise, and a move away from Kubernetes, back to monoliths, to suggest a rebalancing. Rather than trying to build applications as massive, distributed, cloud-native things, how about we just build them on a stack like we used to? Enterprises are taking stuff off cloud and building it in the old way, and sometimes finding that it’s cheaper or more efficient to do that.
Ron: Keep in mind that it’s a two-sided coin. Time to market drove the microservices business. A monolith works if you have a well-defined application that won’t change massively and doesn’t face time-to-market pressures. But for things that have to change weekly or monthly, the microservices model will continue to be very strong, even though it’s complex.
Jon: Yes, we’re going around the buoy again. Outsourcing begat right sourcing, and then offshoring became smart shoring. Do we need a new term, like “right-clouding”? I’m sure there’s a better term, but it’s about keeping responsibility. It’s another iteration of the “Oh crap, we couldn’t just give away the farm and expect the farm to run itself” cycle.
Andrew: It depends on the type of organization. Startup developers are more comfortable and have experience working with three cloud environments, but it is harder to learn about and work with twenty on-premises hardware/software vendors. Though interestingly, I see more products purpose-built for cloud use cases than for hybrid environments, despite widespread agreement that hybrid is the future.
Jon: Perhaps that will change with AI demand. Data management platform providers will benefit from AI, whether AI delivers or not. It will drive demand for data management products – including pipelines, catalogs, and everything else. Everyone will want to do more with data and use more data in the process. They will come up against governance, compliance, sovereignty, and quality challenges and will need to address these as they go.
Paul: The storage industry could step back into disciplines that have been largely forgotten, such as cleansing data sets or moving data sets around more easily – for example, putting a data set into a training model much more easily than trying to lift the whole thing. It could also revisit technology like cloning – instead of copying lots of data sets, creating versions that can be removed once they are used.
Ben: There’s a people question with all this. We’re negotiating the rocks in AI, trying to work out what’s going on. If you’re an IT leader, your understanding of the choices you’re making, the complexity you’re buying, and the staffing requirements for a skilled IT team to help you make good decisions – these areas are getting tougher, not easier.
Iben: For enterprises, this means keeping things simple first, and building on top. In Gordon Ramsay’s cooking show, he goes into a restaurant, and the first thing he does is look at the menu. The poorest performing restaurants have huge menus that are super complicated with too many offerings. So, he says, “You’ve got to get your menu down to one page. Make it really simple. Focus on doing these things well.” Then, customers can order something that’s not on the menu. “If you feel comfortable doing it, you can do it.”
For further reading on enterprise strategy for platforms and how these drive vendor relationships, check out our interview with Koby Avital!
12-22 – GigaOm Research Bulletin #006
GigaOm Research Bulletin #006
Welcome to GigaOm’s research bulletin for December 2023
Hi, and welcome to the last bulletin of 2023! As we near the end of the year, we would like to express our huge gratitude to all who have engaged with us in our research processes, given us feedback, welcomed us at events, and otherwise worked with us proactively this year.
As part of our ongoing process of improvement, we are excited to announce the release of our Online Research Calendar, covering the scheduling of GigaOm Radar reports over the next two quarters. This is being updated regularly as part of our PMO process, so the information is coming straight from the source. We are juggling 120 topics across (many) hundreds of vendors, legions of analysts, and numerous external events – so bear with us if schedule conflicts arise; we will endeavour to accommodate! Please do let us know any feedback you may have.
And what about GigaOm in 2024? For us, this year was a time of change, of creating a sustainable and efficient research platform that we can build upon. This work is never done of course, but increasingly we are turning our attention to services across end-user businesses, vendors, and channel organizations. Engage, enable, and empower with research; that’s the name of our game!
On which note, in one of the latest instalments of our podcast, The Good, The Bad & The Techy, some of our analysts and leadership team got together to discuss the realities of finance and technology investment in today’s market – Do give it a listen.

Research Highlights
See below for our most recent reports, blogs and articles, and where to meet our analysts in the next few months. Any questions? Let us know. 
Trending: Autonomous Security Operations Center (SOC) released in November is one of our top Radar reads right now. “There’s no way around automation to cope with today’s demands, and most security providers share a vision for how a security operations center will work in the future,” says author Andrew Green.
**We are currently taking briefings on:** Enterprise Firewalls, Kubernetes for Edge, Disaster Recovery and Business Continuity, Alternatives to Amazon S3, Cloud Observability, GitOps, Deception Technologies, Unstructured Data Management (Business and Infrastructure focused), Time Series Database, Cloud Performance, and Cloud Networking.
Warming up are: SaaS Security Posture Management, Network Operating Systems for Cloud, MSP/NSP, Enterprise, and SMB, Cloud FinOps, Streaming Data Platforms, Data Pipeline, Microsegmentation, XDR, DSPM, Object Storage for Enterprise and High Performance Workloads.
Recent Reports
We’ve released 18 reports in the period since the last bulletin.
In Analytics and AI, we have released a report on Data Warehouses.
For Cloud Infrastructure and Operations, we have Data Processing Units (DPUs), Incident Response Platforms and API Functional Automated Testing and in Storage, we have covered Distributed Cloud File Storage, High Performance Cloud File Storage and Scale-Out File Storage.
In the Security domain, we have released reports on Ransomware Prevention, Autonomous Security Operations Center (SOC), Penetration Testing as a Service (PTaaS), Cloud Network Security, Container Security, Threat Intelligence Platforms (TIPs), Data Security Platforms (DSP), and User and Entity Behavior Analytics (UEBA).
In Networking (or is that security as well?), we have covered Software-Defined Wide Area Networks (SD-WAN).
And in Software and Applications, we have a report on Regulated Software Lifecycle Management (RSLM) and Digital Asset Management.
Blogs and Articles
Predictions for 2024! – Our analysts give their predictions in Tech for 2024 in our series of blogs, Seven reasons why generative AI will fall short in 2024, Five Security, Networking and Management Predictions for 2024 and 2024 Predictions Redux: Data, Storage, Infrastructure, and Services Platforms revisited.
Other blogs include:
- Dana Hernandez discusses Intelligent Automation vs Hyperautomation.
- Jon asks Five questions of Simon Pilar of Clario on Observability and AIOps, and Five questions of Koby Avital, Strategic Advisor, X-EVP, on business-first transformation and strategic vendor partnerships.
Quoted in the Press
GigaOm analysts are quoted in a variety of publications. Recently,
- Security Awareness Training | SHRM – Jamal Bihya
- Cloud Storage | Blocks & Files – Max Mortillaro & Arjan Timmerman
Where To Meet GigaOm Analysts
In the next few months you can expect to see our analysts at Mobile World Congress Barcelona, Tech Show London, and KubeCon & CloudNativeCon Paris. Do let us know if you want to arrange a meeting.
For news and updates, add analystconnect@gigaom.com to your lists, and please get in touch with any questions. Thanks, and wishing you a very Happy Festive Season and a prosperous New Year!

2024
Posts from 2024.
January 2024
01-05 – On Microsoft’s Radius, and building bridges between infra, dev and ops
On Microsoft’s Radius, and building bridges between infra, dev and ops
First, a story. When I returned to being a software industry analyst in 2015 or thereabouts, I had a fair amount of imposter syndrome. I thought, everyone’s now doing this DevOps thing and all problems are solved! Netflix seemed to have come from nowhere and said, you just need to build these massively distributed systems, and it’s all going to work – you just need a few chaos monkeys.
As a consequence, I spent over a year writing a report about how to scale DevOps in the enterprise. That was the ultimate title, but at its heart was a lot of research into, what don’t I understand? What’s working; and what, if anything, isn’t? It turned out that, alongside the major successes of agile, distributed, cloud-based application delivery, we’d created a monster.
Whilst the report is quite extensive, the missing elements could be summarized as – we now have all the pieces we need to build whatever we want, but there’s no blueprint of how to get there, in process or architecture terms. As a result, best practices have been replaced by frontiership, with end-to-end expertise becoming the domain of specialists.
Since my minor epiphany we’ve seen the rise of microservices, which give us both the generalized principle of modularization and the specific tooling of Kubernetes to orchestrate the resulting, container-based structures. So much of this is great, but once again, there’s no overarching way of doing things. Developers have become like the Keymaker in The Matrix – there are so many options to choose from, but you need a brain the size of a planet to remember where they all are, and pick one.
It’s fair to bring in science fiction comparisons, which tend to be binary – either sleek lines of giant, beautifully constructed spaceships, or massively complex engine rooms, workshops with trailing wires, and half-built structures, never to be completed. We long for the former, but have created the latter, a dystopian dream of hyper-distributed DIY.
But we are, above all, problem solvers. So, we create principles and tools to address the mess we have made: site reliability engineers (SREs) to oversee concept to delivery, shepherding our silicon flocks towards success; and observability tools to solve the whodunnit challenge that distributed debugging has become. Even DevOps itself sets out its stall around breaking down the wall of confusion between the two most interested parties: the creators of innovation, and those shovelling up the mess that often results.
The clock is ticking, as the rest of the business is starting to blink. We’re three to four years into much-trumpeted ‘digital transformation’ initiatives, and companies are seeing they don’t quite work. “I thought we could just deploy a product, or lift and shift to the cloud, and we’d be digital,” said one CEO to us. Well, guess what, you’re not.
We see the occasional report that says an organization has gone back to monoliths (AWS among them) or moved applications out of the cloud (such as 37 Signals). Fair enough – for well-specced workloads, it’s more straightforward to define a cost-effective architecture and assess infrastructure costs. For the majority of new deployments, however, even building a picture of the application is hard enough, let alone understanding how much it costs to run, or the spend on a raft of development tools that need to be integrated, kept in sync and otherwise tinkered with.
I apologize in part for the long preamble, but this is where we are, coping with the flotsam of complexity even as we try to show value. Development shops are running into the sand, knowing that it won’t get any easier. But there isn’t a side door you can open, to step out of the complexity. Meanwhile, costs continue to spiral out of control – software-defined sticker shock, if you will. So, what can organizations do?
The playbook, to me, is the same one I have often used when auditing or fixing software projects – start figuratively at the beginning, look for what is missing, and put it back where it should be. Most projects are not all bad: if you’re driving north, you may be heading roughly in the right direction, but stopping off and buying a map might get you there just a little bit quicker. Or indeed, having tools to help you create one.
To whit, Microsoft’s recently announced Radius project. First, let me explain what it is – an architecture definition and orchestration layer that sits above, and works alongside, existing deployment tools. To get your application into production, you might use Terraform to define your infrastructure requirements, Helm charts to describe how your Kubernetes cluster needs to look, or Ansible to deploy and configure an application. Radius works with these tools, pulling together the pieces to enable a complete deployment.
You may well be asking, “But can’t I do that with XYZ deployment tool?” because, yes, there’s a plethora out there. So, what’s so different? First, Radius works at both an infrastructure and an application level; building on this, it brings in the notion of pre-defined, application-level patterns that consider infrastructure. Finally, it is being released as open source, making the tool, its integrations, and resulting patterns more broadly available.
As so often with software tooling, the impetus for Radius has come from within an organization – in this case, from software architect Ryan Nowak, in Microsoft’s incubations group. “I’m mostly interested in best practices, how people write code. What makes them successful? What kind of patterns they like to use and what kind of tools they like to use?” he says. This is important – whilst Radius’ mechanism may be orchestration, the goal is to help developers develop, without getting bogged down in infrastructure.
So, for example, Radius is Infrastructure as Code (IaC) language independent. The core language for its ‘recipes’ (I know, Chef uses the same term) is Microsoft’s Bicep, but it supports any orchestration language, naturally including the list above. As an orchestrator working at the architectural level, it enables a view of what makes up an application – not just the IaC elements, but also the API configurations, key-value store and other data.
Radius then also enables you to create an application architecture graph – you know what the application looks like because you (or your infrastructure experts) defined it that way in advance, rather than trying to work it out in hindsight from its individual atomic elements like observability tools try to do. The latter is laudable, but how about, you know, starting with a clear picture rather than having to build one? Crazy, right?
As an ex-Unified Modeling Language (UML) consultant, the notion of starting with a graph-like picture inevitably makes me smile. While I’m not wedded to model-driven design, the key was that models bring their own guardrails. You can set out what can communicate with what, for example. You can look at a picture and see imbalances more easily than in a bunch of text: monolithic containers, say, versus ones that are too granular or that have significant levels of interdependency.
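This is not Radius itself, but a minimal Python sketch of the underlying idea: if the application graph is declared up front, guardrails such as “what may talk to what” can be checked before anything is deployed. All the service names here are hypothetical.

```python
# Hypothetical sketch: declare the application as a graph up front and
# validate communication rules against it, rather than reconstructing the
# picture after deployment from telemetry.
ALLOWED_EDGES = {
    ("frontend", "orders-api"),
    ("orders-api", "orders-db"),
    ("orders-api", "payments-gateway"),
}

declared_edges = {
    ("frontend", "orders-api"),
    ("orders-api", "orders-db"),
    ("frontend", "orders-db"),  # breaks the guardrail: UI talking to the database
}

def check_guardrails(edges, allowed):
    violations = [e for e in edges if e not in allowed]
    for src, dst in violations:
        print(f"Guardrail violation: {src} must not communicate with {dst}")
    return not violations

if check_guardrails(declared_edges, ALLOWED_EDGES):
    print("Architecture graph passes; safe to hand to the orchestrator.")
```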
Back in the day, we also used to separate analysis, design, and deployment. Analysis would look at the problem space and create a loose set of constructs; design would map these onto workable technical capabilities; and deployment would shift the results into a live environment. In these software-defined days, we’ve done away with such barriers – everything is code, and everyone is responsible for it. All is well and good, but this has created new challenges that Radius looks to address.
Not least, by bringing in the principle of a catalog of deployment patterns, Radius creates a separation of concerns between development and operations. This is a contentious area (see above about walls of confusion), but the key is in the word ‘catalog’ – developers gain self-service access to a library of infrastructure options. They are still deploying to the infrastructure they specify, but it is pre-tested and secure, with all the bells and whistles (firewall configuration, diagnostics, management tooling and so on), plus best practice guidance for how to use it.
The other separation of concerns is between what end-user organizations need to do and what the market needs to provide. The idea of a library of pre-built architectural constructs is not new, but if it happens today, it will be an internal project maintained by engineers or contractors. Software-based innovation is hard, as is understanding cloud-based deployment options. I would argue that organizations should focus on these two areas, and not on maintaining the tools to support them.
Nonetheless, and let’s get the standard phrase out of the way – Radius is not a magic bullet. It won’t ‘solve’ cloud complexity or prevent poor decisions from leading to over-expensive deployments, under-utilized applications, or disappointing user experiences. What it does, however, is get responsibility and repeatability into the mix at the right level. It shifts infrastructure governance to the level of application architecture, and that is to be welcomed.
Used in the right way (that is, without attempting to architect every possibility ad absurdum), Radius should reduce costs and make for more efficient delivery. New doors open, for example, to provisioning multi-cloud resources with a consistent set of tools, and to increasing flexibility around where applications are deployed. Costs can become more visible and predictable up front, based on prior experience of using the same recipes (it would be good to see a FinOps element in there).
As a result, developers can indeed get on with being developers, and infrastructure engineers can get on with being that. Platform engineers and SREs become the curators of a library of infrastructure resources, creating wheels rather than reinventing them and bundling policy-driven guidance their teams need to deliver innovative new software.
Radius may still be nascent – first announced in October, it is planned for submission to the Cloud Native Computing Foundation (CNCF); it is currently Kubernetes-only, though given its architecture-level approach, this does not need to be a limitation. There may be other, similar tools in the making; Terramate stacks deserve a look-see, for example. But with its focus on architecture-level challenges, Radius sets a direction and creates a welcome piece of kit in the bag for organizations looking to get on top of the software-defined maelstrom we have managed to create.
February 2024
02-23 – Transformational Training as Lived Experience: 5 Questions for Heather MacDonald, Pluralsight
Transformational Training as Lived Experience: 5 Questions for Heather MacDonald, Pluralsight
I spoke with Heather MacDonald, principal consultant for technology training and online learning platform Pluralsight, about how to align learning with the broader goals of an organization.
JC: Heather, you were previously VP of strategy at a midsized bank—what brought Pluralsight into your life?
HM: I was in charge of strategy, change management, internal communication, employee engagement, women in tech, workforce of the future, and data analysis for the executive team. I wondered, could I take everything I’ve learned and see how it applied across larger enterprises? I came over to Pluralsight to do this.
My career path has been everything under the sun: construction and retail, restaurants and nonprofits, big and small companies. This allowed me to identify many common patterns across multiple sectors.
Also, I wanted to make workplaces more equitable so everyone would have the opportunities I did. I started at the bottom. My first job was as a construction admin for my dad, who didn’t have the budget to hire a full-time professional. From there, I never stopped learning, never stopped taking on responsibilities, and always showed up so I wouldn’t let down the people who opened doors for me.
On paper, I am not the typical candidate for the job that I’m doing. I don’t have the education, certifications, or time in a Big Four consulting company. What I do have is decades of lived experience, and I think the same can be true for so many other people. They also need that first door to be opened for them, then the understanding of how to open doors for themselves. That’s what I set out to do across the strategies and programs I create.
JC: You’ve literally lived the experience of self-development and growth; that expression was made for you. How does your job map into practice? Is it individual customers, or is it broader?
HM: I mostly work with individual customers to co-create the right solution based on their program maturity and pain points. We look at things like, “What is the strategy for this specific client? What are they hoping to achieve, learning and development-wise, and how does that connect to the business strategy?” Then, we distill that into actionable steps to support the learning and development of their people.
Part of it is sharing broader thought leadership about what these strategies look like and what they are in practice. This could be writing blog posts, presenting on webinars, or hosting workshops both locally and globally.
Then some of the work is internal facing. Because my role touches everything in Pluralsight—I’m collaborating with sales, customer success, product, and other teams—we work together to figure out the best solution for our customers and how to help them achieve it.
JC: You touched on discerning the best strategies to help people. I’m a great believer in imposter syndrome—how do you approach arriving at a new company full of smart people?
HM: What I’ve realized in this job is that no matter which industry they sit in or how big they are, the issues people face are super common and sometimes self-evident. If you’ve worked broadly in business, you can see the bigger landscape.
The problem is that everyone wants a silver bullet. They want Pluralsight to fix 100% of their problems overnight. That won’t work, so we need to work through that. You need to spend time learning about the organization to help the organization learn and improve. I never want to be seen as a consultant who thinks I know better and only gives orders. I want to walk alongside someone on their transformation journey to ensure they can be successful.
It’s like learning to drive. You don’t hand the keys to a Lamborghini to a 16-year-old and say, “Good luck, have fun, and I’ll see you in an hour.” They need to learn the book stuff, then go on the range and drive in a controlled environment. But people want to give you their Lamborghini and say, “Go ahead, figure out my entire company.” Even if you’re a great driver, you don’t just start driving and understand there’s a problem with the alternator, or that you need new tires.
In general, though, we do see patterns repeating. For example, we’ve all worked in places where change strategy starts at the top, and the executives and often senior leaders fully get it; they are bought in. But you hit layers seven, eight, and nine, and those people have no idea why they are here and why they matter. “I’m just a cog in the wheel,” they think, so how can they be bought into company-level change?
From a strategy perspective, it’s about stepping back and saying that if you as a business aren’t working through change management and communications effectively, none of this matters. You’ll never get anywhere if you can’t communicate down, up, and across. You need to create the environment and safety for the changes your organization needs to make.
At the pace of technological change and evolution, we can’t expect any one person to know it all anymore. We have to step back and say it’s more about collaborative and real-time learning and making sure people can fill the needs they have today. That’s where mentorship and practitioner support come in.
JC: Often, new organizations haven’t done the masterclass of business growth; they’re learning on the spot. Meanwhile, bigger companies are not able to change. They’re siloed. It’s less about telling them how to do the stuff they’ve been doing for 20 years and more about helping them understand how to align with the new. We all need that collaborative, transformational stuff. You don’t learn the theory and then suddenly change.
JC: Darrel Kent, one of our lead analysts, said that when he’s helping newer organizations, they’re learning old principles for the first time—how do you address that?
HM: It’s not the fault of executives who have been in business for decades. What worked back then was to go to school, get a degree, get a job, work your way up, and you could afford to buy the house with the white picket fence, drive the nice car, and feed your family on one income. Legacy industry execs, like those in banking, utilities, and telecom, sometimes feel like what worked for them should work for everyone and don’t understand why folks are pushing for more remote work and different benefit options.
With all that has happened in our world, we’re in a time where that plan for career success doesn’t work anymore. You can get a good job and still not be able to buy a house, buy a car, or afford a family. You can go to a top-tier school and still not get a job because you don’t have the experience. We have to honor where you’ve been and acknowledge that if we want to remain competitive and grow, we must make incremental changes.
We can’t expect every executive leader to understand how to navigate a fully hybrid and remote environment. That is challenging for people who are used to doing it one way because that worked for them before. So, how do we support the top layer of executives and leaders to learn the skills and capabilities they need to continue leading companies? We need to set egos and titles aside and realize we’re all learning through this. We need to step back and collectively figure out how the world of work is going to look going forward, and honestly, it’s likely to keep changing over time. Anyone looking for a static way of leading is going to get left behind.
JC: I have to say I am slightly disappointed it’s not old guys smoking cigars and sitting in big leather chairs dictating letters anymore! I was looking forward to that.
HM: Ha! I still encounter people who say, “Could you fax me that agenda?” No, you can open the attachment. It’s one page with three bullet points. “Oh…can you print it?” No, we’re saving trees today. This agenda doesn’t need to be put in a filing cabinet.
JC: Given quick fixes aren’t an option, how do you put a strategy together that will work for such a range of people?
HM: I tell people considering our service that we’re not consultants who tell you everything you’re doing was wrong and then disappear. We help you start to make progress toward your transformational change goals. By nature, transformation does not happen overnight. It takes time, effort, and evolution.
We go back to basics and the foundation: OK, you’re trying to upskill a workforce. The disparity between your organization’s least and most technical person is probably massive. So, how do you get everyone on the same page?
It’s not the same for every organization, but you can figure out what will fit most people. Some, who are reasonably skilled and have a decent amount of time, can self-select into a program. Then, figure out the outliers, the people who are super far behind or ahead. What do they need to be doing? It will require different solutions for them.
In cybersecurity training, for example, maybe your warehouse teams need the most attention today because someone clicked on a phishing email and caused a data breach. You need to think about what cybersecurity training looks like for people in a warehouse. What works as cybersecurity training for people in an office setting is not going to be the most applicable, or effective, way to train people in a warehouse or who have roles that aren’t tied to a desk.
Even if you are reasonably technical in your role, that’s no protection. We have to make sure everyone understands that one bad email could take down your entire company. We don’t want people to be scared and paralyzed, but we want them to have a strong enough sense of awareness that they don’t click on the thing that could be a bad link.
Even cybersecurity professionals at the top of their game who have been doing this forever are having to adapt because everything keeps changing. Attacks that happened yesterday are not the attacks that will happen tomorrow. There’s constant anxiety of, “Am I going to be the person that misses the thing that takes down my company?” That group needs a different level of tech skills development support and engagement to ensure we’re not burning out the people who need to be well rested and prepared if things go wrong.
JC: Oh, this resonates. When I used to do security awareness training, we tried to help people think a bit more—about leaving passwords on a Post-it, for example. There’s more to learning than pointing people at a training manual.
HM: Yes, indeed, it’s the 70-20-10 model for learning. 70% of learning needs to be hands-on and experiential, like labs, job rotations, and stretch assignments. 20% of it should be social learning like mentoring, communities of practice, coaching, or buddy systems. The last 10% is formal learning, videos, books, college courses, and certifications. Formal learning is good for gaining knowledge but doesn’t translate into wisdom until you put it to work.
If you can’t contextualize what you’ve learned, you’re book smart. With that 70%, you can fill the gap between “I learned a thing” versus “I know what this means within the context of my role, my business, the economy, and the world around me.” It’s the difference between learning something and bringing it into your own lived experience.
JC: Thank you so much, Heather!
HM: My pleasure.
02-26 – All About the GigaOm Radar | Explainer Video
All About the GigaOm Radar | Explainer Video
We’re delighted to announce the launch of our in-depth GigaOm Radar explainer video, which tells you everything you need to know about GigaOm Radar reports: how the GigaOm Radar chart works, how we put it together, and how to read the results.
Simply put, the GigaOm Radar report is written by engineering leaders to help other engineering leaders make engineering-led decisions.
So many reports we see today are based on market factors such as vendor market share, vendor incumbency, customer share, and so on. But CTOs, VPs of engineering and operations, and data architects all need to know how a product will fit their needs at a technical level. That’s what the GigaOm Radar report sets out to do.
As the video shows, we do rank solutions, just not according to market factors. Rather, we focus on delivery capability, strength of offering across differentiating features and business criteria, and speed of movement according to vendor proactiveness and roadmap.
In addition, there’s no magical place where the best products exist to the detriment of the rest. GigaOm recognizes that some buyers might want a broad platform that covers a wide set of needs reasonably well, and other buyers might want a specific solution to meet a certain need.
For product and service vendors, the GigaOm Radar acts like the solution architect or pre-sales engineer in the (virtual) room, who can discuss end-user needs in more detail and address any technical practicalities. Meanwhile, technology leaders can use the GigaOm Radar as a decision-making framework, which they can apply to their own scenarios.
Fundamentally, the GigaOm Radar is a learning tool to drive purchase decisions. All products have strengths and weaknesses, and the GigaOm Radar makes for more straightforward, facts-based decision-making, optimizing efficiency and reducing deployment risk.
So, settle back, get a beverage of your choice, and dive in. For reference, the video comprises the following:
0:00-4:20—Introduction to GigaOm Radar concepts and purpose
4:20-6:45—Explaining the Leader, Challenger, and Entrant rings
6:45-end—How GigaOm scores and plots solutions, and how prospective customers can build a shortlist
Enjoy! And of course, if you have any questions or feedback, don’t hesitate to get in touch.
02-29 – Digital Transformation Not Working? Here’s a Five-Point Plan That Can Help
Digital Transformation Not Working? Here’s a Five-Point Plan That Can Help
Digital transformation is the holy grail of businesses today, but the route is bumpy and the journey full of challenges. Businesses have, in waves, seen tech innovation as the gateway to unimaginable riches, all the while being unable to break from the shackles of reality.
Transformations that Haven’t Delivered
The cloud is the most obvious example. The idea you could replace all your systems and infrastructure with brand new stuff that’s easier to use, deploy, and scale—at a lower cost—has been hugely compelling. Unfortunately, the principles may be true, but then there’s the Jevons paradox: simply put, the more you have of something, the more you use it, so instead of costs lowering over time, as you might expect with increased efficiency, they increase with greater use.
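A back-of-envelope illustration, with entirely invented numbers, shows why cheaper units do not guarantee a smaller bill:

```python
# Invented numbers to illustrate the Jevons paradox: unit price falls,
# consumption rises faster, and total spend goes up.
on_prem_unit_cost, on_prem_units = 1.00, 1_000   # baseline spend: 1,000
cloud_unit_cost = 0.70                           # 30% cheaper per unit
cloud_units = on_prem_units * 1.8                # but usage grows 80%

baseline_spend = on_prem_unit_cost * on_prem_units
cloud_spend = cloud_unit_cost * cloud_units
print(baseline_spend, cloud_spend)  # 1000.0 vs 1260.0: cheaper units, bigger bill
```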
As a result, boring finance types have been turning the screws on cloud spend. Moreover, at the end of 2022, interest rates started going up and money started to cost money, moving cost optimization from a “nice-to-have” to a need. Accountants are winning and innovators can’t just do whatever they like, hiding behind the banner of “digital transformation.”
If the momentum behind cloud-for-cloud’s-sake marketing is waning, it’s no surprise we’re turning our attention to the next bit of technological magic—artificial intelligence (AI)—or more accurately, large language models (LLMs). And this new manifestation has attributes similar to cloud: apparent low cost of entry, transformational potential for business, and so on.
But even as it begins its transformations, it is already signaling its doom, not least because LLMs are about processing rather than infrastructure. They create new workloads with new costs, rather than shifting a workload from one place to another. It’s a new line on the budget, a separate initiative with no real efficiency argument to sell it.
“But it has the power to transform!” cry the heralds. Sounds good, but the last wave of digital transformation delivered, to put it kindly, sub-optimally. We’re three to four years into many initiatives, and the rubber has been hitting the road, in some cases with much wailing and gnashing of teeth.
The Challenge with Digital Transformation
According to a recent PwC survey, “82% of CIOs say achieving value from adopting new technologies is a challenge to transforming.” Read between the lines and that says, “We’re not sure we’re getting what we thought out of our initiatives.” Meanwhile, consider the 100 million pounds spent by the UK city of Birmingham on Oracle-based solutions, a significant factor in the city declaring bankruptcy.
For every public example of a large technology project failure, tens more go undeclared. A technology vendor told me that it was common to hear of a business investing millions into a platform, only to quietly write it off a couple of years later. As another example, an end-user organization told me it had adopted a scorched-earth policy to move its infrastructure to the cloud, before rolling it back in pieces when it found it couldn’t lift and shift the entire application architecture with its manifold vagaries.
I get why people buy into the dream of massive change with epic results. I mean, I love the idea. Earlier in my career, I learned how end-user decision-makers were driven by how something would look on their CV, and that vendor sales representatives were highly focused on hitting their quarterly targets.
So, a lot of people have been duped into believing they could make these massive, sweeping changes in IT, with life-altering results. Obviously, it can work sometimes, but it isn’t a coincidence that most happy case studies come from smaller organizations because they’re of a size that can actually succeed.
Technology done right can achieve great things, but success can’t be guaranteed by technology alone. Sounds glib, right? But to the point: the problem is not the tech, it’s the complexity of the problem, plus the denialism that goes with the feeling it will somehow be different this time (cf: the definition of insanity).
Complexity applies to infrastructure—whether in-house and built to last yet frequently superseded, or cloud-based and started as a skunkworks project yet becoming a pillar of the architecture. As a consequence, we now have massive, interdependent pools of data, inadequate interfaces, imperfect functionality, and that age-old issue of only two people who really understand the system.
Unsurprisingly, simplification seems to be a massive theme among many technology providers right now—but, meanwhile, business has plenty of complexity of its own: bureaucracy, compliance issues, cross-organizational structures, conflicting policies, and politics at every level. Have you ever tried to make any substantive change to anything in your business? How did that go for you?
I am reminded of a book I once read about quality management systems—a.k.a. process improvement—by Isabelle Orgogozo. This line, while paraphrased here, has stuck in my head ever since, “You can’t change the rules of a game while playing the game.” Why? Because of the fearful and competitive nature of humanity. If you don’t address this, you will fail.
Let’s be clear—technology creates complexity, and it doesn’t even come close to solving corporate complexity. That’s the bad news. Much as we may want some corporate utopian techno-future to be enabled at the flick of a switch, and as much as we have literally banked on it (and may be doing so again, with LLMs), this is never going to happen. You may want the problem to go away with a tool, but it won’t. Sorry!
Getting Transformation to Work: The Five-Point Plan
So, what to do about it? Can you transform the untransformable, slay the dragons of complexity, and overcome organizational inertia? The answer is, I know it can be done, if certain pieces are in place. Conversely, if those pieces aren’t present, you stand less chance of success. It’s a bit like the sport of curling—achieving goals is as much about removing the things that will cause failure as it is about attempting that perfect shot at the goal.
1. Start with the Business Strategy
I know, I know–yawn. We can all fall into a rhetorical black hole if we start down the track of, well, it’s just about the business strategy, isn’t it? That’s game over. It’s always about business strategy.
But that’s the point. In their rush to digital, companies have been losing touch with the tangible. Fine that the business strategy has been digital-first, but not so fine that it has been business-second.
No car company is going to wake up tomorrow and not be a car company. We’ve all heard manufacturers claim they no longer just sell boxes on wheels, but that’s a big mistake, because people are buying the boxes on wheels and they don’t care about software. You might innovate on the software, but ultimately, people are buying the box.
Technology may augment, automate, and even replace in terms of what we do, but it needs to be an equal partner in why we do it.
That “all businesses are software businesses” thing only works if—and here’s the rub—we don’t treat tech as a solve-all. There is never, ever, any excuse for assuming that the answers lie in the technical, and therefore one doesn’t have to think about business goals too much. We all do it, buying stuff to make our lives better, without thinking about what it is we need first.
An easy win is to address this all-too-human trait first. So, what are your strategic initiatives? What’s getting in the way of them? Start there. Absolutely feed in what tech has the potential to do, it would be insane not to. But put business goals first.
2. Align Technical Initiatives to the Business Strategy
The tool should change the business for the better, or it’s no good. So, what’s the business change you’re looking for? And how is the tool going to help you get there?
GigaOm’s Darrel Kent discusses three types of business improvements in his blog: product innovation, customer growth, and operational efficiency. Obviously, operational efficiency is a big target right now, but so is product and service innovation.
An old consulting colleague and mentor of mine, Steve, used to be brought in when major change programs had gone off the rails. He was a rough, bearded bloke from the north of England, and he would start by asking, “What’s the problem we’re trying to solve here?”
It is never the wrong time to confirm business objectives and to ask how existing initiatives align with and drive them. If the answer is complex and starts to go into the weeds, you already have a problem. Cue another human trait: our inability to change course once it is set—which is why we bring in consultants at vast expense when things have gone wrong. You don’t need to wait for that moment, however; spotting failure in advance is not failure, it’s success.
Current projects might be about cost-efficiency, rationalization, and modernization, which is laudable, but could equally indicate an opportunity lost if all you are doing is looking for savings. So, look for gaps as well, as parts of your business strategy may be underserved. Remember the axiom (which I read somewhere, once) that success comes from cutting back quickest when a downturn happens, and coming out of it fastest when things start looking up.
3. Think about Technologies in Terms of Platforms
Let’s keep this simple: if you don’t have a target architecture, you need one. I’m not talking about the convoluted mess your IT systems are in right now, but the shape of how you want them to be. The more you can push into this technology layer—let’s call it a platform—the better.
This does fit with the adage (which I just made up), “better to be the best in your sector than the best infrastructure engineer.” Yes, you will have to bank on a technology provider or several, so put your time and effort into building those relationships and defining your needs rather than burning cycles trying to keep systems running.
As a five-year plan, look to pin down the platform as a basis for customization to meet your business goals, rather than trying to get your custom solutions into a coherent platform that you can then, ahem, customize. I could spend time now talking about multicloud plus on-premises, but I won’t, not here.
4. Align Platforms to Scenarios and Map to Workloads
How do you know if the platform is going to deliver? Simple: you can work through my SWB (scenario-workload-blueprint) model. OK, there is no such model, I just made it up. But let’s go through it piece by piece, and you’ll see what I’m getting at.
Scenarios
First, scenarios. Think of these as high-level business stories (in DevOps language), or simply, “What do we want to do with the tech?”
Scenarios may be user facing: e-commerce, apps, and so on. Or they can be internal, linked to product development, sales and operations, or others. The point is not so much about what scenarios look like, but whether you have a list of things you can check against the platform and say, “Will it support that?”
Scenarios can also be tech-operational; for example, involving application rationalization, infrastructure consolidation, replatforming, and so on—but the question remains the same.
Workloads
In any case, scenarios beget workloads, which are the software-based building blocks needed to deliver on them. Data warehousing, virtualized infrastructure, container-based applications, analytics, and (that old chestnut) AI all fall under the banner of workloads.
By thinking about (business) scenarios and mapping to (technical) workloads, you’re reviewing how your nascent technical architecture maps to the needs of your organization. Out of this should emerge some common patterns, hopefully not too many, which we can call blueprints. These can form the basis of the platform’s success.
Blueprints
You certainly don’t want to build everything as a custom architecture, as that brings additional costs and inefficiencies. All we’re doing here is adding a couple of steps to set scope and confirm what can run where. The result—blueprints—can then be specced out in more detail, piloted, costed with confirmed operational overheads, and reviewed for security, sovereignty, compliance, and so on.
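To make the mapping tangible, here is a minimal sketch in Python; the scenario, workload, and blueprint names are invented for illustration, and the only point is that every scenario should resolve to a known blueprint:

```python
# A minimal sketch of the scenario -> workload -> blueprint mapping described above.
# All names are illustrative; substitute your own scenarios, workloads, and blueprints.

scenario_workloads = {
    "online claims portal": ["container-based application", "analytics"],
    "sales reporting": ["data warehousing", "analytics"],
    "legacy app rationalization": ["virtualized infrastructure"],
}

workload_blueprints = {
    "container-based application": "kubernetes-on-cloud blueprint",
    "analytics": "managed-analytics blueprint",
    "data warehousing": "managed-analytics blueprint",
    "virtualized infrastructure": "on-prem-virtualization blueprint",
}

# "Will the platform support that?" -- each scenario should resolve to a known blueprint.
for scenario, workloads in scenario_workloads.items():
    blueprints = {workload_blueprints.get(w, "NO BLUEPRINT") for w in workloads}
    print(f"{scenario}: {sorted(blueprints)}")
```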
Also, and interestingly, very little of this exercise needs deep technical expertise. We’re creating a mapping, not building a new kind of transistor. So, there’s no excuse for keeping this discussion outside the boardroom—if your board is serious about digital transformation, that is.
5. Deliver Based on Practicality
There’s a moment you need to bite the bullet and recognize that you can’t deliver perfection. Of course, you can still take the moonshot, but there’s a strong chance you’ll fly like Buzz Lightyear right before he crashes into the stairwell. You may smother this with a fire blanket of denial, to which I say—even if you’re still set on the summit of Everest, how about you do everything you can to get to base camp first? Several strategies can help you here, though you’ll need to work out your own combination (cf: curling):
- Look for halfway houses to success. What does a partial transformation look like, and how can it succeed? For example, French bank Credit Agricole set up a satellite organization to build an app store; it didn’t try to change the whole bank.
- Build the Pareto platform. This is the 80/20 rule, and it should be your mantra: deliver a tightly scoped infrastructure that covers most of your priority needs, which may be quite boring (like claims processing) but are no less transformative. Everything else is custom.
- Bring in the specialists (but stay in control). You won’t have all the skills in-house, so identify the gaps and bring in people who can fill them. Caveat: you want those skills, so use specialists to guide and support, while developing skills of your own.
Next Steps
Ultimately, the keys to digital transformation are in your hands and in the hands of the people around you. And there’s the crux: while the goal may be digital, the reality and the route are both going to be about people.
We may all want technology to “just work” but that’s like wanting people to “just change” and “just know how to make things different,” which just isn’t going to happen. Recognize this, address it head on, and the keys to digital transformation will be laid at your door.
I’d love to know how well this resonates with your own experiences, so do get in touch.
April 2024
04-17 – Navigating the SEC Cybersecurity Ruling
Navigating the SEC Cybersecurity Ruling
The latest SEC ruling on cybersecurity will almost certainly have an impact on risk management and post-incident disclosure, and CISOs will need to map this to their specific environments and tooling. I asked our cybersecurity analysts Andrew Green, Chris Ray, and Paul Stringfellow what they thought, and I amalgamated their perspectives.
What Is the Ruling?
The new SEC ruling requires disclosure following an incident at a publicly traded company. This should come as no surprise to any organization already dealing with data protection legislation, such as the GDPR in Europe or California’s CCPA. The final rule has two requirements for public companies:
- Disclosure of material cybersecurity incidents within four business days after the company determines the incident is material.
- Disclosure annually of information about the company’s cybersecurity risk management, strategy, and governance.
The first requirement is similar to what GDPR enforces: breaches must be reported within a set time (72 hours for GDPR, four business days for the SEC). To do this, you need to know when the breach happened, what was contained in the breach, who it impacted, and so on. And keep in mind that the four business days begin not when a breach is first discovered, but when it is determined to be material.
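As a rough illustration of the disclosure clock, here is a minimal sketch that counts the four business days; it skips weekends only and ignores public holidays, so treat it as an approximation rather than a legal interpretation:

```python
# A minimal sketch of the four-business-day disclosure clock. It counts weekdays only
# and ignores public holidays -- a simplifying assumption, not a legal interpretation.
from datetime import date, timedelta

def disclosure_deadline(materiality_determined: date, business_days: int = 4) -> date:
    day = materiality_determined
    remaining = business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return day

# Example: materiality determined on a Thursday -> deadline the following Wednesday.
print(disclosure_deadline(date(2024, 4, 18)))  # 2024-04-24
```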
The second part of the SEC ruling relates to annual reporting of what risks a company has and how they are being addressed. This doesn’t create impossible hurdles—for example, it’s not a requirement to have a security expert on the board. However, it does confirm a level of expectation: companies need to be able to show how expertise has come into play and is acted on at board level.
What are Material Cybersecurity Incidents?
Given the reference to “material” incidents, the SEC ruling includes a discussion of what materiality means: simply put, if your business feels it’s important enough to take action on, then it’s important enough to disclose. This does raise the question of how the ruling might be gamed, but we don’t advise downplaying a breach just to avoid potential disclosure.
In terms of applicable security topics to help companies implement a solution to handle the ruling, this aligns with our research on proactive detection and response (XDR and NDR), as well as event collation and insights (SIEM) and automated response (SOAR). SIEM vendors, I reckon, would need very little effort to deliver on this, as they already focus on compliance with many standards. SIEM also links to operational areas, such as incident management.
What Needs to be Disclosed in the Annual Reporting?
The ruling doesn’t constrain how security is done, but it does need the mechanisms used to be reported. The final rule focuses on disclosing management’s role in assessing and managing material risks from cybersecurity threats, for example.
In research terms, this relates to topics such as data security posture management (DSPM), as well as other posture management areas. It also touches on governance, compliance, and risk management, which is hardly surprising. Yes, indeed, it would be beneficial to all if overlaps were reduced between top-down governance approaches and middle-out security tooling.
What Are the Real-World Impacts?
Overall, the SEC ruling looks to balance security feasibility with action—the goal is to reduce risk any which way, and if tools can replace skills (or vice versa), the SEC will not mind. While the ruling overlaps with GDPR in terms of requirements, it is aimed at different audiences. The SEC ruling’s aim is to enable a consistent view for investors, likely so they can feed into their own investment risk planning. It therefore feels less bureaucratic than GDPR and potentially easier to follow and enforce.
Not that public organizations have any choice, in either case. Given how hard the SEC came down following the SolarWinds attack, these aren’t regulations any CISO will want to ignore.
August 2024
08-16 – Operations Leadership Lessons from the Crowdstrike Incident
Operations Leadership Lessons from the Crowdstrike Incident
Much has been written about the whys and wherefores of the recent Crowdstrike incident. Without dwelling too much on the past (you can get the background here), the question is, what can we do to plan for the future? We asked our expert analysts what concrete steps organizations can take.
Don’t Trust Your Vendors
Does that sound harsh? It should. We apply zero trust to our networks, infrastructure, and access management, but then we allow ourselves to assume software and service providers are 100% watertight. Security is about the permeability of the overall attack surface—just as water will find a way through, so will risk.
Crowdstrike was previously the darling of the industry, and its brand carried considerable weight. Organizations tend to think, “It’s a security vendor, so we can trust it.” But you know what they say about assumptions…. No vendor, especially a security vendor, should be given special treatment.
Incidentally, for Crowdstrike to declare that this event wasn’t a security incident completely missed the point. Whatever the cause, the impact was denial of service and both business and reputational damage.
Treat Every Update as Suspicious
Security patches aren’t always treated the same as other patches. They may be triggered or requested by security teams rather than ops, and they may be (perceived as) more urgent. However, there’s no such thing as a minor update in security or operations, as anyone who has experienced a bad patch will know.
Every update should be vetted, tested, and rolled out in a way that manages the risk. Best practice may be to test on a smaller sample of machines first—for example, in a sandbox or via a limited install—then do the wider rollout. If you can’t do that for whatever reason (perhaps contractual), consider yourself working at risk until sufficient time has passed.
For example, the Crowdstrike patch was an obligatory install; however, some organizations we speak to managed to block the update using firewall settings. One organization used its SSE platform to block the update servers once it identified the bad patch; thanks to good alerting, it took the SecOps team only about 30 minutes to recognize the issue and deploy the block.
Another organization throttled the Crowdstrike updates to 100Mb per minute – it was hit on only six hosts and 25 endpoints before it set the limit to zero.
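To illustrate the staged-rollout idea above, here is a minimal sketch of ring-based deployment with a health gate between rings; the ring sizes, soak periods, and health check are placeholders to wire up to your own monitoring and deployment tooling:

```python
# A minimal sketch of a ring-based patch rollout: each ring must pass a health check
# and a soak period before the next ring receives the update. Values are illustrative.

ROLLOUT_RINGS = [
    {"name": "canary", "hosts": 10,   "soak_hours": 24},
    {"name": "early",  "hosts": 200,  "soak_hours": 24},
    {"name": "broad",  "hosts": 5000, "soak_hours": 48},
]

def healthy(ring_name: str) -> bool:
    """Placeholder gate: wire this to your monitoring (crash rates, boot loops, alerts)."""
    return True

def roll_out(patch_id: str) -> None:
    for ring in ROLLOUT_RINGS:
        print(f"Deploying {patch_id} to {ring['name']} ({ring['hosts']} hosts)")
        if not healthy(ring["name"]):
            print(f"Halting rollout of {patch_id}: {ring['name']} failed health checks")
            return
        print(f"Soaking for {ring['soak_hours']}h before the next ring")

roll_out("example-patch-001")
```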
Minimize Single Points of Failure
Back in the day, resilience came through duplication of specific systems—the so-called “2N+1,” where N is the number of components needed to operate. With the advent of cloud, however, we’ve moved to the idea that all resources are ephemeral, so we don’t have to worry about that sort of thing. Not true.
Ask the question: “What happens if it fails?”—where “it” can mean any element of the IT architecture. For example, if you choose to work with a single cloud provider, look at specific dependencies—is the exposure a single virtual machine or a whole region? (The concurrent Microsoft Azure issue, for instance, was confined to storage in the Central US region.) For the record, “it” can and should also refer to the detection and response agent itself.
In all cases, do you have another place to fail over to should “it” no longer function? Comprehensive duplication is (largely) impractical, even across multi-cloud environments. A better approach is to define which systems and services are business critical based on the cost of an outage, then spend money on mitigating those risks. See it as insurance: a necessary spend.
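One way to make that prioritization concrete is to rank systems by expected annual loss; the figures below are invented purely for illustration:

```python
# A minimal sketch of ranking systems by expected annual loss, so mitigation spend
# goes where an outage would hurt most. All figures are invented for illustration.

systems = [
    {"name": "payments API",  "outage_cost_per_day": 500_000, "expected_outage_days": 0.5},
    {"name": "internal wiki", "outage_cost_per_day": 2_000,   "expected_outage_days": 3.0},
    {"name": "claims portal", "outage_cost_per_day": 150_000, "expected_outage_days": 1.0},
]

for s in systems:
    s["expected_annual_loss"] = s["outage_cost_per_day"] * s["expected_outage_days"]

for s in sorted(systems, key=lambda s: s["expected_annual_loss"], reverse=True):
    print(f"{s['name']:14} expected annual loss: {s['expected_annual_loss']:>10,.0f}")
```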
Treat Backups as Critical Infrastructure
Each layer of backup and recovery infrastructure counts as a critical business function and should be hardened as much as possible. Unless data exists in three places, it’s unprotected: with only one backup, you can’t tell which copy is correct, and because failures often occur between the host and the online backup, you also need an offline backup.
The Crowdstrike incident cast a light on enterprises that lacked a baseline of failover and recovery capability for critical server-based systems. In addition, you need to have confidence that the environment you are spinning up is “clean” and resilient in its own right.
In this incident, a common issue was that BitLocker encryption keys were stored in a database on a server that was “protected” by Crowdstrike. To mitigate this, consider using a completely different set of security tools for backup and recovery to avoid similar attack vectors.
Plan, Test, and Revise Failure Processes
Disaster recovery (and this was a disaster!) is not a one-shot operation. It may feel burdensome to constantly think about what could go wrong, so don’t—but do worry quarterly. Conduct a thorough assessment of the points of weakness in your digital infrastructure and operations, and look to mitigate any risks.
As per one discussion, all risk is business risk, and the board is in place as the ultimate arbiter of risk management. It is everyone’s job to communicate risks and their business ramifications—in financial terms—to the board. If the board chooses to ignore these, then it has made a business decision like any other.
The risk areas highlighted in this case are risks associated with bad patches, the wrong kinds of automation, too much vendor trust, lack of resilience in secrets management (e.g., BitLocker keys), and failure to test recovery plans for both servers and edge devices.
Look to Resilient Automation
The Crowdstrike situation illustrated a dilemma: we can’t fully trust automated processes, yet automation is the only way to deal with technology complexity. The lack of an automated fix was a major element of the incident, as it required companies to “hand touch” each device, globally.
The answer is to insert humans and other technologies into processes at the right points. Crowdstrike has already acknowledged the inadequacy of its quality testing processes; this was not a complex patch, and it would likely have been found to be buggy had it been tested properly. Similarly, all organizations need to have testing processes up to scratch.
Emerging technologies like AI and machine learning could help predict and prevent similar issues by identifying potential vulnerabilities before they become problems. They can also be used to create test data, harnesses, scripts, and so on, to maximize test coverage. However, if left to run without scrutiny, they could also become part of the problem.
Revise Vendor Due Diligence
This incident has illustrated the need to review and “test” vendor relationships. Not just in terms of services provided but also contractual arrangements (and redress clauses to enable you to seek damages) for unexpected incidents and, indeed, how vendors respond. Perhaps Crowdstrike will be remembered more for how the company, and CEO George Kurtz, responded than for the issues caused.
No doubt lessons will continue to be learned. Perhaps we should have independent bodies audit and certify the practices of technology companies. Perhaps it should be mandatory for service providers and software vendors to make it easier to switch or duplicate functionality, rather than the walled garden approaches that are prevalent today.
Overall, though, the old adage applies: “Fool me once, shame on you; fool me twice, shame on me.” We know for a fact that technology is fallible, yet we hope with every new wave that it has become in some way immune to its own risks and the entropy of the universe. With technological nirvana postponed indefinitely, we must take the consequences on ourselves.
Contributors: Chris Ray, Paul Stringfellow, Jon Collins, Andrew Green, Chet Conforte, Darrel Kent, Howard Holton
September 2024
09-06 – 5 Questions for Carsten Brinkschulte, CEO Dryad: Silvanet, early warning for forest fires
5 Questions for Carsten Brinkschulte, CEO Dryad: Silvanet, early warning for forest fires
I spoke recently with Carsten Brinkschulte, co-founder and CEO of Dryad. Here is some of our conversation on Silvanet and how it deals with the ever-growing global concern of forest fires.
Carsten, tell me a bit about yourself, Dryad, and your product, Silvanet.
I’ve been in telecoms for 25 years. I’ve had three startups and three exits in the space, in 4G network infrastructure, mobile email, instant messaging services, and device management. I started Dryad in 2020 with five co-founders. Dryad is what you’d call an “impact for profit” company. The mission is to be green, not just as a PR exercise. We want a positive environmental impact, but also a profit—then we can have more impact.
We introduced Silvanet in 2023 to focus on the ultra-early detection of wildfires because they have such a devastating environmental impact, particularly on global warming. Between six and eight billion tons of CO2 are emitted in wildfires across the world each year, which is 20% of global CO2 emissions.
Our mission is to reduce human induced wildfires. Arson, reckless behavior, accidents, and technical faults account for 80% of fires. We want to prevent biodiversity loss and prevent CO2 emissions, but also address economic loss because fires cause huge amounts of damage. The low end of the figures is about $150 billion, but that figure can go up to $800 billion a year, depending on how you look at the statistics.
What is your solution?
Silvanet is an end-to-end solution—sensors, network infrastructure, and a cloud platform. We’ve developed a solar-powered gas sensor that we embed in the forest: you can hang it on a tree. It is like an electronic nose that can smell the fire. You don’t have to have an open flame: someone can throw a cigarette and, depending on wind and other parameters, a nearby sensor should be able to detect it within 30-60 minutes.
We’re running embedded AI on the edge in the sensor, to distinguish between the smells that the sensor is exposed to. When the sensor detects a fire, it will send an alert.
Sensors are solar powered. The solar panels are quite small but big enough to power the electronics via a supercapacitor for energy storage. It doesn’t have as much energy density as a battery, but it doesn’t have the downsides either. Lithium-ion would be a silly idea because it can self-ignite. We didn’t want to bring a fire starter to the forest.
Obviously, you don’t get much direct sunlight under the trees, but the supercapacitors work well in low temperatures and have no limitations with regards to recharge cycles. The whole setup is highly efficient. We take care to not use excess energy.
Next, since we are in the middle of a forest, we typically don’t have 4G or other connectivity, so Silvanet works as an IoT mesh network. We’re using LoRaWAN for the communications, which is like Wi-Fi but lower power and longer range—it can communicate over a few kilometers. We’ve added the mesh topology because LoRaWAN doesn’t have mesh. Nobody else has done this as far as we are aware.
The mesh enables us to cover large areas without any power nearby! Sensors communicate from deep in the forest, over the mesh to a border gateway. Then a cloud platform captures the data, analyzes it further, and sends out alerts to firefighters.
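For readers who want the pattern in code, the following is a generic sketch of the probability-threshold alerting Carsten describes, not Dryad’s actual firmware; the classifier and alert transport are placeholders:

```python
# Not Dryad's firmware -- a generic sketch of threshold-based alerting, where an edge
# classifier turns gas readings into a fire probability and raises an alert once a
# threshold is crossed. classify() and send_alert() are placeholders.

FIRE_PROBABILITY_THRESHOLD = 0.7

def classify(gas_reading: dict) -> float:
    """Placeholder for the on-sensor model; returns the probability the smell is smoke."""
    return gas_reading.get("smoke_index", 0.0)

def send_alert(probability: float) -> None:
    """Placeholder for transmitting the alert over the mesh to the border gateway."""
    print(f"ALERT: possible fire, probability {probability:.0%}")

for reading in [{"smoke_index": 0.3}, {"smoke_index": 0.5}, {"smoke_index": 0.8}]:
    probability = classify(reading)
    if probability >= FIRE_PROBABILITY_THRESHOLD:
        send_alert(probability)
```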
What does deployment look like?
Deployment density depends on the customer. You typically have irregular deployments where you focus on high-risk, high-value areas. In remote locations, we put fewer sensors, but in areas such as along roads and highways, walking paths, power lines, and train lines, where most fires start, we put many more.
Humans don’t start fires in the middle of the forest. They’ll be along hiking paths where people throw a cigarette, or a campfire grows out of control or is not properly extinguished. For the rest, you could have a lightning-induced fire, or a power line where a tree falls onto it, or a train sparks, causing a grass fire that turns into a bush fire and then a wildfire.
You end up with variable density. You need one sensor per hectare (roughly two and a half acres) for a fast detection time, then one sensor per five hectares overall.
Other solutions include optical satellite systems, which look down from space to detect fires with infrared cameras, or cameras on the ground that can see smoke plumes rising above the trees. All these systems make sense. Satellites are invaluable for seeing where big fires are heading, but they’re late in the game when it comes to detection. Cameras are good as well because they are closer to the action.
The fastest is arguably the electronic sensors, but they can’t be everywhere. So, ideally you would deploy all three systems. Cameras have a greater overview, and satellites have the biggest picture. You can focus sensor systems on areas of high risk, high value—like in the interface, where you have got people causing fires but also are affected by fires.
Do you have an example?
We have a pilot deployment in Lebanon. The deployment was high density because it’s what’s called a wild-urban interface—there are people living in villages, some farming activity, and forests. It’s of the highest risk and highest value because if there is a fire, there’s a good chance that it spreads and becomes a conflagration—then you have a catastrophe.
Within the pilot, we detected a small fire within about 30 minutes. Initially, the AI in the sensor calculated from the gas scans a 30% probability of it being a fire. The wind may have changed, as the probability went down; then, about 30 minutes later, it sensed more smoke and “decided” it was really a fire.
How’s business looking?
We try to keep pricing as low as possible—despite being manufactured in Germany, we’re less than €100 a sensor. We have a service fee for operating the cloud, charged on an annual basis, but that’s also low cost.
Last year, we sold 20,000 sensors worldwide. We now have 50 installations: in southern Europe (Greece, Spain, and Portugal), in the US in California, in Canada, in Chile, and as far away as South Korea. We have a deployment in the UK with the National Trust. We also cover three or four forests in Germany, in Brandenburg, which is very fire prone and dry as a tinderbox.
This year, we’re expecting more than 100,000 sensors to be shipped. We’re ramping up manufacturing to allow for that volume. We’re properly funded with venture capital—we just raised another 5.6 million in the middle of March to fuel the growth we’re seeing.
The vision is to go beyond fire: once a network is installed in the forest, you can do much more. We’re starting to work on additional sensors, like a fuel moisture sensor that can measure fire risk by measuring the moisture in the fuel on the ground, a dendrometer that measures tree growth, and a chainsaw detection device to detect illegal logging.
November 2024
11-07 – DevOps, LLMs, and the Software Development Singularity
DevOps, LLMs, and the Software Development Singularity
A Brief History of DevOps
To understand the future of DevOps, it’s worth understanding its past—which I can recall with a level of experience. In the late ’90s, I was a DSDM (Dynamic Systems Development Method) trainer. DSDM was a precursor to agile, a response to the slow, rigid structures of waterfall methodologies. With waterfall, the process was painstakingly slow: requirements took months, design took weeks, coding seemed endless, and then came testing, validation, and user acceptance—all highly formalized.
While such structure was seen as necessary to avoid mistakes, by the time development was halfway done, the world had often moved on, and requirements had changed. I remember when we’d built bespoke systems, only for a new product to launch with graphics libraries that made our custom work obsolete. A graphics tool called “ILOG,” for instance, was bought by IBM and replaced an entire development need. This exemplified the need for a faster, more adaptive approach.
New methodologies emerged to break the slow pace. In the early ’90s, rapid application development and the spiral methodology—where you’d build and refine repeated prototypes—became popular. These approaches eventually led to methodologies like DSDM, built around principles like time-boxing and cross-functional teams, with an unspoken “principle” of camaraderie—hard work balanced with hard play.
Others were developing similar approaches in different organizations, such as the Select Perspective developed by my old company, Select Software Tools (notable for its use of the Unified Modelling Language and integration of business process modelling). All of these efforts paved the way for concepts that eventually inspired Gene Kim et al’s The Phoenix Project, which paid homage to Eli Goldratt’s The Goal. It tackled efficiency and the need to keep pace with customer needs before they evolved past the original specifications.
In parallel, object-oriented languages were added to the mix, helping by building applications around entities that stayed relatively stable even if requirements shifted (hat tip to James Rumbaugh). So, in an insurance application, you’d have objects like policies, claims, and customers. Even as features evolved, the core structure of the application stayed intact, speeding things up without needing to rebuild from scratch.
Meanwhile, along came Kent Beck and extreme programming (XP), shifting focus squarely to the programmer, placing developers at the heart of development. XP promoted anti-methodologies, urging developers to throw out burdensome, restrictive approaches and instead focus on user-driven design, collaborative programming, and quick iterations. This fast-and-loose style had a maverick, frontier spirit to it. I remember meeting Kent for lunch once—great guy.
The term “DevOps” entered the software world in the mid-2000s, just as new ideas like service-oriented architectures (SOA) were taking shape. Development had evolved from object-oriented to component-based, then to SOA, which aligned with the growing dominance of the internet and the rise of web services. Accessing parts of applications via web protocols brought about RESTful architectures.
The irony is that as agile matured further, formality snuck back in with methodologies like the Scaled Agile Framework (SAFe) formalizing agile processes. The goal remained to build quickly but within structured, governed processes, a balancing act between speed and stability that has defined much of software’s recent history.
The Transformative Effect of Cloud
Then, of course, came the cloud, which transformed everything again. Computers, at their core, are entirely virtual environments. They’re built on semiconductors, dealing in zeros and ones—transistors that can be on or off, creating logic gates that, with the addition of a clock, allow for logic-driven processing. From basic input-output systems (BIOS) all the way up to user interfaces, everything in computing is essentially imagined.
It’s all a simulation of reality, giving us something to click on—like a mobile phone, for instance. These aren’t real buttons, just images on a screen. When we press them, it sends a signal, and the phone’s computer, through layers of silicon and transistors, interprets it. Everything we see and interact with is virtual, and it has been for a long time.
Back in the late ’90s and early 2000s, general-use computers advanced from running a single workload on each machine to managing multiple “workloads” at once. Mainframes could do this decades earlier—you could allocate a slice of the system’s architecture, create a “virtual machine” on that slice, and install an operating system to run as if it were a standalone computer.
Meanwhile, other types of computers also emerged—like the minicomputers from manufacturers such as Tandem and Sperry Univac. Most have since faded away or been absorbed by companies like IBM (which still operates mainframes today). Fast forward about 25 years, and we saw Intel-based or x86 architectures first become the “industry standard” and then develop to the point where affordable machines could handle similarly virtualized setups.
This advancement sparked the rise of companies like VMware, which provided a way to manage multiple virtual machines on a single hardware setup. It created a layer between the virtual machine and the physical hardware—though, of course, everything above the transistor level is still virtual. Suddenly, we could run two, four, eight, 16, or more virtual machines on a single server.
The virtual machine model eventually laid the groundwork for the cloud. With cloud computing, providers could easily spin up virtual machines to meet others’ needs in robust, built-for-purpose data centers.
However, there was a downside: applications now had to run on top of a full operating system and hypervisor layer for each virtual machine, which added significant overhead. Having five virtual machines meant running five operating systems—essentially a waste of processing power.
The Rise of Microservices Architectures
Then, around the mid-2010s, containers emerged. Docker, in particular, introduced a way to run application components within lightweight containers, communicating with each other through networking protocols. Containers added efficiency and flexibility. Docker’s “Docker Swarm” and, later, Google’s Kubernetes helped orchestrate and distribute these containerized applications, making deployment easier and leading to today’s microservices architectures. Virtual machines still play a role today, but container-based architectures have become more prominent. A quick nod, too, to other models such as serverless, in which you can execute code at scale without worrying about the underlying infrastructure—it’s like a giant interpreter in the cloud.
All such innovations gave rise to terms like “cloud-native,” referring to applications built specifically for the cloud. These are often microservices-based, using containers and developed with fast, agile methods. But despite these advancements, older systems still exist: mainframe applications, monolithic systems running directly on hardware, and virtualized environments. Not every use case is suited to agile methodologies; certain systems, like medical devices, require careful, precise development, not quick fixes. Google’s term, “continuous beta,” would be the last thing you’d want in a critical health system.
And meanwhile, we aren’t necessarily that good at the constant dynamism of agile methodologies. Constant change can be exhausting, like a “supermarket sweep” every day, and shifting priorities repeatedly is hard for people. That’s where I talk about the “guru’s dilemma.” Agile experts can guide an organization, but sustaining it is tough. This is where DevOps often falls short in practice. Many organizations adopt it partially or poorly, leaving the same old problems unsolved, with operations still feeling the brunt of last-minute development hand-offs. Ask any tester.
The Software Development Singularity
And that brings us to today, where things get interesting with AI entering the scene. I’m not talking about the total AI takeover, the “singularity” described by Ray Kurzweil and his peers, where we’re just talking to super-intelligent entities. Two decades ago, that was 20 years away, and that’s still the case. I’m talking about the practical use of large language models (LLMs). Application creation is rooted in languages, from natural language used to define requirements and user stories, through the structured language of code, to “everything else” from test scripts to bills of materials; LLMs are a natural fit for software development.
Last week, however, at GitHub Universe in San Francisco, I saw what’s likely the dawn of a “software development singularity”—where, with tools like GitHub Spark, we can type a prompt for a specific application, and it gets built. Currently, GitHub Spark is at an early stage – it can create simpler applications with straightforward prompts. But this will change quickly. First, it will evolve to build more complex applications with better prompts. Many applications have common needs—user login, CRUD operations (Create, Read, Update, Delete), and workflow management. While specific functions may differ, applications often follow predictable patterns. So, the catalog of applications that can be AI-generated will grow, as will their stability and reliability.
That’s the big bang news: it’s clear we’re at a pivotal point in how we view software development. As we know, however, there’s more to developing software than writing code. LLMs are being applied in support of activities across the development lifecycle, from requirements gathering to software delivery:
- On the requirements front, LLMs can help generate user stories and identify key application needs, sparking conversations with end-users or stakeholders. Even if high-level application goals are the same, each organization has unique priorities, so AI helps tailor these requirements efficiently. This means fewer revisions, whilst supporting a more collaborative development approach.
- AI also enables teams to move seamlessly from requirements to prototypes. With tools such as GitHub Spark, developers can easily create wireframes or initial versions, getting feedback sooner and helping ensure the final product aligns with user needs.
- LLMs also support testing and code analysis—a labor-intensive and burdensome part of software development. For instance, AI can suggest comprehensive test coverage, create test environments, handle much of the test creation, generate relevant test data, and even help decide when enough testing is sufficient, reducing the costs of test execution (a minimal sketch of this appears after this list).
- LLMs and machine learning have also started supporting fault analysis and security analytics, helping developers code more securely by design. AI can recommend architectures, models and libraries that offer lower risk, or fit with compliance requirements from the outset.
- LLMs are reshaping how we approach software documentation, which is often a time-consuming and dull part of the process. By generating accurate documentation from a codebase, LLMs can reduce the manual burden whilst ensuring that information is up-to-date and accessible. They can summarize what the code does, highlighting unclear areas that might need a closer look.
- One of AI’s most transformative impacts lies in its ability to understand, document, and migrate code. LLMs can analyze codebases, from COBOL on mainframes to database stored procedures, helping organizations understand what’s vital, versus what’s outdated or redundant. In line with Alan Turing’s foundational principles, AI can convert code from one language to another by interpreting rules and logic.
- For project leaders, AI-based tools can analyze developer activity and provide readable recommendations and insights to increase productivity across the team.
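As promised above, here is a minimal sketch of the testing point: prompting an LLM to draft unit tests for an existing function. The complete() helper is hypothetical, standing in for whichever provider’s SDK you use, and any generated tests still need human review before they reach the suite:

```python
# A minimal sketch of using an LLM to draft unit tests for an existing function.
# complete() is a hypothetical wrapper around whichever LLM API you use; swap in
# your provider's client. Generated tests still need human review before merging.

SOURCE = '''
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
'''

PROMPT = (
    "Write pytest unit tests for the following function, covering normal cases, "
    "boundary values, and invalid input:\n" + SOURCE
)

def complete(prompt: str) -> str:
    """Hypothetical LLM call -- replace with your provider's SDK."""
    return "# (model output would appear here)"

if __name__ == "__main__":
    print(complete(PROMPT))  # review and edit before adding to your test suite
```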
AI is becoming more than a helper—it’s enabling faster, more iterative development cycles. With LLMs able to shoulder many responsibilities, development teams can allocate resources more effectively, moving from monotonous tasks to more strategic areas of development.
AI as a Development Accelerator
As this (incomplete) list suggests, there’s still plenty to be done beyond code creation – with activities supported and augmented by LLMs. These can automate repetitive tasks and enable efficiency in ways we haven’t seen before. However, complexities in software architecture, integration, and compliance still require human oversight and problem-solving.
Not least because AI-generated code and recommendations aren’t without limitations. For example, while experimenting with LLM-generated code, I found ChatGPT recommending a library with function calls that didn’t exist. At least, when I told it about its hallucination, it apologized! Of course, this will improve, but human expertise will be essential to ensure outputs align with intended functionality and quality standards.
Other challenges stem from the very ease of creation. Each piece of new code will require configuration management, security management, quality management and so on. Just as with virtual machines before, we have a very real risk of auto-created application sprawl. The biggest obstacles in development—integrating complex systems, or minimizing scope creep—are challenges that AI is not yet fully equipped to solve.
Nonetheless, the gamut of LLMs stands to augment how development teams and their ultimate customers – the end-users – interact. It raises the question, “Whence DevOps?”, keeping in mind that agile methodologies emerged because their waterfall-based forebears were too slow to keep up. I believe such methodologies will evolve, augmented by AI-driven tools that guide workflows without needing extensive project management overhead.
This shift enables quicker, more structured delivery of user-aligned products, maintaining secure and compliant standards without compromising speed or quality. We can expect a return to waterfall-based approaches, albeit where the entire cycle takes a matter of weeks or even days.
In this new landscape, developers evolve from purist coders to facilitators, orchestrating activities from concept to delivery. Within this, AI might speed up processes and reduce risks, but developers will still face many engineering challenges—governance, system integration, and maintenance of legacy systems, to name a few. Technical expertise will remain essential for bridging gaps AI cannot yet cover, such as interfacing with legacy code, or handling nuanced, highly specialized scenarios.
LLMs are far from replacing developers. In fact, given the growing skills shortage in development, they quickly become a necessary tool, enabling more junior staff to tackle more complex problems with reduced risk. In this changing world, building an application is the one thing keeping us from building the next one. LLMs create an opportunity to accelerate not just pipeline activity, but entire software lifecycles. We might, and in my opinion should, see a shift from pull requests to story points as a measure of success.
The Net-Net for Developers and Organizations
For development teams, the best way to prepare is to start using LLMs—experiment, build sample applications, and explore beyond the immediate scope of coding. Software development is about more than writing loops; it’s about problem-solving, architecting solutions, and understanding user needs.
Ultimately, by focusing on what matters, developers can rapidly iterate on version updates or build new solutions to tackle the endless demand for software. So, if you’re a developer, embrace LLMs with a broad perspective. LLMs can free you from the drudge, but the short-term challenge will be more about how to integrate them into your workflows.
Or, you can stay old school and stick with a world of hard coding and command lines. There will be a place for that for a few years yet. Just don’t think you are doing yourself or your organization any favors – application creation has always been about using software-based tools to get things done, and LLMs are no exception.
Rest assured, we will always need engineers and problem solvers, even if the problems change. LLMs will continue to evolve – my money is on how multiple LLM-based agents can be put in sequence to check each other’s work, test the outputs, or create contention by offering alternative approaches to address a scenario.
The future of software development promises to be faster-paced, more collaborative, and more innovative than ever. It will be fascinating, and our organizations will need help making the most of it all.
11-22 – GigaOm Research Bulletin #010
GigaOm Research Bulletin #010
Where To Meet GigaOm Analysts
In the next few months you can expect to see our analysts at AWS re:Invent, Black Hat London, and MWC Barcelona. Do let us know if you want to fix a meeting! To send us your news and updates, please add analystconnect@gigaom.com to your lists, and get in touch with any questions. Thanks!
11-22 – Navigating Technological Sovereignty in the Digital Age
Navigating Technological Sovereignty in the Digital Age
Depending on who you speak to, technological sovereignty is either a hot topic or something that other organizations need to deal with. So, should it matter to you and your organization? Let’s first consider what’s driving it, not least the catalyst that is the US CLOUD Act, which ostensibly gives the US government access to any data managed by a US provider. This spooked EU authorities and nations, as well as others who saw it as a step too far.
Whilst this accelerated activity across Europe, Africa and other continents, moves were already afoot to preserve a level of sovereignty across three axes: data movement, local control, and what is increasingly seen as the big one – a desire for countries to develop and retain skills and innovate, rather than being passive participants in a cloud-based brain drain.
This is impacting not just government departments and their contractors, but also suppliers to in-country companies. A couple of years ago, I spoke to a manufacturing materials organization in France that provided goods to companies in Nigeria. “What’s your biggest headache?” I asked the CIO as a conversation starter. “Sovereignty,” he said. “If I can’t show my clients how I will keep data in-country, I can’t supply my goods.”
Legislative themes like the US CLOUD Act have made cross-border data management tricky. With different countries enforcing different laws, navigating where and how your data is stored can become a significant challenge. If it matters to you, it really matters. In principle, technological sovereignty solves this, but there’s no single, clear definition. It’s a concept that’s easy to understand at a high level, but tricky to pin down.
Technological sovereignty is all about ensuring you have control over your digital assets—your data, infrastructure, and the systems that run your business. But it’s not just about knowing where your data is stored. It’s about making sure that data is handled in a way that aligns with the country’s regulations and your business strategy and values.
For organizations in Europe, the rules and regs are quite specific. The upcoming EU Data Act focuses on data sharing and access across different sectors, whilst the AI Act introduces rules around artificial intelligence systems. Together, these evolving regulations are pushing organizations to rethink their technology architectures and data management strategies.
As ever, this means changing the wheels on a moving train. Hybrid/multi-cloud environments and complex data architectures add layers of complexity, whilst artificial intelligence is transforming how we interact with and manage data. AI is both a sovereignty blessing and a curse: it can enable data to be handled more effectively, but as AI models become more sophisticated, organizations need to be even more careful about how they process data from a compliance perspective.
So, where does this leave organizations that want the flexibility of cloud services but need to maintain control over their data? Organizations have several options:
- Sovereign Hyper-Scalers: Over the next year, cloud giants like AWS and Azure will be rolling out sovereign cloud offerings tailored to the needs of organizations that require stricter data controls.
- Localized Providers: Working with local managed service providers (MSPs) can give organizations more control within their own country or region, helping them keep data close to home.
- On-premise Solutions: This is the go-to option if you want full control. However, on-premise solutions can be costly and come with their own set of complexities. It’s about balancing control with practicality.
The likelihood is that a combination of all three will be required, at least in the short to medium term. Inertia will play its part: given that it’s already a challenge to move existing workloads beyond the lower-hanging fruit into the cloud, sovereignty creates yet another series of reasons to leave them where they are, for better or worse.
There’s a way forward for sovereignty as both a goal and a burden, centered on the word governance. Good governance is about setting clear policies for how your data and systems are managed, who has access, and how you stay compliant with regulations for both your organization and your customers. This is a business-wide responsibility: every level of your organization should be aligned on what sovereignty means for your company and how you will enforce it.
This may sound onerous to the point of impossibility, but that is the nature of governance, risk, and compliance (GRC)—the trick is to assess, prioritize, and plan, building sovereignty criteria into the way the business is designed. Want to do business in certain jurisdictions? If so, you need to bake their requirements into your business policies, which can then be rolled out into your application, data, and operational policies.
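To show how a jurisdiction’s requirements might roll down into data policies, here is a minimal sketch of a residency check; the jurisdictions, region names, and workload records are all invented:

```python
# A minimal sketch of checking workloads against per-jurisdiction residency rules.
# Jurisdictions, region names, and workloads are invented for illustration.

ALLOWED_REGIONS = {
    "EU": {"eu-region-1", "eu-region-2"},
    "Nigeria": {"ng-local-1"},
}

workloads = [
    {"name": "customer-db", "jurisdiction": "EU",      "region": "eu-region-2"},
    {"name": "orders-api",  "jurisdiction": "Nigeria", "region": "us-region-1"},
]

for w in workloads:
    allowed = ALLOWED_REGIONS.get(w["jurisdiction"], set())
    status = "OK" if w["region"] in allowed else "VIOLATION"
    print(f"{w['name']:12} {w['jurisdiction']:8} -> {w['region']:12} {status}")
```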
Get this the other way around, and it will always be harder than necessary. However, done right, technological sovereignty can also offer a competitive advantage. Organizations with a handle on their data and systems can offer their customers more security and transparency, building trust. By embedding sovereignty into your digital strategy, you’re not just protecting your organization—you’re positioning yourself as a leader in responsible business, and building a stronger foundation for growth and innovation.
Technological sovereignty should be a strategic priority for any organization that wants to stay ahead in today’s complex digital landscape. It’s not just about choosing the right cloud provider or investing in the latest security tools—it’s about building a long-term, business-driven strategy that ensures you stay in control of your data, wherever in the world it is.
The future of sovereignty is about balance. Balancing cloud and on-premise solutions, innovation and control, and security with flexibility. If you can get that balance right, you’ll be in a strong position to navigate whatever the digital world throws at you next.
11-27 – Making FinOps Matter
Making FinOps Matter
In principle, FinOps – the art and craft of understanding and reducing the costs of cloud (and other) services – should be an easy win. Many organizations are aware they are spending too much on cloud-based workloads; they just don’t know how much. So surely it’s a question of just finding out and sorting it, right? I’m not so sure. At the FinOpsX event held in Barcelona last week, a repeated piece of feedback from end-user organizations was how hard it was to get FinOps initiatives going.
While efforts may be paying off at an infrastructure cost management level, engaging higher up in the organization (or across lines of business) can be a wearying and fruitless task. So, what steps can you take to connect with the people who matter, whose budgets stand to benefit from spending less, or who can reallocate spending to more useful activities?
Here’s my six-point plan, based on a principle I’ve followed through the years – that innovation means change, which needs change management. Feedback welcome, as well as any examples of success you have seen.
- Map Key Stakeholders
Before you do anything else, consider conducting a stakeholder analysis to identify who will benefit from FinOps efforts. Senior finance stakeholders may care about overall efficiency, but it’s crucial to identify specific people and roles that are directly impacted by cloud spend overruns. For example, some in the organization (such as research areas or testing teams) may be resource-constrained and could always use more capacity, whereas others could benefit from budget reallocation onto other tasks. Line of business leaders often need new services, but may struggle with budget approvals.
The most impacted individuals can become your strongest advocates in supporting FinOps initiatives, particularly if you help them achieve their goals. So, identify who interacts with cloud spending and IT budgets and who stands to gain from budget reallocation. Once mapped, you’ll have a clear understanding of who to approach with FinOps proposals.
- Address Complacency with Data
If you encounter resistance, look for ways to illustrate inefficiencies using hard data. Identifying obvious “money pits”—projects or services that consume funds unnecessarily—can reveal wasteful spending, often due to underutilized resources, lack of oversight, or historical best intentions. These may become apparent without needing to seek approval to look for them first, but can be very welcome revelations when they come.
For example, instances where machines or services are left running without purpose, burning through budget for no reason, can be reported to the budget holders. Pointing out such costs can emphasize the urgency and need for FinOps practices, providing a solid case for adopting proactive cost-control measures.
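As a rough illustration, here is a minimal sketch of flagging likely money pits from utilization data; the threshold and records are invented, and in practice you would feed in your cloud provider’s billing and monitoring exports:

```python
# A minimal sketch of flagging likely "money pits": instances whose recent average
# CPU utilization sits below a threshold. Threshold and records are illustrative.

IDLE_CPU_THRESHOLD = 5.0  # percent

instances = [
    {"id": "vm-001", "owner": "test-team", "monthly_cost": 420.0, "avg_cpu_30d": 1.2},
    {"id": "vm-002", "owner": "web-prod",  "monthly_cost": 950.0, "avg_cpu_30d": 61.0},
    {"id": "vm-003", "owner": "research",  "monthly_cost": 310.0, "avg_cpu_30d": 0.4},
]

idle = [i for i in instances if i["avg_cpu_30d"] < IDLE_CPU_THRESHOLD]
waste = sum(i["monthly_cost"] for i in idle)

for i in idle:
    print(f"{i['id']} ({i['owner']}): {i['avg_cpu_30d']}% CPU, ${i['monthly_cost']:.0f}/month")
print(f"Potential monthly saving: ${waste:.0f}")
```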
- Focus Beyond Efficiency to Effectiveness, and More
It’s important to shift FinOps goals from mere cost-saving measures to an effectiveness-driven approach. Efficiency typically emphasizes cutting costs, while effectiveness focuses on improving business-as-usual activity. If you can show how the business stands to gain from FinOps activity (rather than just reducing waste), you can build a far more compelling case.
There’s also value in showcasing “greenfield” opportunities, where FinOps practices unlock potential for growth. Imagine building a funding reserve for innovation, experiments, or new applications and services – an idea that can be applied as part of an overall portfolio management approach to technology spend and reward. With FinOps, you can manage resources effectively while building avenues for longer-term success and organizational resilience.
- Jump Left, Don’t Just Shift Left
Shifting left and focusing on the design and architecture phases of a project is a worthy goal, but perhaps you shouldn’t wait to be invited. Look for opportunities to participate in early discussions about new applications or workloads, not (initially) to have a direct influence, but to listen and learn about what is coming down the pipe, and to start planning for what FinOps activity needs to cover.
By identifying cost-control opportunities in advance, you might be able to propose and implement preemptive measures to prevent expenses from spiraling. Even if you can’t make a direct contribution, you can start to gain visibility into the project roadmap, allowing you to anticipate what’s coming and stay ahead. Plus, you can build relationships and grow your knowledge of stakeholder needs.
- Make the Internal Case for FinOps
Being clear about the value of FinOps is crucial for securing buy-in. Use hard data, like external case studies or specific savings percentages, to illustrate the impact FinOps can have—and present this compellingly. Highlight successful outcomes from similar organizations, together with hard numbers to show that FinOps practices can drive significant cost savings. As with all good marketing, this is a case of “show, don’t tell.”
Develop targeted marketing materials that resonate with the key stakeholders you have mapped, from the executive board down—demonstrating how FinOps benefits not only the organization but also their individual goals. This can create a compelling case for them to become advocates and actively support FinOps efforts.
- Become the FinOps Champion
For FinOps to succeed, it needs a dedicated champion. If no one else is stepping up, perhaps it is you! You may not need to carry the whole weight on your shoulders, but do consider how you can become a driving force behind FinOps in your organization.
Start by creating a vision for FinOps adoption. Consider your organization’s level of FinOps maturity, and propose a game plan with achievable steps that can help the business grow and evolve. Then, share it with your direct leadership to create measurable goals for yourself and the whole organization.
Use the principles here, and speak to others in the FinOps Foundation community to understand how to make a difference. At the very least, you will have created a concrete platform for the future, which will have been a great learning experience. And at the other end of the scale, you may already be in a position to drive significant and tangible value for your business.
December 2024
12-18 – Bridging Wireless and 5G
Bridging Wireless and 5G
Wireless connectivity and 5G are transforming the way we live and work, but what does it take to integrate these technologies? I spoke to Bruno Tomas, CTO of the Wireless Broadband Alliance (WBA), to get his insights on convergence, collaboration, and the road ahead.
Q: Bruno, could you start by sharing a bit about your background and your role at the WBA?
Bruno: Absolutely. I’m an engineer by training, with degrees in electrical and computer engineering, as well as a master’s in telecom systems. I started my career with Portugal Telecom and later worked in Brazil, focusing on network standards. About 12 years ago, I joined the WBA, and my role has been centered on building the standards for seamless interoperability and convergence between Wi-Fi, 3G, LTE, and now 5G. At the WBA, we bring together vendors, operators, and integrators to create technical specifications and guidelines that drive innovation and usability in wireless networks.
Q: What are the key challenges in achieving seamless integration between wireless technologies and 5G?
Bruno: One of the biggest challenges is ensuring that our work translates into real-world use cases—particularly in enterprise and public environments. For example, in manufacturing or warehousing, where metal structures and interference can disrupt connectivity, we need robust solutions for starters. At the WBA, we’ve worked with partners from the vendor, chipset and device communities, as well as integrators, to address these challenges by building field-tested guidelines. On top of that comes innovation. For instance, our OpenRoaming concepts help enable seamless transitions between networks, including IoT, reducing the complexity for IT managers and CIOs.
Q: Could you explain how WBA’s “Tiger Teams” contribute to these solutions?
Bruno: Tiger Teams are specialized working groups within our alliance. They bring together technical experts from companies such as AT&T, Intel, Broadcom, and AirTies to solve specific challenges collaboratively. For instance, in our 5G & Wi-Fi convergence group, members define requirements and scenarios for industries like aerospace or healthcare. By doing this, we ensure that our recommendations are practical and field-ready. This collaborative approach helps drive innovation while addressing real-world challenges.
Q: You mentioned OpenRoaming earlier. How does that help businesses and consumers?
Bruno: OpenRoaming simplifies connectivity by allowing users to seamlessly move between Wi-Fi and cellular networks without needing manual logins or configurations. Imagine a hospital where doctors move between different buildings while using tablets for patient care, supported by an enhanced security layer. With OpenRoaming, they can stay connected without interruptions. Similarly, for enterprises, it minimizes the need for extensive IT support and reduces costs while ensuring high-quality service.
Q: What’s the current state of adoption for technologies like 5G and Wi-Fi 6?
Bruno: Adoption is growing rapidly, but it’s uneven across regions. Wi-Fi 6 has been a game-changer, offering better modulation and spectrum management, which makes it ideal for high-density environments like factories or stadiums. On the 5G side, private networks have been announced, especially in industries like manufacturing, but the integration with existing systems remains a hurdle. In Europe, regulatory and infrastructural challenges slow things down, while the U.S. and APAC regions are moving faster.
Q: What role do you see AI playing in wireless and 5G convergence?
Bruno: AI is critical for optimizing network performance and making real-time decisions. At the WBA, we’ve launched initiatives to incorporate AI into wireless networking, helping systems predict and adapt to user needs. For instance, AI can guide network steering—deciding whether a device should stay on Wi-Fi or switch to 5G based on signal quality and usage patterns. This kind of automation will be essential as networks become more complex.
Q: Looking ahead, what excites you most about the future of wireless and 5G?
Bruno: The potential for convergence to enable new use cases is incredibly exciting. Whether it’s smart cities, advanced manufacturing, or immersive experiences with AR and VR, the opportunities are limitless. Wi-Fi 7 will bring even greater capacity and coverage, making it possible to deliver gigabit speeds in dense environments like stadiums or urban centers. Meanwhile, we are starting to look into 6G. One trend is clear: Wi-Fi should be integrated within a 6G framework, enabling densification. At the WBA, we’re committed to ensuring these advancements are accessible, interoperable, and sustainable.
Thank you, Bruno!
N.B. The WBA Industry Report 2025 has now been released and is available for download. Please click here for further information.
2025
Posts from 2025.
January 2025
01-09 – Making Sense of Cybersecurity – Part 1: Seeing Through Complexity
Making Sense of Cybersecurity – Part 1: Seeing Through Complexity
At the Black Hat Europe conference in December, I sat down with one of our senior security analysts, Paul Stringfellow. In this first part of our conversation we discuss the complexity of navigating cybersecurity tools, and defining relevant metrics to measure ROI and risk.
Jon: Paul, how does an end-user organization make sense of everything going on? We’re here at Black Hat, and there’s a wealth of different technologies, options, topics, and categories. In our research, there are 30-50 different security topics: posture management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. However, from an end-user organization perspective, they don’t want to think about 40-50 different things. They want to think about 10, 5, or maybe even 3. Your role is to deploy these technologies. How do they want to think about it, and how do you help them translate the complexity we see here into the simplicity they’re looking for?
Paul: I attend events like this because the challenge is so complex and rapidly evolving. I don’t think you can be a modern CIO or security leader without spending time with your vendors and the broader industry. Not necessarily at Black Hat Europe, but you need to engage with your vendors to do your job.
Going back to your point about 40 or 50 vendors, you’re right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which research you refer to. So, how do you keep up with that? When I come to events like this, I like to do two things—and I’ve added a third since I started working with GigaOm. One is to meet with vendors, because people have asked me to. Two, go to some presentations. Three is to walk around the Expo floor talking to vendors, particularly ones I’ve never met, to see what they do.
I sat in a session yesterday whose title caught my attention: “How to identify the cybersecurity metrics that are going to deliver value to you.” It resonated from an analyst’s point of view, because part of what we do at GigaOm is create metrics to measure the efficacy of a solution in a given topic. But if you’re deploying technology as part of SecOps or IT operations, you’re gathering a lot of metrics to try and make decisions. One of the things they talked about in the session was that, because we have so many tools, we create so many metrics that there is a great deal of noise. How do you start to find the value?
The long answer to your question is that they suggested something I thought was a really smart approach: step back and think as an organization about what metrics matter. What do you need to know as a business? Doing that allows you to reduce the noise and also potentially reduce the number of tools you’re using to deliver those metrics. If you decide a certain metric no longer has value, why keep the tool that provides it? If it doesn’t do anything other than give you that metric, take it out. I thought that was a really interesting approach. It’s almost like, “We’ve done all this stuff. Now, let’s think about what actually still matters.”
This is an evolving space, and how we deal with it must evolve, too. You can’t just assume that because you bought something five years ago, it still has value. You probably have three other tools that do the same thing by now. How we approach the threat has changed, and how we approach security has changed. We need to go back to some of these tools and ask, “Do we really need this anymore?”
Jon: We measure our success with this, and, in turn, we’re going to change.
Paul: Yes, and I think that’s hugely important. I was talking to someone recently about the importance of automation. If we’re going to invest in automation, are we better now than we were 12 months ago after implementing it? We’ve spent money on automation tools, and none of them come for free. We’ve been sold on the idea that these tools will solve our problems. One thing I do in my CTO role, outside of my work with GigaOm, is to take vendors’ dreams and visions and turn them into reality for what customers are asking for.
Vendors have aspirations that their products will change the world for you, but the reality is what the customer needs at the other end. It’s that kind of consolidation and understanding—being able to measure what happened before we implemented something and what happened after. Can we show improvements, and has that investment had real value?
Jon: Ultimately, here’s my hypothesis: Risk is the only measure that matters. You can break that down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise data and, therefore, damage your business? Or will you expose data and upset your customers, which could hit you like a ton of bricks? But then there’s the other side—are you spending way more money than you need to in order to mitigate risks?
So, you get into cost, efficiency, and so on, but is this how organizations are thinking about it? Because that’s my old-school way of viewing it. Maybe it’s moved on.
Paul: I think you’re on the right track. As an industry, we live in a little echo chamber. So when I say “the industry,” I mean the little bit I see, which is just a small part of the whole industry. But within that part, I think we are seeing a shift. In customer conversations, there’s a lot more talk about risk. They’re starting to understand the balance between spending and risk, trying to figure out how much risk they’re comfortable with. You’re never going to eliminate all risk. No matter how many security tools you implement, there’s always the risk of someone doing something stupid that exposes the business to vulnerabilities. And that’s before we even get into AI agents trying to befriend other AI agents to do malicious things—that’s a whole different conversation.
Jon: Like social engineering?
Paul: Yeah, very much so. That’s a different show altogether. But, understanding risk is becoming more common. The people I speak to are starting to realize it’s about risk management. You can’t remove all the security risks, and you can’t deal with every incident. You need to focus on identifying where the real risks lie for your business. For example, one criticism of CVE scores is that people look at a CVE with a 9.8 score and assume it’s a massive risk, but there’s no context around it. They don’t consider whether the CVE has been seen in the wild. If it hasn’t, then what’s the risk of being the first to encounter it? And if the exploit is so complicated that it’s not been seen in the wild, how realistic is it that someone will use it?
It may be so complicated to exploit that nobody ever will, yet it has a 9.8 and shows up on your vulnerability scanner saying, “You really need to deal with this.” The reality is that, too often, no context is applied to that score—such as whether the exploit has actually been seen in the wild.
Jon: Risk equals probability multiplied by impact. So you’re talking about probability and then, is it going to impact your business? Is it affecting a system used for maintenance once every six months, or is it your customer-facing website? But I’m curious because back in the 90s, when we were doing this hands-on, we went through a wave of risk avoidance, then went to, “We’ve got to stop everything,” which is what you’re talking about, through to risk mitigation and prioritizing risks, and so on.
But with the advancement of the Cloud and the rise of new cultures like agile in the digital world, it feels like we’ve gone back to the direction of, “Well, you need to prevent that from happening, lock all the doors, and implement zero trust.” And now, we’re seeing the wave of, “Maybe we need to think about this a bit smarter.”
Paul: It’s a really good point, and actually, it’s an interesting parallel you raise. Let’s have a little argument while we’re recording this. Do you mind if I argue with you? I’ll question your definition of zero trust for a moment. So, zero trust is often seen as something trying to stop everything. That’s probably not true of zero trust. Zero trust is more of an approach, and technology can help underpin that approach. Anyway, that’s a personal debate with myself. But, zero trust…
Now, I’ll just crop myself in here later and argue with myself. So, zero trust… If you take it as an example, it’s a good one. What we used to do was implicit trust—you’d log on, and I’d accept your username and password, and everything you did after that, inside the secure bubble, would be considered valid with no malicious activity. The problem is, when your account is compromised, logging in might be the only non-malicious thing you’re doing. Once logged in, everything your compromised account tries to do is malicious. If we’re doing implicit trust, we’re not being very smart.
Jon: So, the opposite of that would be blocking access entirely?
Paul: That’s not the reality. We can’t just stop people from logging in. Zero trust allows us to let you log on, but not blindly trust everything. We trust you for now, and we continuously evaluate your actions. If you do something that makes us no longer trust you, we act on that. It’s about continuously assessing whether your activities are appropriate or potentially malicious and then acting accordingly.
Jon: It’s going to be a very disappointing argument because I agree with everything you say. You argued with yourself more than I’m going to be able to, but I think, as you said, the castle defense model—once you’re in, you’re in.
I’m mixing two things there, but the idea is that once you’re inside the castle, you can do whatever you like. That’s changed.
So, what to do about it? Read Part 2 for how to deliver a cost-effective response.
01-15 – Demystifying data fabrics – bridging the gap between data sources and workloads
Demystifying data fabrics – bridging the gap between data sources and workloads
The term “data fabric” is used across the tech industry, yet its definition and implementation can vary. I have seen this across vendors: in autumn last year, British Telecom (BT) talked about their data fabric at an analyst event; meanwhile, in storage, NetApp has been re-orienting their brand to intelligent infrastructure but was previously using the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas.
At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The principle is to create a unified, synchronized layer between disparate sources of data and the workloads that need access to it—your applications and, increasingly, your AI algorithms or learning engines.
There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer, plugging into different data sources and adding capabilities that make those sources easier for applications, workloads, and models to access, while keeping them synchronized.
So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples:
- BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances.
- NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management.
- Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools.
- MongoDB (and other structured data solution providers) consider data fabric principles in the context of data management infrastructure.
How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage.
If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services!
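To illustrate that “data as a service” idea, here is a minimal, hypothetical sketch of a facade that exposes create, read, update, and delete over whichever backing stores sit behind it, so a workload asks for data rather than knowing where it lives. All class and method names are invented for illustration; no vendor’s API is implied.

```python
# Minimal, hypothetical sketch of a "data as a service" facade: workloads ask
# for data by dataset and key; the facade decides which backing store owns it.
# All names here are invented for illustration only.
from typing import Any, Dict, Optional, Protocol


class DataStore(Protocol):
    """Anything that can hold records: a database, an object store, an API."""
    def get(self, key: str) -> Optional[Dict[str, Any]]: ...
    def put(self, key: str, record: Dict[str, Any]) -> None: ...
    def delete(self, key: str) -> None: ...


class InMemoryStore:
    """Stand-in backing store so the sketch is runnable end to end."""
    def __init__(self) -> None:
        self._records: Dict[str, Dict[str, Any]] = {}

    def get(self, key: str) -> Optional[Dict[str, Any]]:
        return self._records.get(key)

    def put(self, key: str, record: Dict[str, Any]) -> None:
        self._records[key] = record

    def delete(self, key: str) -> None:
        self._records.pop(key, None)


class DataFabricFacade:
    """Routes create/read/update/delete requests to the store owning each dataset."""
    def __init__(self, routes: Dict[str, DataStore]) -> None:
        self._routes = routes  # dataset name -> backing store

    def create(self, dataset: str, key: str, record: Dict[str, Any]) -> None:
        self._routes[dataset].put(key, record)

    def read(self, dataset: str, key: str) -> Optional[Dict[str, Any]]:
        return self._routes[dataset].get(key)

    def update(self, dataset: str, key: str, changes: Dict[str, Any]) -> None:
        current = self._routes[dataset].get(key) or {}
        current.update(changes)
        self._routes[dataset].put(key, current)

    def delete(self, dataset: str, key: str) -> None:
        self._routes[dataset].delete(key)


# The workload asks for "customers" data; it neither knows nor cares whether
# that lives in a warehouse, a CRM API, or an on-premises database.
fabric = DataFabricFacade({"customers": InMemoryStore(), "orders": InMemoryStore()})
fabric.create("customers", "c-001", {"name": "Acme Ltd", "region": "EU"})
print(fabric.read("customers", "c-001"))
```

The design point is the routing layer: workloads depend on a stable service interface, while the stores behind it can be consolidated, migrated, or synchronized without the workloads noticing.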
I am also reminded of the origins of network acceleration, which would use caching to speed up data transfers by holding versions of data locally rather than repeatedly accessing the source. Akamai built its business on how to transfer unstructured content like music and films efficiently and over long distances.
That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness.
If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards what kind of data fabric might be the most appropriate, but this approach also helps avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:
- Network level: To integrate data across multi-cloud, on-premises, and edge environments.
- Infrastructure level: If your data is centralized with one storage vendor, focus on the storage layer to serve coherent data pools.
- Application level: To pull together disparate datasets for specific applications or platforms.
For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and improving application rationalization.
In the end, data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources.
While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves.
June 2025
06-13 – Reclaiming Control: Digital Sovereignty in 2025
Reclaiming Control: Digital Sovereignty in 2025
Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
What does the digital sovereignty landscape look like today?
Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer – for example, talking about classification and localisation rather than generic concepts.
We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data?
This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
How Are Cloud Providers Responding?
Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France (Microsoft has similar in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve for the problem of US over-reach, which remains a core issue.
Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
What Can Enterprise Organizations Do About It?
First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.
This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
Organizations shouldn’t assume everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, and avoids turning sovereignty into yet another initiative that solves nothing.
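As a thought experiment, here is a minimal sketch of that classify-and-prioritize step, using the probability-times-impact framing from earlier. The classification scheme, weights, and datasets are invented for illustration; a real exercise would use your own taxonomy and risk model.

```python
# Minimal sketch: score data assets by classification and risk, then work the
# top of the list first. Categories, weights, and datasets are illustrative
# assumptions, not a standard.
from dataclasses import dataclass

# Assumed classification weights (higher = stronger protection required).
CLASSIFICATION_WEIGHT = {"public": 1, "internal": 2, "confidential": 3, "restricted": 4}


@dataclass
class DataAsset:
    name: str
    classification: str   # one of CLASSIFICATION_WEIGHT
    probability: float    # likelihood of exposure or unavailability, 0..1
    impact: float         # business impact if it happens, 0..10

    @property
    def risk(self) -> float:
        """Risk = probability x impact, scaled by classification weight."""
        return self.probability * self.impact * CLASSIFICATION_WEIGHT[self.classification]


assets = [
    DataAsset("marketing web copy", "public", probability=0.3, impact=1),
    DataAsset("customer PII", "restricted", probability=0.2, impact=9),
    DataAsset("internal wiki", "internal", probability=0.4, impact=2),
    DataAsset("payment records", "restricted", probability=0.1, impact=10),
]

# Highest-risk assets first: these are the candidates for sovereign treatment.
for asset in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{asset.name:20s} classification={asset.classification:12s} risk={asset.risk:.1f}")
```

Even a crude model like this forces the useful conversation: which assets genuinely warrant sovereign treatment, and which can be handled with ordinary good practice.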
Where to start? Look after your own organization first
Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
July 2025
07-01 – The State of Cybercrime
The State of Cybercrime
An Interview with the UK National Crime Agency
Cybercrime, particularly ransomware, is evolving fast—from encrypted extortion to AI-driven attacks. I spoke with William Lyne, head of cyber intelligence at the UK’s National Crime Agency (NCA), about what’s changing, what isn’t, and how we can all stay ahead.
Hi William, thanks for joining me. What’s your role, first of all?
My substantive role is head of cyber intelligence at the NCA, working as part of the National Cyber Crime Unit (NCCU). Within the NCCU, the focus is on cyber-dependent crime—offenses you can only commit using a computer.
The NCCU is a mixture of investigators, intelligence officers, data scientists, and people with other technical capabilities. We’re predominantly a counterthreat organization. We deliver operations to disrupt the cybercrime threat impacting our communities.
What does cybercrime look like these days? Where is it coming from?
We come at it from the perspective of the online cybercrime ecosystem, or “underground,” as some call it. That’s where you have threat actors coming together with online capabilities. Some are leveraging capabilities to deliver ransomware operations. With others, you can see a crossover into fraud and other types of online offending.
People like to think about cybercrime groups like the mafia, but that’s probably atypical. It’s more likely they work as a loose affiliation of people. Max Smeets recently said that most cybercrime groups resemble badly run, chaotic tech startups. I agree, to an extent—there’s a spectrum. Some will be more organized, and some traditional, longer-standing crime groups do have a slightly more vertically integrated structure. But they all tend to be quite steady and are slow adopters of new and emerging technologies. They’ll only really change what they’re doing if an opportunity comes along to make more money, or if the business model or technique they’re currently employing becomes less profitable than it was in the past.
Our strategy is to go after that ecosystem, after the things that support and enable cybercrime business models—particularly ransomware and cross-threat enablers.
Back in 2023, for example, we were involved in the disruption of Genesis Market. That was a significant marketplace where threat actors were buying access to victims for use in ransomware operations, as well as fraud and the rest. We were proactive in going after the threat. Reactively chasing victims and victimization can be quite difficult.
What’s changing in the ecosystem?
Many of the vectors that cybercriminals use to attempt to victimize people are the same things they’ve done for many, many years. Email phishing and spamming are still common.
But we see two interconnected trends. First, you no longer need to be really technically proficient. In the past, we’ve associated cyber-dependent crime with Russian-speaking groups, but that’s no longer the case.
Second, when I started out 15 years ago, the tools and capabilities you could buy were relatively limited. Things like DDoS—relatively low-level, easy-to-mitigate tools—were available for sale. But now, you can gain access, freely or relatively cheaply, to sophisticated cyber tools and capabilities that can cause significant damage. You no longer need a reputation or history of operating within certain marketplaces or forums. So, that barrier is lowered as well, which proliferates cyber tools and capabilities within that ecosystem.
That’s the difference. Overall, you no longer need significant amounts of money or credibility to build a cybercrime operation like you did in the past.
Is ransomware still the biggest game in town?
Ransomware is our priority. It’s the most significant cybercrime and cybersecurity threat to the UK. It’s one of the threats that the public and organizations are most likely to encounter.
There’s a misconception that ransomware is “big game hunting”—that cybercriminal groups are deliberately targeting large organizations in attacks. But in reality, they triage. They buy bulk access from brokers or marketplaces, then do cursory research to identify ones to prioritize.
Just because you don’t see yourself as an enticing target doesn’t protect you from the threat. Recent Coveware statistics show that the most common ransomware victim is a small- to medium-sized enterprise, not a big multinational company.
If you have a significant online footprint—and almost every business is IT-dependent in one way or another—you are a potential victim. This isn’t something you can assume will pass you by.
Ransomware has evolved and diversified in terms of how threat actors extort funds from victims. We had traditional ransomware, which was just encryption. Then we had double extortion—encryption plus data exfiltration. Now, in many cases, victims have good backups, so it’s encryptionless, or data exfiltration and extortion only.
You can deploy exfiltration quicker—encryption can be hard to obtain and tricky to configure. People pay because they don’t want to appear on the data leak site. If exfiltration doesn’t work, you just move on to the next victim.
We’ve seen ransomware payment rates decrease, which is positive—obviously, there’s still victim harm and impact, but it could mean that attackers are taking less time to try to extort a victim; they might just move on to the next.
Cadence becomes the lever to generate more money. The sheer level of vulnerability out there means the market is relatively unsaturated—threat actors don’t suffer a shortage of access to victims. That fosters collaboration as opposed to competition, and it also fuels that steady state.
What’s making it more complicated? Software supply chain security is clearly very important, and we’re now seeing a wave of business supply chain risk.
Businesses no longer operate as islands; they’re interdependent. They interact with other businesses, other industries, and other sectors. And we’re seeing a collision of the cybercrime ecosystem with the business ecosystem.
Organizations are exposed to significant supply chain vulnerabilities. As we live in an increasingly technologically enabled, interconnected world, it feels like that attack surface—or opportunity for criminals—gets bigger all the time.
Shall we talk about AI? AI gives you two things. First, it further reduces the threshold of effort when you use AI to code an attack. And second, you can scale much faster.
AI increases efficiency across different steps within cybercrime. It can enhance your phishing messages. It can help automate the command and control of your victims. It can help in many ways with AI-generated coding or improving code.
We’re seeing that now. It hasn’t turned the cybercrime business model on its head, but it’s interesting to think about how that might happen. We could see the adoption of AI in ransomware operations. Ransomware-as-a-service offerings could start to include commoditized AI capabilities at some point.
That would unlock new capabilities that threat actors will look to exploit. We need to get our crystal ball out a little to think of what those may be.
Are ransomware attacks targeting old infrastructure and databases, or is it all about new stuff and new interfaces?
That’s a good question. We do see a big range. Some ransomware groups are purchasing zero-day exploits and using them. But I’d say most ransomware groups are going after victims with unpatched, vulnerable internet-facing infrastructure.
Strategically, resilience is really important. That’s a major theme in the National Cyber Strategy—we need to improve our resilience to mitigate against a range of online threats, of which cybercrime and ransomware are just one part. Employing MFA is an example of how the public and organizations can seek to protect themselves.
So, patching, training, and MFA? What else would you say to help organizations?
I’m not a cybersecurity expert per se, but those are not bad places to start. We often point to the guidance available in the UK, such as the NCSC guidance on how victims can protect themselves. Cyber Essentials is a fantastic initiative. Doing the basics well will protect you from a significant amount of the threat.
I understand that properly protecting yourself is hard. If you’re part of a big organization, it’s complex to implement. If you’re a small- to medium-sized enterprise, it’s just hard in general, with budgets and maybe limited expertise on the team.
But ransomware remains largely opportunistic. These groups often utilize relatively low-sophistication tools and techniques to gain initial access to victims. They’re exploiting vulnerable internet-facing infrastructure, for example—password reuse, credential replay, whatever it might be.
I was on a panel at InfoSecurity, and fellow panellist Jen Ellis had some great advice: it’s better to think of resilience as a journey. “Start with one thing that is manageable and realistic,” she said. “Then, decide on the next. It’s OK—practical even—to take it a piece at a time. We don’t expect kids to run before they’ve learnt to crawl.”
Are we just going to be in a coping strategy situation for the next 20 years? Or are there any lights you can see at the end of any tunnels?
We’re a counterthreat organization. I think we’ve demonstrated, with some successes over the recent years, that we can be really impactful—like the disruption of LockBit, operations against EvilCorp, or the takedown of access marketplaces.
You need to do all these things collaboratively. There’s some great activity going on right now, led by European partners, called Operation Endgame—they’re going after info-stealers, marketplaces, all sorts. It’s a brilliant activity.
Operationally, I was reading an article saying that law enforcement is having a strategic impact against the ransomware threat. We have really joined up our public and private sector partners, both nationally and internationally.
At the NCA, we’re lucky to have brilliant relationships, and that joined-up, shared interest and shared strategy is bearing some significant fruit. So yes, I think we are making progress.
The UK government has developed policy proposals as part of the Counter Ransomware Initiative. There’s a diversity of thought and opinion in the community around those, but I’d encourage people to have a look. It’s brilliant that the UK is taking the lead.
Counterthreat operations and operational success are also great communications hooks for delivering protection and resilience messaging to the public. For example, through the LockBit activity, we were able to demonstrate that when people paid to have stolen data deleted, it was never actually deleted. Just demonstrating to victims “you can’t trust these threat actors” is useful.
The work you’re doing is worth publicizing—communicating the successes is as important as telling people what will go wrong.
For us, it’s about conveying those things. Do the basics really well; that’s fundamental. There are steps you can take; you are not helpless as a potential victim in this space.
At the same time, it’s not just up to individuals—it’s for all of us in the community. We all want the same: to counter the threat and protect potential victims. So we all have a role to play, in many respects.
Let’s go with that—lovely to speak to you, Will.
Thank you!
07-06 – From Sales Enablement to Buyer Enablement
From Sales Enablement to Buyer Enablement
Moving Toward the Win-Win
Sales enablement is now a discipline, and for good reason. Sales teams at technology vendor organizations are often stymied in doing what they’re good at. The disconnect between marketing and sales has been well-documented, debated, and addressed, with approaches like account-based marketing emerging in response.
We see it often as industry analysts. While so much discussion focuses on buying decisions, analyst relations activity often sits within the marketing function and stumbles when it comes to helping sales. No wonder analyst relations professionals struggle to demonstrate ROI.
It’s a familiar and important topic—we offer services to support sales leaders. But here’s my personal twist: I care more about how technology is defined, bought, and deployed than how it’s marketed and sold.
At heart, I remain the person I was when I was procuring and deploying technology. And that, too, could be like running through mud. My question, then, is why don’t we talk more about buyer enablement? Why not flip the script and look at technology purchasing from the buyer’s perspective?
Spoiler alert: it’s a world that looks nothing like sales, marketing, or even the strange microcosm that is the analyst space. If you work in analyst relations and you wonder why I ignore or push back on generic definitions of “influencing buying decisions,” then read on.
When I first became a buyer, I struggled. I was given a budget, which grew considerably during my time in the role. There was no guidance on what “good” looked like—how to plan capacity, work with procurement, understand amortization, or “sell” my recommendations to my bosses and those further up the chain.
So, how can sales teams set themselves up to succeed? The first step is to understand the buyer.
What Does Enterprise Buying Look Like?
The enterprise buyer’s world isn’t a generic place where generic people are hanging around waiting to buy things and just need a bit more of a push. It tends to be divided by role (or, in marketing terms, “persona”).
Also, it’s not all about big-ticket new purchases. In reality, much (let’s say, three-quarters) of the technology budget is already earmarked for maintaining or improving existing infrastructure, software, and services, leaving a small portion for new deals.
People making those big-ticket decisions tend to operate on one-, two-, or three-year cycles. They’re looking at what their budget is going to be, where they can put their money, where they can cut costs, what they no longer need, and what they need now.
For many vendors, therefore, it’s about getting a foot in the door. Relatively recent concepts like the Challenger Sale methodology pit hunters against farmers, aiming to occupy places the incumbents can’t go. It’s a reason “land and expand” models are so prevalent.
Things get even more fragmented in the world of software development and delivery. Developers want to create, update, and maintain code, testers want to test, and platform engineers are looking to deploy into the cloud. Meanwhile, managers with a bit of the budget would really like visibility into what’s going on.
On the operations side, you’ll see people trying to get ahead of problems, even as they fight the fires and get shouted at (did you ever call a help desk just to say, “Great job”?). They want tools to collate logs and events, build a picture, make planning decisions, allocate work, and so on.
Across the board, we see small, medium, and large acquisitions. So there isn’t really such a thing as an enterprise buyer per se, unless you’re looking at the person who oversees all of it. The CIO’s job is to set strategy and give the IT organization what it needs.
Everything rolls up to the ultimate sign-off, but the CIO can’t be involved in every decision. It’s typically delegated. The CTO sets the scene, and then the VPs of engineering, infrastructure, and operations each have a budget. If there’s a need, it had better be clearly articulated in order to cut through the legion of other demands on the budget.
Removing the Blockers to Buying: Less Push, More Aligned
A lot of what stops sales is that buyers can’t buy. As a buyer, I wanted new or better tools, but I might not have had the budget or a clear way of telling my boss that I needed it. Most of all, I lacked time to think through the options and decide which made the most sense.
I often say that being an analyst is a privilege. The biggest reason for this is that I have time to think, unlike when I was managing IT systems. Sure, no work is without stress, but I have never experienced the “robbing Peter to pay Paul” scenario as I did when I was coping with the complexity of an ever-expanding IT environment.
I needed help and was likely to put my money with the vendors that helped me the most. I remember working with Sun Microsystems as the incumbent server supplier, as well as with Hewlett-Packard. If the latter had been more supportive, who knows how that balance might have shifted.
The other challenge is that every technology decision—unless it’s just a relicensing or renewal—involves change. People need to be on board. I can buy a tech package to help my user base, but if they don’t want to use it, it becomes shelfware. Meanwhile, a salesperson may hit quota for a quarter, but the bigger deal will stay forever out of reach.
So when I think about sales enablement, I’m thinking about how to help sales help buyers—not just to make decisions, but to manage the complexity, change, and internal politics that come with those decisions against a backdrop of highly restricted time to think.
Giving buyers tools to help them think through the options is valuable. Maybe they won’t choose your product. In that case, you haven’t lost a sale—you’ve removed the opportunity cost and can now focus on deals that may succeed.
Buyer enablement isn’t just a clever phrase—it’s the missing piece of the puzzle. People buy from people, so if you understand me and my buying needs, I will still be there, quarter over quarter, whichever vendor you work for. Enable me to succeed, and you will be enabling yourself.
August 2025
08-11 – Can Technology Save the National Health Service?
Can Technology Save the National Health Service?
The UK’s National Health Service (NHS) has published a 10-year plan built on three “past-to-future” pillars: shifting from hospital to community care; from sickness to prevention; and from analogue to digital. While I can’t speak to the first two, I have been reviewing the third.
Technologically, this breaks down into three key elements: a new NHS App consolidating public-facing services and information; a single source of patient, health, and treatment data; and an access platform for NHS providers (hospitals, specialists, GPs) that draws on both.
All are desperately needed. I love our NHS—its clinical staff are diamonds, and their achievements often miraculous—but as a system, it is broken. Our ability to meet the healthcare needs of our citizens is hampered by outdated technologies, fragmented systems, and inefficient processes, as the 2024 Darzi review confirmed.
The billion-pound question is whether better use of technology can make a genuine, material difference. The NHS isn’t unique in this—globally, healthcare lags other industries in innovation. Might better systems and processes reduce risk, improve outcomes, and release funds that can be put towards patient care?
I’d love to say technology can “fix” healthcare or provide a solid platform for the future. Unfortunately, history suggests otherwise. The NHS already has layers of technology; adopting something new is more of a problem than letting go of the old.
As both a technologist with hands-on healthcare experience and an industry watcher, I’ve seen many successes—and just as many failures. In the spirit of the NHS plan’s past-to-future theme, here are seven imperatives based on my own experience:
1. Prioritize change management over technological hope
Technology is powerful, but it doesn’t work in a vacuum. A couple of decades ago, data warehousing was hailed as transformative. Then it was the Cloud. Now it’s AI.
This isn’t just vendor marketing hype: systems integrators, consultants, and technology leaders have also believed that stating a goal will somehow make it happen. The NHS plan’s current statement of “We’ll make an app, that’ll sort things out” is another example.
Time and again, we’ve backed new technology waves without addressing the real challenge: change management. Transformation must lead; “digital” follows. After decades of “starting to see results,” it’s time to put change first – which means building stakeholder buy-in, proving value through results, and the rest.
2. Build from the bottom of the stack up
Interoperability is key. The big-ticket items—the app, the data lake—mean little without the underlying glue: secure, standards-based data exchange, operational manageability, and upgradability.
Platform strategy needs to consider these needs at every level of the stack. At what level should data-in-motion privacy be guaranteed, for example – with tag-based microsegmentation at network level, or using container-level approaches?
A shared, policy-based architecture (based on common design patterns) across NHS organizations, integrators and indeed service providers, would enable delivery whilst assuring compliance, sovereignty, and risk.
3. Accept legacy data, and leave it where it is (for now)
The vision of a single, centralized NHS data source is compelling, but unrealistic. Instead, we should be building and maintaining an open, accessible library of known data assets through an intermediate “façade” layer.
Modern data fabrics enable us to leave data in place, manage and timestamp access, and gradually migrate what’s current and useful, like a slowly draining reservoir.
This avoids bottlenecks, reduces costs, and supports future-safe, API-driven data management.
4. Anticipate technical blockers before they happen
Even the best solutions can fail in deployment, often for predictable reasons. Operational complexity, such as that found in the NHS, can make even the best-planned delivery slow to a halt.
Equally, innovators can move too fast, missing something essential from the specification along the way. Once it is discovered, it is too late to address. Such issues are common: as software luminary Ed Yourdon (RIP) once said, “Death march projects are the norm, not the exception.”
Rather than assuming “this time will be different,” plan for worst-case scenarios. Put deployment risk on the board’s agenda and treat stakeholder skepticism as a valuable tool for mitigation, not a barrier to progress.
5. Replace proof-of-concept pilots with minimum viable products
As the quote goes, the NHS may have “more pilots than British Airways,” but such studies are designed to measure potential, not to have an impact or deliver change.
In addition, they can fall victim to prototype thinking: what starts as a test model can become a tool in active use, even though it was not built for robustness or scale.
A better approach is product-based thinking. Base units of delivery on stakeholder needs, not project goals, and deploy small, complete iterations that provide immediate value.
6. Lead with open vendor relationships, not lock-in
Suppliers of all descriptions—vendors, integrators and service providers—aim for long-term retention through extended contracts, licensing policies, and technical dependencies. Indeed, the NHS strategy paper talks about some contracts having 10-year terms.
This is to be expected, but it is too often overlooked up front by customers and prospects. It falls to procurement best practice to protect against lock-in at every level, from cloud data export to technological incompatibility to skills development.
This also goes to the value of healthcare data, and the role of third-party providers and insurers. The NHS strategy covers public healthcare only, but its data platforms can be made available to private sector health delivery, at a negotiated cost.
7. Start with the patient, not the clinician
Finally, we need to address the fundamental missing piece of modern healthcare: patient centricity. Healthcare systems are built for clinicians, not patients. Information is delivered in medical jargon, and interfaces can assume high levels of digital literacy.
At a recent healthcare day organised by AWS, one clinical speaker said, “Until we confront these blind spots, our digital transformation will remain skin-deep.” And recent NHS studies are still provider-centric rather than patient-centric.
The NHS must move from provider-based data management models to ones which support how patients think and act. These include analogue options supported by digital means, for example with AI-driven interfaces.
Getting Ahead of the Curve
Healthcare lags other industries, but that gap offers an opportunity to leap forward. New technologies such as AI can help—but without addressing foundational issues, they risk becoming part of yet another missed opportunity.
This isn’t about needing more money. The NHS already receives around £200 billion annually—8% of UK GDP—with about a quarter set aside for potential litigation from care failures. Patients are suffering unnecessarily, when they could be part of the solution: for example, catalysing demand for better information could drive improvements.
It’s time to tackle the technological elephants in the treatment room, reduce waste, and focus spending on better capabilities, greater visibility, and significantly improved patient outcomes.
As both technologists and people, our future health depends on it.
08-20 – Should you care about GitOps?
Should you care about GitOps?
(TL;DR: Yes)
GitOps has long interested me because it spans both applications and infrastructure. It promises more reliable software delivery, stronger infrastructure control, and greater efficiency and productivity.
So, what is it precisely? Let’s take a look.
What’s GitOps?
Let’s start with version control. One of the great principles of software configuration management, of which version control is a part, is being able to recreate a version of the application from specific versions of its component parts. You should be able to say, “Give me precisely the setup that I had on March 31st.”
In the virtual world we now occupy, alongside this comes Infrastructure as Code. Being able to define an environment in Terraform and watch it come to life is another powerful concept.
Managing that definition alongside the application code or libraries that will run within it is particularly compelling. That’s what GitOps principles, practices, and tools bring. They enable you to manage the “source code” for a target environment as a core, version-controlled part of your system.
What really excites me about GitOps is its closed-loop nature. Consider this: once you have deployed an application, you’re unlikely to change the binaries at runtime. More typically, you change the source, then rebuild and redeploy.
However, in infrastructure, you often tweak things, such as where services run, how much memory they use, security settings, and so on. That’s common in physical environments and just as easy in the cloud.
Doing so, however, introduces risk. Unless you keep detailed notes, you won’t know what your runtime environment looks like after reconfiguring it. You’d struggle to rebuild the version you had last Wednesday, never mind last year.
GitOps tools can support the overall approach. Tools like FluxCD and ArgoCD emerged as open-source projects to enable two mechanisms. First: left to right. You edit your configuration in Git and redeploy from there.
But GitOps can also work right-to-left, detecting if a runtime environment has changed, and enabling a comparison with a versioned “last known good” configuration. Some tools do this automatically, and (as with many things open-source) a debate continues about whether this is actually GitOps – but either way, the feature is useful.
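To make those two directions concrete, here is a deliberately simplified Python sketch of a reconciliation check. The configuration values and helper functions are hypothetical stand-ins: FluxCD and ArgoCD do the real work against live Kubernetes resources rather than Python dictionaries.

```python
def load_desired_state() -> dict:
    # "Left": the versioned configuration held in Git (illustrative values).
    return {"replicas": 3, "memory": "512Mi", "image": "app:1.4.2"}

def read_runtime_state() -> dict:
    # "Right": what is actually running, as reported by the environment.
    return {"replicas": 3, "memory": "1Gi", "image": "app:1.4.2"}

def drift(desired: dict, actual: dict) -> dict:
    # Keys whose desired and actual values differ.
    return {k: (desired.get(k), actual.get(k))
            for k in desired.keys() | actual.keys()
            if desired.get(k) != actual.get(k)}

def reconcile(auto_heal: bool = False) -> None:
    changes = drift(load_desired_state(), read_runtime_state())
    if not changes:
        return
    print(f"Drift detected: {changes}")       # right-to-left: flag the change
    if auto_heal:
        print("Re-applying Git-held state")   # left-to-right: redeploy from Git

if __name__ == "__main__":
    reconcile()
```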
What does GitOps look like in practice?
The OpenGitOps Project defines principles to standardize GitOps terminology and approaches. That left-right-left thing is known as the reconciliation loop. If the runtime configuration changes, you receive an alert and have the ability to reconcile the actual state with what’s defined in your config library.
Defining your environment ahead of time is known as a declarative model: according to GitOps, you shouldn’t have to care how the environment gets built, just state what you want built. (Spelling out the “how”, step by step, is known as an “imperative” approach.) Versioning is another principle, and the final one is an automated build based on the required configuration.
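As a toy illustration of the difference — the cloud object and its methods here are hypothetical, and in practice the declarative half would be a Terraform file or Kubernetes manifest rather than a Python dictionary:

```python
def build_imperatively(cloud) -> None:
    # Imperative: I spell out every step myself, in order.
    vm = cloud.create_vm(cpus=2, memory_gb=4)
    cloud.attach_disk(vm, size_gb=100)
    cloud.open_port(vm, 443)

# Declarative: I state the end result and let the tooling work out the steps.
DESIRED_STATE = {
    "vm": {"cpus": 2, "memory_gb": 4},
    "disk": {"size_gb": 100},
    "open_ports": [443],
}
```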
Thus far, you might say, “I bet GitOps looks good on a whiteboard.” That is, great theory, but does it make a difference? According to a recent report from Octopus Deploy, it absolutely does. If you go to page 49, you’ll see software delivery performance increases in proportion to the depth and breadth of GitOps practices adopted. Not only this, but software reliability and developer well-being also improve.
To be fair, it would have been shocking if that were not the case. Even the Romans knew it was cheaper to maintain a system than to repair one after it had been broken; and we all know that it’s less stressful. GitOps simply sets out how that maintenance should take place, in relation to this more modern problem space.
It’s worth drilling into breadth and depth. Breadth: Is GitOps being adopted broadly across the organization? Depth: how thoroughly and comprehensively are the GitOps principles being applied?
Something my late colleague, Michael Delzer, learned at American Airlines was: GitOps isn’t just for cloud-native environments. According to Michael, GitOps principles make sense in any environment where you can create and apply an infrastructure definition.
What’s missing from GitOps?
The missing piece, if I may, is that you need a level of ongoing monitoring, not just events triggered by configuration changes. In the real world, things change—whether you like it or not. It is therefore important to track configuration drift over time—that is, the gap between your planned state and actual state.
Drift visibility might be seen as a ‘nice to have’ in GitOps, but it’s a must-have practice. The GitOps model includes alerting on configuration changes, and that’s useful. You also need visibility into how big the drift is today, whether it is increasing over time, and where to focus efforts to reduce it.
Tools do exist to monitor configuration drift outside of GitOps, but they are not embedded in the methodology. We need what I would call GitOps observability, or (dare I say) GitObs, as a core element of the approach.
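As a sketch of what that observability might look like — entirely illustrative, not a feature of any existing tool — you could record the size of the drift at every check and watch the trend, rather than only reacting to individual change events:

```python
from datetime import datetime, timezone

drift_history: list[tuple[datetime, int]] = []

def record_drift(desired: dict, actual: dict) -> None:
    # Count the settings where the runtime no longer matches the Git-held state.
    changed = [k for k in desired.keys() | actual.keys()
               if desired.get(k) != actual.get(k)]
    drift_history.append((datetime.now(timezone.utc), len(changed)))

def drift_trend(window: int = 10) -> str:
    # Is the gap between planned and actual state growing over time?
    recent = [count for _, count in drift_history[-window:]]
    if len(recent) < 2:
        return "not enough data"
    return "increasing" if recent[-1] > recent[0] else "stable or shrinking"
```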
What to do about GitOps?
To conclude, while we can debate the details, the GitOps approach is no more or less than combining software and (virtualized) hardware configuration management, enabling age-old principles to be applied to modern application deployment.
Ultimately, GitOps isn’t just a ‘thing’—it’s simply best practice by another name. If you follow what it suggests, you’re going to be in a better place – you’ll be able to deliver higher quality software at lower cost and risk, more efficiently and with less stress.
The place to start is to learn how the principles apply to your environment, and (as per my GitObs comment) to ensure that you cover your processes end-to-end, from development to operations.
Embrace GitOps, and both your customers and your developers will thank you for it.
September 2025
09-24 – Why Security Awareness Remains on the Front Lines of Access Management
Why Security Awareness Remains on the Front Lines of Access Management
Back in 1993, I was working in France, overseeing a Sun Microsystems environment. I still remember running a simple password checker script on the central /etc/passwd file. Most passwords were easy to guess; there were a lot of swear words, so on the upside, I was able to improve my vocabulary.
Here we are, some three decades later, and are things really any better? Passwords are a scourge: they litter our phones and clutter our computers, leaving us with no recourse but to take shortcuts.
In the corporate world, each compromised authentication string leaves a tiny window in the enterprise defences, which, if breached, might as well be a front door. Hackers don’t generally care about individual user accounts, but rather, what they can do when they’re in – kick off a bigger data breach, or a ransomware attack.
People remain a weak point — not out of malice or carelessness, but because nobody can realistically cope with today’s complex, layered, fast-changing digital environment. It’s not just the end-users. A major burden is borne by operations and security teams, who must manage people’s password resets, onboarding and offboarding, and monitor for poor behaviors and attempted breaches.
And then, of course, we have developers. What of the username-password combinations required to access databases, or tokens to authenticate with APIs? These can be hard-coded into applications, held as environment variables, or stored as run-time data, and transmitted in the clear via a weak transport protocol.
Secrets sprawl is a problem for any development shop. However, so is the human trait of writing code to test something out without having security front-of-mind. If I want to build quickly, I don’t want to jump through all the hoops. But then, of course, three weeks later, the temporary hard-coding is still in place.
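By way of illustration only — the variable names are hypothetical — the difference between the “temporary” shortcut and the minimum sensible alternative is small, which is partly why awareness matters so much:

```python
import os

# The three-weeks-later anti-pattern, left as a comment so nobody copies it:
# DB_PASSWORD = "s3cr3t-do-not-commit"   # hard-coded, now in Git history forever

def database_password() -> str:
    """Fetch the credential injected by the platform or a secrets manager."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not set - refusing to fall back to a "
                           "hard-coded value")
    return password
```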
The final pillar of risk is not about people, but machines. Developer secrets are one piece of a bigger puzzle, extending to non-human endpoints such as edge devices, SaaS APIs, and other points of connection, potentially to be accessed via VPN or tag-based microsegmentation.
These needs are extending into AI Agents, which not only need to authenticate but would also ride roughshod over any privacy policy if left unfettered. Like humans, agents are only as safe as the credentials they’re given — but unlike humans, they are amoral by nature. Equally, agents need to be managed and protected securely.
“The car that starts the season isn’t the car that ends it—there are constant iterations,” said Nimesh Kotecha, Group Head of End User Services, Oracle Red Bull Racing. “The data and workflows around those iterations are our crown jewels. Many applications feed into those decisions, so having controls to ensure only authorised access, with accountability and auditability, is crucial. 1Password fits seamlessly into our existing ecosystem, complementing the tools we already rely on to deliver secure access, while giving us a unified view and the metrics we need.”
So, tools and workflows work well together in an organization which puts security first. As a consequence, for example, least-privilege access (deny unless authorized) was implemented by default, not by exception. Such approaches are great in an organisation where you have the backing of the management team and you’re on top of the complexity, as we can see in the case of Oracle Red Bull Racing (and power to their elbow).
However, they’re not necessarily going to be so effective where security isn’t a core pillar at board level, or where the organisation is too big to apply policies consistently. Even if SecOps can get on top of secrets and non-human identities, the thousands of users across multiple departments will leave a million tiny holes in the otherwise pristine attack surface.
It would be truly great to think that we could solve this conundrum through technological means alone. Perhaps one day we will – through a combination of white-hat agents and uniquely passkey-based access, maybe, or some quantum technology we haven’t worked out yet.
Until some combination of tools, processes, and architecture solves the problem completely, awareness remains our best defence. Another experience from my dim past was as a security awareness trainer for various government departments – I’ve seen firsthand how the security-first lightbulb can switch on in even the least technical of staffers, from board to the front line.
For this reason, I’d propose to security vendors that they consider end-user-facing “personal security posture” dashboards on their tools, making security awareness part of their daily information feed. Just a thought!
In the meantime, regular doses of security awareness remain an important piece of the overall puzzle. From experience, it’s a Goldilocks thing – too much advice or policy can become overbearing, too little and the message gets forgotten. But be in no doubt that it should be part of any organization’s security response, whatever the scale.
November 2025
11-24 – Don’t Be Afraid of the Big Bad Cloud Sovereignty Wolf
Don’t Be Afraid of the Big Bad Cloud Sovereignty Wolf
Sovereignty is one of those topics that is both everywhere and, at the same time, vague and undefined. It’s not just a European concern – it’s part of a global shift in how organizations think about control, autonomy, and the freedom to function without extra-jurisdictional interference. Yet, ask two people, and you will get two definitions.
Even so, the signal from organizations is that it matters. From my own conversations, I know that cities and public bodies in Europe are adopting sovereign-first strategies; industrial suppliers are finding sovereignty baked into customer contracts across Africa; and hyperscalers are having to address the topic head-on in regions like South America. This clearly isn’t a passing regulatory fad.
So the first question isn’t “does sovereignty matter?” because it clearly already does. In the DACH region alone, around 80% of organizations say sovereignty is strategically important, and roughly 40% are actively planning around it. This level of demand is driving the hyperscalers (AWS, GCP, and Azure) into action, from what was a standing start just a handful of years ago.
The real question is: given the environment I already have and the business I’m trying to run, what is the impact of the sovereignty needs I want to address? Taking the first point, we can be lulled into thinking everything is cloud-based these days, but in reality, hyperscalers only account for about a third of tech infrastructure.
Every year at GigaOm, we look at cloud adoption, and every year the picture is refreshingly mundane: less than 40% of enterprise workloads run exclusively in cloud. The rest involves on-prem data centers, legacy systems, and hybrid combinations of local and hosted applications. Sovereignty needs to take all this into account, not just the cloud elements.
So if you’re wondering where to begin, the unavoidable truth is: you start by understanding what IT you already have — cloud-based data and workloads, systems and dependencies, controls, who’s running what, and who has the keys. Not everything needs to be sovereign, so you’ll need a clear picture before you can map that onto your sovereignty stance.
Then, successful sovereignty is about taking a risk-based view. Risk is probability multiplied by impact, as every security professional knows; like Cyber, sovereignty is about mitigating those risks through architecture, tooling, and process changes. Not everything needs to be sovereign, and not all sovereign requirements are equal, so identify the high-risk areas (patient data or identifiable telemetry) and deal with those first.
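For illustration — the workloads and scores below are entirely invented — the triage really can start as simply as scoring probability against impact and working down the list:

```python
# Hypothetical workloads, scored on a 0-1 probability and a 1-10 impact scale.
workloads = [
    {"name": "patient-records",  "probability": 0.3, "impact": 9},
    {"name": "device-telemetry", "probability": 0.5, "impact": 6},
    {"name": "public-website",   "probability": 0.6, "impact": 2},
]

for w in workloads:
    w["risk"] = w["probability"] * w["impact"]   # risk = probability x impact

# Highest-risk items first: these are the sovereignty candidates to tackle now.
for w in sorted(workloads, key=lambda item: item["risk"], reverse=True):
    print(f"{w['name']}: risk {w['risk']:.1f}")
```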
Costs also come up a lot in sovereignty conversations. Yes, sovereign options cost more on paper (15–20% is often cited), but that is a red herring when compared to the amount of cloud overspend we still see. Get on top of what you have, understand the risks, and you may find the smaller premium of sovereignty is more than offset by the amount saved across the cloud estate.
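To put rough numbers on that — and these are invented for illustration, including the overspend rate, which is my assumption rather than a quoted statistic:

```python
annual_cloud_spend = 10_000_000   # hypothetical estate
sovereign_share    = 0.20         # assume only 20% of workloads need sovereign hosting
sovereign_premium  = 0.20         # top of the 15-20% range cited above
overspend_rate     = 0.30         # assumed waste in an un-optimised estate

extra_cost  = annual_cloud_spend * sovereign_share * sovereign_premium  # 400,000
recoverable = annual_cloud_spend * overspend_rate                       # 3,000,000
print(f"Sovereignty premium: {extra_cost:,.0f}; recoverable overspend: {recoverable:,.0f}")
```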
But please, don’t go hunting for a mythical “sovereign architecture.” What does exist is architecting for sovereignty — taking your environment, strategies, policies, and obligations, and shaping your architecture around the degree of control, agility, and trust you actually need for your critical workloads.
If I could boil this sprawling topic down, I’d start with this: don’t be intimidated. Nobody has sovereignty sewn up, and you’re not trying to boil the ocean. Begin by understanding and classifying your data and workloads, and address them according to priority and risk.
Secondly, see sovereignty as an opportunity rather than a burden: the effort you put in will pay off in a more flexible, future-proof architecture. Handled intentionally, addressing sovereignty strengthens the organization rather than constraining it.
Third, use it as a lever for funding. When I was running IT systems, I’d ask the CFO for something and inevitably get the “Do we really need this?” conversation. If I couldn’t justify whatever it was, no chance. But if I said, “It’s required for compliance,” the door opened — and the upside was I got to modernise my environment.
Finally, keep the screws on the hyperscalers. They’re reshaping their offerings in direct response to customer pressure. That is also a good reason to stay open-minded and flexible – while your current incumbent provider may not meet your needs now, perhaps they will in the months to come.
Sovereignty may begin as a vague concept, but the underlying desire is crystal clear: organizations want control over their IT assets. Over time, sovereignty will likely settle in as one aspect of a broader technology-governance agenda, alongside compliance, risk, and architecture. For now, though, it deserves focused attention — and your infrastructure will thank you for it.
2026
Posts from 2026.
February 2026
02-03 – AI, Data Architecture, and the Rise of Agentic
AI, Data Architecture, and the Rise of Agentic
As AI continues to drive change across the IT landscape, Field CTO Jon Collins sat down with analyst and data management expert William McKnight to discuss why AI is raising the bar for data maturity, what “agentic” really means in practice, and why data architects aren’t going anywhere.
Jon: How do you see the AI market right now? Is it a bubble?
William: Tech leaders are making circular investments that are driving up the market. Does it have value at the level of investment we’re seeing? Today, probably not, but they say they’re investing for the long haul.
Nonetheless, AI still has tremendous value for enterprises, which can leverage the AI being developed. It’s not nearly as expensive for an enterprise to adopt AI and reap the benefits.
What AI and data-related challenges are you seeing as a primary focus?
Data architecture — because enterprises are trying to add AI to what they have. Nothing is taxing the data infrastructure quite like AI. Sometimes they learn that they don’t have a sufficient architecture to fully benefit from AI. Most larger enterprises don’t really have a single architecture — they have varying levels of data maturity across the enterprise. There’s a lot of redundant data being stored, for example.
I’m not saying the initial architecture has to be perfect. Enterprises can build on what they have and get some AI going, which creates the motivation to do more and develop data maturity. Initial initiatives are putting AI band-aids on top of what they have, which don’t give maximum value. You need a certain level of data maturity in order to get value out of your AI.
What about the architectural direction: “More decentralised… data mesh… lakehouse.”
The architecture style that I see organizations wanting more of, either by name or not, is the data mesh: decentralized components across the organization, creating data products for the rest of the organization and beyond, and sharing those products via APIs internally.
For the foreseeable future, it’s going to be backed by the data lakehouse – the best of both worlds between data warehouse and data lake. Also, there’s a data fabric over the top of all of this. The mesh is how the data is laid out architecturally, and the fabric is the delivery mechanism that lets you connect to data wherever it is.
Plus, MCP will be adopted as part of the architecture. With MCP, you’re connecting lots of data, giving the agents good context.
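To ground the mesh idea, here is a toy Python sketch — illustrative only, not the MCP SDK or any vendor’s fabric — of decentralised domain teams publishing “data products” behind a small, discoverable contract:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DataProduct:
    domain: str                   # e.g. "claims" (hypothetical domain)
    name: str
    schema: dict[str, str]        # the published contract consumers rely on
    query: Callable[[dict], Any]  # owned and served by the domain team

CATALOGUE: dict[str, DataProduct] = {}

def publish(product: DataProduct) -> None:
    # The mesh: decentralised teams publish products for the rest of the org.
    CATALOGUE[f"{product.domain}/{product.name}"] = product

def discover(domain: str) -> list[str]:
    # The fabric's role, crudely: find data wherever it is and connect to it.
    return [key for key in CATALOGUE if key.startswith(domain + "/")]
```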
Data architecture is a balancing act between what would be ideal — a centralized data warehouse that performs perfectly — and the realities of how initiatives happen in the organization.
That’s why you still need good data architects: they’re not going to be AI’d away in 2026. You still need expertise at the top of this. The data architect can make the right trade-offs between centralization and decentralization which can create a good data mesh.
So what’s now and next in 12 to 18 months?
Not surprisingly, more AI, and a clamour to turn everything into agentic AI – we’re seeing the rise of agentic marketplaces. That’s in the budget and the plan for many companies I talk to.
Whether it is realized is a different question. Enterprises are unlikely to get everything they want from agentic in 2026, but they will see ROI and stay committed. Agentic will be with us for quite a while, so it’s worth committing to, even at the low level of maturity that it’s at today.
But beware of the agentic that you adopt, if you’re not creating it. Is it truly AI, or AI-washed? Agentic AI is supposed to use AI, not BI and analytics. Some of the stuff you can see on an agentic AI marketplace is closer to good old analytics. Nothing wrong with that, but the possibilities are so much higher with agentic approaches.
What’s the broader impact, for example, on applications teams or leadership?
So many application teams have been managing the data they need for their application over the years, and not doing a good job of it; there’s no leverage for the enterprise. Data should be managed by data professionals and provided as a core service to the enterprise.
Meanwhile, I’m seeing continued pressure on cost and TCO, including for AI. It’s gone beyond exploratory and “we’ve gotta do it, so let’s do it,” to “How is this driving ROI?” You have to have the ROI skills to show that. So in 2026, leadership is more necessary than ever – someone who can see the big and small picture at the same time and drive efforts towards the enterprise’s goals.
So, business-focused technology leadership. Because AI accelerates everything, it either accelerates you over the cliff or it accelerates you up the mountain.
Yes, more than ever.
Thanks, William!
Golden Path
Posts published in Golden Path.
2025
Posts from 2025.
May 2025
05-21 – Video: Software Is Ephemeral
Video: Software Is Ephemeral
Software operates without constraints unless we create some: it’s like an infinite, ephemeral cocker spaniel. As we enter a phase of AI-fuelled innovation, let’s think ahead about what we are creating.
One for a morning coffee. https://youtu.be/SK21YeVjIHs
05-29 – On LLMs, Push Joints, and embracing the change
On LLMs, Push Joints, and embracing the change
I tried to do some plumbing once. It’s a cautionary tale about capability blind spots and the all-permeating power of water. Eventually, the repairs to the walls and floor far exceeded the money saved by doing it myself.
I remember talking to one of the plumbers who came. He didn’t seem too perturbed about my lack of skills, I had the impression he had seen it before. Almost as an aside, he told me briefly about his dad, also a plumber, and push joints.
When push joints came in, so he said, many plumbers thought that was the end. Who’d need welding or pipe bending, which were after all the hardest parts of the job? The plumber’s dad stared gloomily at his own, inevitable obsolescence.
Or so it seemed. Fast forward several decades, and it appears we still have plumbers. Not just to repair the damage caused by gung-ho pseudotradies who think B&Q holds all the answers.
You see where I’m going with this. We are currently, absolutely, and without a doubt, in the middle of a major change in how technology is built, delivered and used.
Large language models are not even done. Models are improving, memory capacities are growing, capabilities are extending. And beyond that, all that was machine learning has had a major shot in the arm.
Technology doesn’t come more disruptive than this. And yes, there is every indication that AI is going to take your job. But, we’ve been here before, several times. I know I have.
First, the bad news. The jobs that appeared stable now appear anything but. Nobody is immune, not even those in senior roles remunerated on the basis of their knowledge and experience.
We’ve been here before. Back in the Nineties, I watched as people whose careers were built around maintaining proprietary software systems saw them replaced by commercial, then open-source, stacks.
Later, we can all blame Tim Berners-Lee for instigating what became known as disintermediation. The web was astonishingly disruptive to supply chains, taking away livelihoods even as it created new opportunities.
In neither case, nor in legions of others, were people happy. We can focus on that, or on the fact that employment is still a thing, despite Seventies predictions suggesting we’d all be living lives of leisure by now.
We still need people to do things, just as we always did. Will they be the same things? No; but neither is being a farrier a good career move.
As my lovely colleague Darrel Kent once put it, technology either automates, or it augments. If your work is automatable, it will be automated—this was always true, it was just a question of timing.
However, where technology exists to augment… that’s where the good news kicks in. Some tasks, not least being an analyst, benefit from the immense distillation power of machine learning models.
Equally there are certain places AI can’t go, and maybe never will. My money’s on hairdressing and baristas. It’s also on the outer edges of complexity, which is fractal. Every time we achieve more, we create more problems to solve.
See also: cybersecurity, which funnily enough also follows laws of permeability. Hackers find the easiest path to the money, they don’t care which route or which vehicle. Plus, they will always be one step ahead.
See also: areas of risk which require discernment. Whilst computers may currently be doing a great job at serving up answers, we often need a human in the loop to determine if the answers are correct and appropriate. That was true when “computers” were people.
As for innovation… I’ve likened this to mice dancing in a moving train. However good the underlying platforms may be, however fast moving they are, and however small we are, we can still dance on top. We can always do more.
Which is the very nature of business. Fundamentally, businesses have to do more, or different, than the competition to survive. They have to differentiate, to have additional capabilities, to have their own spin.
In every future scenario, this will remain true. Businesses stand between supply and demand, and take their cut. The clue’s in the word “enterprise”, whether at an individual or corporate level.
Will there be new intermediaries? Yes. Will there be charlatans and sharks? Yes. Will, again, there be jobs lost? Yes, but there will also be jobs we’ve never even heard of, alongside augmented versions of old jobs, alongside ones we still want people to do.
As a final point, the one thing AI still lacks is a meta-cortex, that is, the ability to review a bunch of disparate factors and discern an appropriate path. Just as complexity is fractal downwards, context is fractal upwards: it grows exponentially.
Perhaps “models of models” will exist in the future, but we are a long way off. See also, Artificial General Intelligence (AGI). Whatever this is, it is not that. As my other lovely colleague Whit Walters has said, LLMs are currently parlour games, not AGI.
The same goes for AI’s use in creativity. I fundamentally believe art relates to the communication between two souls. Whilst AI music, or an AI picture, might appear similar to something a human can do, it lacks this fundamental characteristic.
AI doesn’t understand history or the ramifications of its actions: it can’t walk in the shoes of a human, nor would it understand why without a prompt. It can’t make leaps of insight - it won’t sit, troubled, pondering an issue until it arrives at a sudden epiphany.
Perhaps one day it will be able to do this, and we will have our lives of leisure… I have an inkling that neither the best nor the worst intentions of our peers will allow for this scenario.
For now, it’s about embracing the change, potentially passing through the fires of a career move (sorry), but in recognition that, indeed, we are mice and we can dance. If push joints weren’t going to destroy us, we’ll be OK with AI.
05-30 – On Continuous Acceptance
On Continuous Acceptance
I’m sitting here feeling frustrated about platform engineering, pondering whether I should make a video about it - which would require standing up, as well as having something concrete to say (which I’m not sure about).
What’s platform engineering, you say? Here’s a fair definition from Phil Taylor:
“Platform engineering aims to automate everything in the software delivery pipeline. It attempts to standardize and automate the deployment of applications across private and public clouds.”
Sounds OK, so, what’s my problem with it? I could quibble about the stated purpose - I’d perhaps put it in terms of end-to-end delivery based on supporting services and tools (“platform engineering is about engineering a platform…”) but that’s one for over a beverage.
It’s not that. It’s more… why do we need a term to describe what could be called “doing things in the right way?” A lack of standardisation and automation needs to be met by exactly these things, for sure, but that won’t address the root problem - how we let things be so non-standard and fragmented in the first place.
The answer is, reasons. And if we don’t address these, platform engineering will only ever be a band-aid, or a partial response which helps in the short term.
Want some reasons? A lack of controls around tools adoption, because controls stymie innovation. Vendor freemium models that play to our cuckoo-like desire to try something new. Short-termism in the name of progress, on all sides. And so on.
I don’t have a problem with these things, but if we accept them, we need practices that take them as fact. Platform engineering says, “you’ve created a monster, let’s make it better so this time next year, all will be neat and tidy.” Guess what, it won’t.
That’s my second problem. By creating a name, we give it a lifecycle of its own. Platform engineering moves from theme to trend, and people (me included) pile on with articles saying, “Here’s how to do it, but it’s not a magic bullet,” or “What people get wrong,” or (my favourite) “Platform engineering is dead, long live platform engineering.”
Having created a space, the conversations exist within that… until we move to the next one. But as per the above, the space could only exist as an aspirational goal. See also DevOps, Agile, continuous anything, and indeed, every maturity model level five.
The only continuous things are change, human behaviour, and complexity resulting from both. Neither good nor bad things, just things.
The bad thing is our lack of acceptance, or if I may, our denial of reality. No sunlit upland of software delivery exists where everything just works; there never was one. Perhaps continuous acceptance might be a place to start, followed by continuous allocation of resources to review and refactor the tools, libraries and services upon which we rely.
Now, that’s a definition I can get behind.
Idg Connect
Posts published in Idg Connect.
2014
Posts from 2014.
January 2014
01-01 – Does Berners Lee Not Go Far Enough?
Does Berners Lee Not Go Far Enough?
Does Berners-Lee Not Go Far Enough?
Jon Collins comments on Sir Tim Berners-Lee’s recent call for an internet “bill of rights”.
Mar 12, 2014 12:00 pm PDT
A straw poll of 130 IT and business professionals we ran earlier this year showed that 83% of you believe “it is time for an international data protection and privacy governance framework”. Today Jon Collins comments on Sir Tim Berners-Lee’s call for an internet “bill of rights”.
In reference to the kind of governance we need in this post-Snowden world, it’s not particularly surprising that the Magna Carta has been cited yet again - this time by inventor of the World Wide Web, Sir Tim Berners-Lee. As we all scrabble around for historical points of reference, it is inevitable that we end up with the document seen as the root of all power-capping legislation. While it was of only limited scope in itself, the Magna Carta is the skyhook of rights, the touchstone of how we, as individuals, can curtail the tendencies of both state and corporation to invade the privacy of citizens and consumers alike.
The appetite for a “bill of rights” is strong (as illustrated by our own poll). But do Berners-Lee’s proposals go far enough? His Web-we-want initiative, which celebrates “the free, open, universal Web”, presents a laudable remit, covering net neutrality, data privacy and open standards. The Web needs to be free, goes the Web-we-want argument, and very sound it is too.
Trouble is, there is more to the future than communications. Consider, for example, the continued explosion in use of CCTV - is this part of the Web? How about the fact that Ukrainian citizens were sent warnings by text, when their mobile phones were in the vicinity of demonstrations - is SMS part of the Web as well? Advocates might argue yes on both counts, but others will disagree and even contemplating such a debate takes valuable time away from the main issues.
Fundamentally the issue is about the data, as I presented back in January (having arrived at my own Magna Carta epiphany). Data can emerge from many quarters and be distributed across a range of transports. Just as the UK Data Protection Act struggles to differentiate between whether a printout needs to be controlled in the same way as an online record, so we need to recognise that the message is of greater importance than any one medium.
Of course, without the Web we might not be at this juncture right now. But technology was already starting to encroach upon personal freedoms before it, and will continue to do so even if it is superseded. Our data is not something ‘out there’ that needs to be protected separately - it is a fundamental part of all of us, and needs to be treated accordingly.
01-01 – In 2014 And Beyond The Best Is Yet To Come
In 2014 And Beyond The Best Is Yet To Come
In 2014 and Beyond the Best is Yet to Come
Forecasting is a tricky business but we are only beginning to exploit technology
Jan 8, 2014 12:30 pm PST
It was Danish physicist Niels Bohr who coined the phrase, “Prediction is very difficult, particularly about the future.” In technology circles, especially, predictions often sit at the bottom of the heap, a few tiers down from lies, damn lies, statistics and ICT vendor positioning statements.
Fortunately, we have a few quite solid premises upon which to construct any views of the future. Not least, that Moore’s Law continues to play out. The principle, concerning the rate of increase of transistors on a single silicon chip, has a lot to answer for. While some of the spin-off factors have started to slow (for example, clock speeds are reaching their maximum), senior wonks still give us another 10 years of shrinkage. Given that we’re still acting as if computer power is an infinite resource, we’ll probably benefit from another decade or so of efficiency improvements following that point.
The main consequence is that the technology ‘market’ will continue to commoditise. Tech companies have a window of opportunity to make hay from new capabilities before they are rolled into the substrate, a reality that has seen the demise of many a mega-corporation.
As corporate and desktop computing and servers have commoditised, margins have all but vanished. The phenomenon known as cloud computing could also be described as, “What happens when processing becomes no more than a commodity?” Existing providers will turn towards higher-value cloud services such as analytical farms as their business models become increasingly tight.
The same wave has led to the rise of the mobile device and the inaccurately-stated “death of the PC”. While we’re not seeing desktop computers heading for the landfill, they have become good enough for most people’s needs. Cue the (equally inaccurate) death of the tablet when people find little to be gained from upgrading in a year or so.
Even as existing technologies are subsumed, there remains plenty to be gained from extending technology’s reach further into our business and personal lives. We shall see a groundswell of smart monitors and control devices integrating with cloud-based back-ends to deliver the Internet of Things. While I didn’t coin the term ‘Bring Your Own Thing’, I wish I had.
Does this mean we will all be living in smart cities in the immediate future? Realistically, no. While we might see one-off examples, the costs would be too high, the benefits marginal and the downsides too great. We can look instead towards more specific examples. My money’s on municipal waste disposal, home automation and industry-specific use cases.
While commoditisation gives us the raw materials with which to architect our technological future, we also require the ability to control resources and service flows as a whole. It’s not hard to see how the wave of interest in software-defined ‘everything’ presents the latest iteration in our efforts to do so.
We are, however, many years from what we might term ‘the orchestration singularity’: the moment at which computers, storage, networking and other resources can manage themselves with minimal human intervention. In the meantime, our ability to make the best use of technology will continue to be constrained.
Speaking of constraints, none greater exists than the network which, despite having the inordinate ability to shift data around its core, lags behind our abilities to create or process it. Significant advances will come from mobile; not least we can expect 4G LTE to make a real difference by operating at double WiFi speeds, reducing latency with minimal ‘hops’.
It would be a mistake, however, to believe that the main benefit should be faster movement of large data volumes. As we recognise ongoing limitations on data movement, we will innovate around open data and open interfaces to develop smarter ways to store and access information without requiring raw data to be transmitted.
What does all this mean to business, culture and society? Technology will create new opportunities and challenges for all businesses, but for some more than others. Those with strong ties to the physical world - utilities, retail/supply, healthcare and manufacturing - need to prepare for considerably higher volumes of data due to the groundswell of smart, while content and media industries face re-intermediation as a result of next-generation delivery platforms.
IT departments will have their work cut out, as ever, keeping the lights on for existing strata of technology while being expected to conjure innovation and deliver it to decidedly un-agile lines of business. Meanwhile, lines of business will move from digitisation to re-delegation as sales, marketing and other technologies become too much of an overhead to manage.
Outside of the business world, today’s technology advances inevitably result in whole new methods of surveillance — computing is a two-edged sword, which means we need to think hard about the kind of world we want to live in. As many have highlighted, last year’s Edward Snowden revelations indicate not a general failure of government, but a failure of governance.
Ultimately, what happens over the next five years will show that we have no more than scratched the surface of technology’s potential. As time passes, we shall come to terms with the fact that the data-rich world we are creating is akin to stumbling upon a new dimension — driving the requirement for, and acceptance of, virtual identities and online representations.
Niels Bohr died in 1962 but not before he was instrumental in the creation of CERN, that august institution which curated the ‘invention’ of the World Wide Web. While it is worth remaining sanguine about technology’s potential, keep in mind that the simplest ideas, which go on to take the world by storm, are often the hardest to predict. The best, with all its ramifications, is yet to come.
Jon Collins is principal advisor at Inter Orbis, the company he launched to watch technological developments. With over 20 years’ background in the technology industry, Jon has a deep understanding and practical experience of the applications of ICT.
2015
Posts from 2015.
January 2015
01-01 – I'M Under Surveillance — By My Watch
I’M Under Surveillance — By My Watch
I’m under surveillance — by my watch
Smart watches make a haven for hypochondriacs and could overburden healthcare systems
May 12, 2015 2:30 am PDT
Psst. Quiet, don’t say anything. I’m being followed, you see. My every move, my habits, even when I stand up, sit down or go to sleep. They know everything about me.
I’ve been under surveillance for a while now. I don’t know quite when I was sure, I’ve had a nagging suspicion for some time. But I do know it really started a month ago when I put on that so-called fitness watch, a seemingly innocuous Basis Peak, loaned to me by Intel, so I could see what the fuss was all about.
Built in is a heart monitor, shining dual LEDs into my wrist and reading my pulse. Great, I thought. Before I knew it I was swiping the screen every now and then, just to see how well I was doing. Even sitting in a chair gave the opportunity to check my resting heart rate. 55bpm, not bad, I remember thinking to myself.
Every now and then I would sync to my phone, uploading data from the HRM, as well as accelerometers and a temperature gauge. I could unlock habits merely by keeping the process going - “Well done,” it said, “You’ve been wearing the Peak for 12 hours, three times a week! You’ve been walking more than 2,000 paces a day! You’ve burnt calories, got enough sleep!”
Then the emails started to come through. “Here’s your weekly sleep summary,” they announced. It was at that point I realised that I was being monitored — first and foremost by me. I was wandering alone in a wilderness of mirrors, each reflecting back some aspect of myself. My existence, writ large on a management console for my own perusal.
A couple of decades ago the future-facing Wim Wenders film ‘Until the End of the World’ incorporated a device that could read and play back our dreams. At first people saw it as smart, but then they realised just how addictive it could be. One character was lost forever to the screen, watching their deepest thoughts on repeated loop.
In Wim’s world, the devices were all-powerful but we have the cloud, that seemingly-infinite processing and storage space controlled by, well, whom? As we upload our data to cleverly-named service providers, we fail to consider who they are and what they might do with it. Which might have been fine for our shopping habits, but it should perhaps not be the case for our vital signs.
It’s the same dilemma that we face for our location data, as accumulated by mobile phone companies: this one has been running for years but we are still no closer to a solution (and meanwhile, the information continues to be stockpiled). However, fitness information takes this to a new level — for example, could there be an additional insurance risk of having a higher than normal heart rate?
In my last article I suggested that it might be a good thing to health-monitor train drivers — and I am a staunch advocate of detailed monitoring of armed forces in training, to avoid pushing young people beyond their capabilities. As commenter Brock McLellan noted however, we need to be careful about how we do so. “I would like to add a 6th point to your plan. Communicate only when necessary!”
I’ve said before that data about us should be treated in exactly the same way as ourselves; whatever is right for the physical, should also be considered for the virtual. For example, if my data is of value to others, I may be prepared to enter into a value exchange (as I do with my Tesco Clubcard) — but this is not the case for so much else of it, which is bought, sold and raided with impunity.
To the point: smart watches take this debate to the next level — it is as if we have been softened up by technology to such an extent that we now see it as acceptable, without any T’s and C’s, to give away data about our core functions. Perhaps this is acceptable to the majority; perhaps it is not; but in either case, we should be thinking quite deeply about the ramifications of doing so.
One such impact might be a misalignment of our own expectations. I remember reading an interview with a doctor in the US, shortly after Apple released its HealthKit framework. “Where, precisely, am I going to have time to look at the ongoing health statuses of my patients?” was the gist.
Our hard-pressed medical services are already bowing under the weight of the so-called ‘walking well’. We are creating a haven for hypochondriacs, as people know more about their health and depend on the powers of Google — in which the more extreme cases bubble to the top — for initial diagnosis. It’s a dilemma, undoubtedly, as we are encouraged to think more pre-emptively about our health.
A week ago I took off the Basis Peak, to see what happened — I have found myself feeling slightly bereft but what with Fitbit, Apple and a host of others getting in on the game I doubt this will remain the case for long. Frankly, I am self-absorbed enough to enjoy perusing such data, for better or worse. This doesn’t change the fact that I may already be creating a petard of data upon which I might be hoist.
Over the next decade we will be able to know more about ourselves and others than we ever could. Right now we have a collectively, fantastically lax attitude to how we hand over our data to unknown parties, but the consequences will eventually need to be dealt with, for better or worse.
01-01 – Looking Beyond Big Data
Looking Beyond Big Data
Looking beyond Big Data: Are we approaching the death of hypocrisy?
As computers get better at understanding meaning, we will no longer be able to fool others – or ourselves
Feb 2, 2015 6:00 am PST
“Come, let us build ourselves a city, and a tower whose top is in the heavens.” The biblical Tower of Babel story is often used to illustrate Man’s hubris, the idea that we humans have of being better than we are. Even if notions of arrogance aforethought (and a conspicuously vindictive God) are taken out of the mix, the tale still reflects on our inability to manage complexity: the bigger things get, the harder they are to deal with. Cross a certain threshold and the whole thing comes crashing down.
Sounds familiar, tech people? The original strategists behind the ill-fated tower might not have felt completely out of place in present day, large-scale information management implementations. Technologies relating to analytics, search, indexing and so on have had a nasty habit of delivering poor results in the enterprise; while we generally no longer accept the notion of a callous super-being to explain away our failures, we still buy into the mantras that the next wave of tech will create new opportunities, increase efficiency and so on. And thus, the cycle continues.
The latest darling is a small, cuddly elephant called Hadoop. We are told by those who know that 2015 will be the year of ‘Hadooponomics’ - I kid ye not - as the open-source data management platform goes mainstream and the skills shortage disappears. Allegedly.
Behind the hype, progress is both more sedate and more deeply profound. Our ability to process information continues down two paths - the first of which tries to build an absolute (so-called ‘frequentist’) picture of what is going on, and the second of which is more relativistic. Mainstream analytics tends towards the former model, based on algorithms which crunch data exhaustively and generate statistically valid results. Nothing wrong with that, I hear you say.
As processing power increases, so can larger data sets reveal bigger answers at a viable cost. When I caught up with French information management guru Augustin Huret a couple of years ago, just after he had sold his Hypercube analytics platform to consulting firm BearingPoint, he explained how the economics had changed - a complex analytical task (roughly 15x10^9 floating point operations, or FLOPs) would, over a decade ago, have required three months of access to a Cray supercomputer. In the intervening period, the task length has been reduced to days, then hours, and can benefit from much cheaper, GPU-based hardware.
This point is further emphasised by the fact that the algorithms Augustin was working on had first been worked out by his father in the 1970s - long before the existence of computers that could have handled the data processing requirements needed. “The algorithms have become much more accessible for a wider range of possibilities,” he told me - such as identifying and minimising the causes of Malaria. In 2009 he worked with the French Institut Pasteur to conduct an investigation into Malaria transmission, working with data from some 47,000 events across an 11-year period. Using 34 different potential variables, the study was able to identify the most likely target group: children under the age of five having type AA Haemoglobin and fewer than 10 episodes of the Plasmodium Malariae infection.
The race is on: researchers and scientists, governments and corporations, media companies and lobby groups, fraudsters and terrorists are working out how to reveal similar needles hidden in the information haystack. Consulting firm McKinsey estimates that Western European economies could save more than €100bn ($118bn) making use of Big Data to support government decision-making.
Even as we become able to handle larger pools of data, we will always be behind the curve. Data sets are themselves expanding, given our propensity to create new information (to the extent, even, that we would run out of storage by 2007 according to IDC - relax, it didn’t happen). This issue is writ large in the Internet of Things - a.k.a. the propensity of Moore’s Law to spawn smaller, cheaper, lower-power devices that are able to generate information. Should we add sensors to our garage doors and vacuum cleaners, hospital beds and vehicles, we will inevitably increase the amount of information we create - Cisco estimates this at a fourfold increase in the five years from 2013, to reach over 400 ZettaBytes - that’s 10^21 bytes.
In addition to the fact that we will never be able to process all that we generate, exhaustive approaches to analytics still tend to require human intervention - for example to scope the data sets involved, to derive meaning from the results and then, potentially, to hone the questions being asked. “Storytelling will be the hot new job in analytics,” says independent commentator Gil Press in his 2015 predictions. For example, Hypercube was used to exhaustively analyse the data sets of an ophthalmic (glasses) retailer - store locations, transaction histories, you name it. The outputs indicated a strong and direct correlation between the amount of shelf space allocated to children, and the quantity of spectacles sold to adults. The very human interpretation on this finding: kids like to try on glasses, and the more time they spend doing so, the more likely are their parents to buy.
As analytical problems have become knottier, attention has turned towards relativistic approaches - that is, models which do not require the whole picture to derive inference. Enter non-conformist minister Thomas Bayes, who first came up with such models in the 18th Century. Bayes’ theorem, which works on the basis of thinking of an initial value and then improving upon it (rather than trying to calculate an absolute value from scratch) celebrated its 250th anniversary in 2013.
Long before this age of electronic data interchange, Bayes’ theorems were being denigrated by scientists - indeed, they continue to be. “The 20th century was predominantly frequentist,” remarks Bradley Efron, professor of Statistics at Stanford University. The reason was, and remains, simple - as long as data sets exist that can be analysed using more scientific means, the use of less scientific means have traditionally been seen as inferior. The advent of technology has in some way forced the hands of the traditionalists, says security data scientist Russell Cameron Thomas: “Because of Big Data and the associated problems people are trying to solve now, pragmatics matter more than philosophical correctness.”
The Reverend Bayes can rightly be seen as the grandfather of companies such as Google and Autonomy, the latter sold to HP for $11bn (an acquisition which is still in dispute). “Bayesian inference is an acceptance that the world is probabilistic,” says Mike Lynch, founder of Autonomy. “We all know this in our daily lives. If you drive a car round a bend, you don’t actually know if there is going to be a brick wall around the corner and you are going to die, but you take a probabilistic estimate that there isn’t.”
Through their relativistic nature, Bayesian models are more attuned to looking for interpretations behind data, conclusions which are fed back to enable better interpretations to be made. A good example is how Google’s search term database has been used to track the spread of Influenza - by connecting the fact people are looking for information about the flu, and their locations, with the reasonable assumption that an incidence of the illness has triggered the search. While traditional analytical approaches may constantly lag behind the curve, Bayesian inference permits analysis that is very much in the ‘now’ because - as with this example - it enables quite substantial leaps of interpretation to be derived from relatively small slices of data, quickly.
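To make the “start with an estimate, then improve it” idea concrete, here is a toy calculation in the spirit of the flu example; every number is invented purely for illustration.

```python
prior = 0.02                   # initial estimate: 2% of people locally have flu
p_search_given_flu = 0.60      # chance someone with flu searches for symptoms
p_search_given_healthy = 0.05  # chance a healthy person makes the same search

# Bayes' theorem: P(flu | search) = P(search | flu) * P(flu) / P(search)
p_search = (p_search_given_flu * prior
            + p_search_given_healthy * (1 - prior))
posterior = p_search_given_flu * prior / p_search

print(round(posterior, 3))     # ~0.197: one search lifts the estimate from 2% to ~20%
```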
Lynch believes we are on the threshold of computers achieving a breakthrough with such interpretations. “We’re actually just crossing a threshold - the algorithms have reached a point where they can deal with the complexity and are able to solve a whole series of new problems. They’ve got to the point where they have enabled one which is much less understood - the ability of machines to understand meaning.” This is not to downplay the usefulness of exhaustive approaches - how else would we know that vegetarians are less likely to miss their flights - but we will [never be able to analyse the universe exhaustively](http://www.computerworld.com/article/2863740/quantum-mystery-an-underestimate-of-uncertainty.html), molecule by molecule, however much we improve Heisenberg’s ability to measure.
Top down is as important as bottom up, and just as science is now accepting the importance of both frequentist and Bayesian models, so can the rest of us. The consequences may well be profound - to quote Mike Lynch: “We are on the precipice of an explosive change which is going to completely change all of our institutions, our values, our views of who we are.” Will this necessarily be a bad thing? It is difficult to say, but we can quote Kranzberg’s first law of technology (and the best possible illustration of the weakness in Google’s “do no evil” mantra) - “Technology is neither good nor bad; nor is it neutral.” To be fair, we could say the same about kitchen knives as we can CCTV.
We are, potentially, only scratching the surface of what technology can do for, and indeed to, us. The “human layer” still in play in traditional analytics is actually a fundamental part of how we think and work - we are used to being able to take a number of information sources and derive our own interpretations from them. Whether or not they are correct. We see this as much in the interpretation of unemployment and immigration figures as consumer decision making, industry analysis, pseudoscience and science itself - ask Ben Goldacre, a maestro at unpicking poorly planned inferences.
But what if such data was already, automatically and unequivocally interpreted on our behalf? What if immigration could be proven without doubt to be a very good, or a very bad thing? Would we be prepared to accept a computer output which told us, in no uncertain terms, that the best option was to go to war? Meanwhile, at a personal level, how will we respond to all of our personal actions being objectively analysed and questioned? “In an age of perfect information… we are going to have to deal with a fundamental human trait, which is hypocrisy,” says Lynch. While we may still be able to wangle ways to be dishonest with each other, it will become increasingly hard to fool ourselves.
The fact is, we may not have to wait a long time for these questions to be answered. The ability for computers to think, a question for another time, may be further off than some schools believe. But the ability for computers to really question our own thinking, to undermine our ability to misuse interpretation to our own ends, may be just round the corner. To finish with a lyric from Mumford and Sons’ song, Babel [video]: “Cause I know my weakness, know my voice; And I’ll believe in grace and choice; And I know perhaps my heart is fast; But I’ll be born without a mask.”
Jon Collins is principal advisor at Inter Orbis
01-01 – Wanted A Model For Startup Success That Doesn’t Rely On Alchemy
Wanted A Model For Startup Success That Doesn’t Rely On Alchemy
Wanted: A model for startup success that doesn’t rely on alchemy
We haven’t come far in establishing a formula for modern technological business success
Feb 26, 2015 12:30 pm PST
For I am troubled, I must complain, that even Eminent Writers, both Physitians and Philosophers, whom I can easily name, if it be requir’d, have of late suffer’d themselves to be so far impos’d upon, as to Publish and Build upon Chymical Experiments, which questionless they never try’d; for if they had, they would, as well as I, have found them not to be true. — Robert Boyle, 1661
Generations of schoolchildren are familiar with Boyle’s Law, that is, how the pressure of a gas is inversely proportional to its volume. The less volume available, the greater pressure there will be — as also experienced by anyone who has tried to blow up a hot-water bottle.
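For reference (and as a minimal formal statement, assuming a fixed quantity of gas held at constant temperature), the law can be written as:

```latex
% Boyle's law: at constant temperature, pressure is inversely proportional to volume
P \propto \frac{1}{V} \qquad \text{equivalently} \qquad P_1 V_1 = P_2 V_2
```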
In 1660 Robert Boyle co-founded the Royal Society with a purpose to further the scientific method — “We encourage philosophical studies, especially those which by actual experiments attempt either to shape out a new philosophy or to perfect the old.” Less familiar to schoolchildren, though recognised as a seminal work, was his 1661 exposé of alchemy, ‘The Sceptical Chymist’, which has positioned him quite rightly as the father of modern chemistry.
Some examples of technology company success appear to have that alchemic quality of turning lead to gold. Instagram, for example, which was launched in 2010 and sold less than two years later for a reputed $1bn. Or think of the Chinese e-commerce site Alibaba, which last year raised some $25bn.
Who wouldn’t look at examples such as these and try to replicate their success? It’s clearly not easy, and sometimes downright strange to understand how some great ideas work, while others flounder. For example, why should Vodafone’s M-Pesa mobile payments system have been so successful in Kenya and Tanzania, whereas similar schemes have not been widely adopted in Western countries, despite years of trying? For that matter, why aren’t we all using Bitcoin? Why was Facebook so successful when its forerunners — Friends Reunited, MySpace and the like — were not?
Even as innovation cycles shorten, we appear none the wiser about where the next successes will come from. The models most widely used to explain mass adoption tend to see the journey from the point of view of the product, service or trend. Geoffrey Moore’s Crossing the Chasm, for example, or Gartner’s Hype Cycle work on the basis that all good things will eventually emerge, once the bad stuff has been filtered out.
Such models worked well when everyone was trying to do much the same thing, and when most technology was corporate — indeed, they still have validity in big-bucks enterprise procurements. However they do little to explain small ticket, big-impact phenomena such as mobile apps, social networking platforms or cloud-based services. Right now the Internet of Things is top of Gartner’s hype list, but we are still brainstorming what that means.
This absence of a magic formula is frustrating for anyone wanting to make it big. Experience pays off, as numerous articles on the subject express. But while the general themes — relentless focus, resource management, the right relationships — may appear to be an art, they could have more to do with science than we think.
Robert Boyle was one of the first to document the chemical process of synthesis — in which a substrate of compounds react to form something new and, sometimes, remarkable. In the decades that followed and with the support of organisations such as the Royal Society, his contemporaries and successors tried every possible combination of options, often risking their own health (or, in the case of Carl Scheele, life) in the process. Even so, progress was slow; indeed, we are still making new discoveries today.
While the risks to life and limb may be smaller, tech company founders exhibit similar behaviours to the scientists of old; it is no coincidence that technology parks are attaching to universities in the same way that pharmaceutical companies are funding campus environments for scientific research.
For tech startups, the good news is that the substrate is expanding all the time. Compute, storage and networking costs are reducing through a combination of Moore’s law and supply/demand economics, while investment in skills, availability of open standards, freemium platforms and accepted norms make it easier than ever to create something new, enabling a brainstorming approach with hindsight as the arbiter of whether or not it is a good idea.
Often it isn’t: in a recent poll of founders, the greatest reason cited as to why startups fail was a lack of market need. All the same, the approach pervades.
“You won’t have all the answers about the space, but you should have an educated and defensible opinion about it… [which] is what you bet your company on: ‘Yes, people want to share disappearing photos.’ ‘Yes, people want to crash on other people’s couches instead of a hotel room.’,” says startup founder, Geoffrey Woo. But who knew in advance that Snapchat or Airbnb would be, or remain, a success?
The fact is that even venture capitalists — who should be good at this, if anyone — have not become any better at spotting future success. And the elephants’ graveyard of startups is only a blink away. What is a startup, asks Closed Club, set up to analyse why startups fail, but “a series of experiments, from conception of an idea to researching competitors, running Adwords campaigns, doing A/B tests…”
Well, indeed.
In chemistry synthesis terms, the try-it-and-see-what-sticks model for technology innovation is inherently endothermic in that it draws, rather than releases energy. The term ‘burn rate’ was adopted during the dot-com boom (even spawning a card game) to describe the uneasy relationship between sometimes cautious capital supply and hungry startup demand, with many companies floundering and even failing when on the brink of success.
Energy does not have to be linked to capital alone, but is better considered in terms of positive value, either real or perceived. Facebook’s growth, for example, is more linked to how it tapped into a latent need — the village gossip post — and used this to demonstrate its worth to advertisers. Amazon’s continued reputation as a loss maker has done nothing to dampen investor enthusiasm or quell market fears about its voracious appetite, given how people keep using it. And the adoption of Google and Skype [now owned by Microsoft] as verbs demonstrates an old tactic which has assured a stable future for both.
This links to a common tactic among larger companies: the ‘embrace, extend and extinguish’ technique, originally honed by Microsoft in the 1990s, is one of many ways to ensure both new entrants and established competitors are starved of energy. Another is the promotion of open source equivalents to drain resources from established competition: just as IBM did against Microsoft and Sun Microsystems in the early Noughties, so Google is doing with Android to fend off Apple. Attempted strangulation, starvation from energy-giving oxygen, is recognised as a valid business strategy.
The chemistry set analogy bears a number of additional comparisons. The reaction rate, for example, which can depend on latent temperature, pressure and so on - we can thank Robert Boyle for this understanding. So, just as positive value input can increase temperature, so can latent need and a good marketing campaign positively catalyse pressure. Indeed, crowdfunding models operate on both axes, driving demand while increasing available resources (and it is without irony that crowdfunding sites themselves benefit from the same models).
The ultimate goal, for any startup, is that its innovation reaches a critical point — that is, where the position changes from attempting to gain a foothold with a product or service, to it achieving ‘escape velocity’, in much the same way that a liquid becomes a gas, which can then spread through diffusion. Reaching such scale may require a level of industrialisation: “You’re committing not just to starting a company, but to starting a fast-growing one,” says Paul Graham, co-founder of Y-Combinator. “Understanding growth is what starting a startup consists of.”
The chemistry-based analogy is not perfect, none is [YouTube video clip]. However, it does go a long way towards explaining why some segments struggle with technology adoption, (such as UK healthcare, being “a late and slow adopter”, according to The Healthcare Industries Task Force) – while the desire to make use of new tech may be there, a critical level of energy is not.
Wouldn’t we all want to be the next Facebook? Well, maybe we can be. But not by leaving success to the general superstitions of the time — those days should be behind us. Instead, we should turn our attention to applying a more scientific method to how we synthesise new capabilities, based on accurate reporting of resources, energy and other, highly measurable factors. Indeed, given how our abilities to measure such things are themselves extending, too many theoretically smart startups remain no better off than the cobbler’s children.
In 1737, in Leiden in Holland, Abraham Kaau gave one of the last recorded speeches about the dubious nature of alchemy. By this point, some 75 years after Robert Boyle’s ministrations, he was largely preaching to the converted. We can only hope we do not have to wait a similar amount of time before we dispense with less scientific approaches to the creation and adoption of innovative tech.
01-01 – Why Aren’t Businesses Building Impossible Products?
Why Aren’t Businesses Building Impossible Products?
Why aren’t businesses building impossible products?
Agility is not enough — companies should be addressing the opportunities of the near future
Aug 14, 2015 5:00 am PDT
When the Human Genome Project was first initiated in 1990, its budget was set at a staggering $3bn and the resulting analysis took over four years. Just over two decades later, a device costing just $50,000 was used, aptly, to sequence Intel co-founder Gordon Moore’s DNA in a matter of hours. This time last year, the costs had dropped to $1,000.
How can businesses ever keep up, or even hope to become more than also-rans? While the information revolution is having an undeniably profound impact on how we work, rest and play, its continued progress is not that hard to understand.
Technology, in all its glory, is frequently a red herring; while information can also become complex, it offers a far more straightforward and tangible starting point. Please bear with me as I run through some basics.
Information exists to inform. We can be entertained or educated, we can use it as the basis of making our minds up both within and outside business. It can also inform processes and drive equipment, opening the door to automation.
So far so good, but where do we get it? Information is generally created by capturing, then digitising and processing data, transporting it from one place to another using a generally accepted binary format. Whether we write a message or make use of a sensor, we are adding to the mother of all analogue to digital converters.
As so often, the simplest concepts can have the broadest impact: no restriction has been placed on what data can be about, within the bounds of philosophical reason. Neither are restrictions placed on the amount of data we can store, process or transport.
While the sky is the theoretical limit, in practice we cannot reach so high. Constraints of time, space, power and cost are directly responsible for the rate of progress we are experiencing.
First then, time. When we create information from data, we often experience a best-before time limit, beyond which it no longer makes sense to be informed.
This is as true for the screen taps that make a WhatsApp message as for a complex medical diagnosis. As so neatly illustrated by a jittery YouTube stream, we also have a threshold of tolerance for poor quality.
Closely aligned with timeliness (in the shape of Moore’s Law) are the eroding limitations of physical space and power. Smaller devices with longer battery lives inevitably have broader use, meaning even electric razors and prescription drugs can be computer-powered.
As we saw in the genome example, a consequence of these constraints is the cost associated with any specific use case or application. In turn, costs drive demand, which drives supply, the automation of the latter being subject to the same time, space, power and cost trade-offs.
In layman’s terms, as we use more electronics the electronics become cheaper and more usable. The resulting tendency is for initial innovations to become commodity items, as generations of proprietary IP are displaced by ‘open’ standards and software, each driving down costs still further.
Such gradually reducing constraints have guided the progression of the information revolution, and continue to set the scene for what is practical. Examples are literally everywhere, a backdrop to 50 years of progress, from the number of sensors in an engine to the fact we now carry mainframe-class processing in our pockets.
Compromises still have to be made, inevitably. We cannot “boil the ocean” and nor can we attach sensors to every molecule in the universe (not yet, anyway). What businesses can do, however, is recognise that the thresholds are falling — there is money to be made in doing so.
What was once prohibitively expensive can become, quite literally, as cheap as chips — in six, 12, 18 months’ time a whole number of things will become predictably possible and affordable. Business cases become feasible, price points achievable.
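To make that shifting horizon a little more concrete, here is a minimal sketch (in Python) of the sort of back-of-envelope projection involved; the 18-month halving period and the example costs are illustrative assumptions, not figures from this article:

```python
# Illustrative sketch only: project when a capability's unit cost falls below a
# target price, assuming a steady Moore's-law-style halving period. The figures
# (an 18-month halving time, a $50,000 starting cost, a $1,000 target) are
# assumptions chosen for the example, not data from the text.
import math

def months_until_affordable(current_cost, target_cost, halving_months=18):
    """Return how many months until cost drops to the target, given exponential decline."""
    if current_cost <= target_cost:
        return 0.0
    halvings_needed = math.log2(current_cost / target_cost)
    return halvings_needed * halving_months

if __name__ == "__main__":
    months = months_until_affordable(50_000, 1_000)
    print(f"Roughly {months:.0f} months until the $50,000 capability costs $1,000")
```

The point is not the precision of any such projection, but that the exercise can be done at all: the threshold moves on a broadly predictable schedule.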
So far, so obvious. What is downright weird, however, is that this shifting horizon of opportunity is rarely taken into account in product and service design. And this is the case even though product cycles often operate across a similar time period.
How could things be done differently? The alternative is to ask the impossible of technology — to design products and services as though they would be both practical and cost-effective, even though they are not today.
All businesses should be asking questions. What would I do if I had unlimited technology budget? What if I could engage with every one of my customers, directly, as individuals? What if I could scan all patients for all conditions on entry to the hospital, or even the surgery? What if I could talk to a holographic image of my team members, wherever they are? What if I could 3D print an entire car, or a whole different form of vehicle? On the other side of the world?
And they should not be stopping there but working on the answers, creating and prototyping designs and architectures that clearly will not work. Yet. Not only does this exercise spin up the flywheel of future opportunity, it also spawns a list of criteria that the organisation can act upon — such as the business models it needs to become a leader in the spaces that do not yet exist.
Perhaps a business will decide it does not want to operate in a certain area — but it may have value to add, for example in offering insurance policies for the driverless car or shared lawnmower maintenance and management.
Organisations large and small have the opportunity to gaze into the crystal ball — while we may not know everything that the future holds, we can have a fair stab at identifying what will become possible.
While this is obvious, it isn’t how product plans are defined — which is why inevitably faster-moving new companies steal the lunches of so many incumbents.
Startups aren’t any better at prediction; there are simply more of them, exploring options faster than any traditional company can by itself. How much of what we call ‘business agility’ is simply down to spending less time in meetings, and more just building stuff, I wonder. Even as businesses are judged on their ability to respond to today’s challenges, they are ignoring the opportunities staring them in the face. The biggest failure of any business will ultimately come down to a lack of imagination.
March 2015
03-02 – How Much Are Banks Risking On Mobile Financial Services
How Much Are Banks Risking On Mobile Financial Services
IDG Article: How much are banks risking on mobile financial services?
The term “mobile financial services” covers a broad church of money-related activities, from simple peer-to-peer transactions to more complex arrangements such as international transfers, loans or insurance. Despite such diversity, all models share the principle of operating our finances digitally from where we are, without needing to meet someone physically, say at a bank branch.
In sub-Saharan Africa, mobile finance has expanded in prevalence due to the relative paucity of other choices. If you’re living in a rural area and the options are to ping some money to your uncle on your mobile phone or walk two days to get to the nearest bank teller, which are you most likely to choose?
In addition, the notion of having a bank account is not so familiar among poorer populations that are more familiar with cash transactions. About 7% of EU citizens above 18 are ‘unbanked’ in industry parlance, compared to roughly 75% in Africa, so it is little surprise that Vodafone’s M-Pesa model — which expanded the idea of a pay-as-you-go phone card to a mobile money brand — took off as it did. Continued demand in Africa has led commentators such as BCG to suggest it offers a “$1.5 Billion Revenue Opportunity by 2019.”
In Western economies we are both blessed with being better off, and hampered by the legacy infrastructure that has evolved to support our financially oriented lifestyles. We are used to being able to drive to a bank branch, pay in a cheque, berate an assistant for the foibles of the company they represent, take cash from the hole in the wall.
And so we do: 67 million bank branch transactions still take place in the UK, even though we are being inefficient in the process. In consequence we have not been the greatest adopters of mobile finance. “The proportion of mobile phone users actively using mobile money services remains tiny,” say Ernst and Young.
This situation may not last however — simply because accessing bank branches is becoming less of an option. Between 2009 and 2013, about 20,000 (8%) of Europe’s branches were closed, and this trend continues. In the UK, it has been reported that half of all branches have been closed since 1989.
The ‘digital revolution’ is generally cited as the reason why such moves are necessary — people’s habits are changing and banks claim that subsidising unprofitable branches is a luxury they cannot afford. While it may be true that branches equate to 60 percent of retail banking costs, the other compelling aspect is that mobile service delivery offers additional ‘upside’ — when UK bank Lloyds reported that 200 branches (6%) would close back in October, this was on the back of a digitally driven jump in profits.
So should banks keep branches open regardless? Do we deserve a friendly face at our beck and call, simply because we want to store our money at one place rather than another? The question may be moot, given that banks show little intent to slow down their branch closure programs (despite UK business secretary Vince Cable’s best efforts).
Whatever their internal closure strategies, banks are already struggling to maintain any semblance of trustworthiness — they need to be careful not to play too fast and loose with the Western tendency to stick with the companies they know, as it could only be the albeit inefficient journey to a bank branch that is keeping customers loyal. As the European and African models converge, people on both continents will decide precisely what they are prepared to pay for, and from whom.
August 2015
08-01 – The End Of Certainty How Ashley Madison Changed It All
The End Of Certainty How Ashley Madison Changed It All
The end of certainty: How Ashley Madison changed it all
We still don’t know the full impact of the digital trails we leave behind us
A long time ago, I went through a vetting process for some work with the UK government. Understandably nervous, I asked a colleague what it was all about. “Just be honest,” he said to me. “They’re looking for ways that people can get to you. If you have no secrets, you have nothing to worry about.“
I took this at face value and all went well. But many people have things they would rather keep to themselves. For better, or worse.
At the same time, the internet. A capability never before experienced by humanity, the World Wide Web is still so new that we continue to learn the rules and behaviours associated with it. Even as we use the internet, we start to understand it. And, sometimes, we become unstuck.
Love affairs are possibly one of the oldest of our behaviours, their very nature filled with contradictions and a spectrum of moral judgements. For every person actually having an affair, many more might consider it. Whatever the rights and wrongs, rare would be the relationship that continued, start to finish, with absolute certainty, or without a glance elsewhere.
At the same time, the internet. A complex network of computer servers and storage devices, our every message and preference logged and stored for future reference. Even as we click on articles or comment on social media, we create an electronic trail for later analysis. We are yet to understand fully the potential opportunities for, or the dangers of, this accumulation of data.
A couple of years ago Autonomy founder Mike Lynch remarked, “In an age of perfect information… we’re going to have to deal with a fundamental human trait, hypocrisy.” That is, data transparency goes directly against human behaviours that rely on keeping secrets.
And then, Ashley Madison. The consequences are already starting to unfold — the first lawyers have been instructed, the first big names knocked off their pedestals. We already have conspiracists suggesting government involvement (and perhaps they are right).
While it is nowhere near the same scale as September 11, 2001 in terms of horror or tragedy, it bears similar traits. And there will be a very human cost. No doubt governments will capitalise on the situation to push their own cyber-agendas forward, in the name of citizen protection.
“How could people be that dumb,” some have written. But the fact is they were that dumb. They were complacent, they were ignorant. All that effort by security professionals over the years, all that rhetoric about personal privacy was written off as industry paranoia, or as a ploy to sell more products.
Neither is it over. Technology has not stopped evolving but our frameworks and framing mechanisms are falling ever further behind. Consider the fitness device being used in court as evidence, or the increasing use of facial recognition. Soon it will be impossible to deny a visit to a certain bar, or indeed say, “I was at home all the time.”
People might not be quite so dumb again. Rather, they will live their lives in a state of heightened fear, as to the consequences of their online actions. All that certainty, that complacency, is gone for ever.
2016
Posts from 2016.
November 2016
11-01 – Taking Baby Steps With Big Data
Taking Baby Steps With Big Data
IDG Connect - Taking baby steps with Big Data
01 Nov 2016
Amidst all this evangelism and hype (together with pop star examples of startups taking the world by storm) it’s sometimes worth assessing how things actually are, and why they are as they are. Which is what I am doing currently, following a day’s session getting the latest from HPE Software on its strategy and approach to Big Data and Information Management. In HPE’s world, this means how it deals with structured data analytics and unstructured data management respectively, with overlaps in between.
I’ve been monitoring the impact of technology for 15 years now, having spent a similar period of time working in IT. Call me a Kool-Aid drinker but I’m left with an overwhelming feeling that it really has had a profound effect. At the same time however, some things have stayed exactly the same. We have seen technology companies come from nowhere, take the world by storm and then abruptly vanish. For every Kodak there is an AltaVista, for every Blockbuster a Digital Equipment Corporation. And even as we are wowed by the Amazons and Ubers, nobody knows which will still be around in 10 years, with the rest no doubt acquired out of existence.
Truth and fiction in Big Data analytics
To wit: Big Data analytics, machine learning, artificial intelligence and all that clever stuff that’s going to rock our worlds, if it isn’t already. According to the rhetoric, we’re heading towards a moment in which decisions will be automated out of existence through the use of smart algorithms. But just how true is this? Of course someone has to present such a singular (if you’ll forgive the pun) view of the future. It gives everyone else something to triangulate against, as do any Luddite positions or disaster scenarios.
And as with any set of polarised perspectives, logic would suggest that the answer lies ‘somewhere in the middle’, which is where prediction gets a whole lot harder. Part of the challenge lies in the fact that we are seeing changes not only in how IT is delivered but also in the kinds of business that result. We will always need manufacturing, power generation, transportation, healthcare, haircuts and manicures and a whole bunch of other industries, products and services. But rare is the industry that is not worried about the effects of so-called ‘digital transformation’ right now.
Insurers are concerned about the threat of data-oriented companies to their core underwriting business; retailers are pushed to the edge by online-only companies; banks face the buzz of fintechs that exploit their core services to deliver far better customer experiences. Meanwhile, utilities are losing the fight to control the increasingly smart home, and traditional car companies seem only to flounder in the face of a smarter generation of vehicle manufacturers.
Where’s the truth? Should established companies seek to get more out of their vast pools of data, or would such exercises amount to fiddling while entire industries burn? These are tricky questions, and nobody has a monopoly on the answers. HPE Software is taking a pragmatic view: it believes at least part of the response lies in what the company is calling ‘augmented intelligence’, which is as much a manifesto as a glib marketing phrase. It is our intelligence being augmented, you see – technology exists to serve our needs as (therefore) smarter beings, who can then build upon the insights they are offered.
It’s about the information (and not the data), stupid
To understand better where things are going, I believe we can start from a reasonably solid foundation – that all companies are information companies. Indeed, they always were, ever since Joe the blacksmith developed his skills and knowledge about what shoe to put on what horse, and Freda learned how to discern a good from a bad payer.
Over recent decades, we have been generating data like it is going out of fashion, but so much of it is preventing us from being actually informed. We spend person-years of corporate time and millions of dollars trying to pull together disparate data sources, in the hope we might unlock the value that lies within. All companies thrive or survive based on the quality of the information they maintain about their customers, their back-office processes, their finances and supply chains. And therein lies the challenge, as this data-rich world is also, and increasingly, information-poor.
Speaking of Kodak, the digital camera analogy isn’t bad. We have gone from taking 24 carefully planned shots of a two-week holiday to snapping hundreds, or even thousands of photos, which we either painstakingly file over many hours or leave languishing on hard drives. It’s the same for insurers or retailers looking for the elusive single customer view. “If you can’t measure, you can’t manage” goes the adage, and rare is the company today that can successfully measure. If it can, chances are any such metrics will quickly be out of date.
Perhaps this issue will remain unresolved, at least for as long as we see generating increasing amounts of data as good, or inevitable. At the same time however, we can discern the characteristics of the ‘better-informed’ organisation. First, given the deluge of data faced by any organisation (large or small), just being able to make sense of it is already a good start. Success in this area amounts to basic hygiene factors, delivering capabilities in data integration, quality and structure. A failure in this area possibly also amounts to a breach of regulation, so it is where much attention is focused.
Second comes the ability to drive the organisation with whatever the information is saying – not just understanding the data, but also being able to conduct more detailed analytics and start to make more predictive decisions. It strikes me that this is where many organisations are struggling with the wrong mindset, as they see information as something ‘out there’ which should be consulted on occasion, like the oracle (sic) up the mountain. The fact is however, that the oracle has come down the mountain, available for consultation at any point.
This gives us an additional hygiene factor, based on a choice: do you use data in your decision making, or do you still hit and hope, based on what you believe might work? As HPE ‘Distinguished Technologist’ Chuck Bear noted:
“Look at two gaming companies with the same idea: the one that does A/B testing will get more market share out of it.” All companies have a choice – to make decisions based on the information they have freely available, or to increase the risk of doing the wrong thing. Organisations really can be their own worst enemies.
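As an aside, the kind of A/B comparison Bear describes need not be complicated. The sketch below (Python, with conversion counts invented purely for illustration) shows the basic arithmetic of asking whether variant B really did better than variant A:

```python
# Minimal illustration of an A/B comparison: did variant B convert better than A?
# The counts below are invented for the example; a real test would use live data
# and a pre-agreed sample size.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, computed via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

if __name__ == "__main__":
    z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the difference is unlikely to be chance
```

In practice the hard part is organisational – agreeing what to measure and acting on the result – rather than the statistics.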
The third area is then to use information to learn and improve, to change and become more dynamic. Information is not static but is constantly changing, meaning yesterday’s insights could well be inaccurate or, indeed, wrongly framed. In healthcare for example, no doubt it made sense at one point to measure ‘bed occupation’ as an indication of utilisation – what it failed to take into account was the fact that as soon as the measure was used, it became skewed due to the changing behaviours caused by its measurement. For different reasons, the same consequence is true for all industries. As HPE Software’s marketing VP Jeff Veis remarked: “Most metrics are backward looking - we often see companies creating dashboards that put them out of business.”
To really mess things up will always need people
To be able to benefit from information, therefore, requires a level of human savvy that doesn’t look like it is going to go away: to put it bluntly, we can be remarkably thick when it comes to how we use information, and no amount of algorithmic automation is going to change that. Does this mean all existing businesses are doomed, and that start-ups will mop up? Not necessarily: even the most disruptive startups are going for lower-hanging fruit, exploiting the fact that big business, bizarrely, can’t engage with its customers in any new way without spending aeons of time in meetings about it. Yes, it’s dumb, but that’s where we are.
Equally, while some such ripe pickings may generate such huge revenues that they can create global businesses out of a relatively tiny investment, they are a symptom of the times. Because of the exponential nature of complexity, the higher fruit (and to switch food-oriented analogies, the bread and butter of big business) may be way further up than such examples suggest. What this means in consequence is that people, not computers, will remain a significant factor long after technology has commoditised – and that hygiene factors in how we use information will differentiate successful organisations from the less successful. To be clear, when we all have the same tools, the least dumb will win.
Perhaps one day computers will exist that can make absolute sense of the pools of data we continue to generate. And yes, machine learning and even machine deciding will increasingly come into the picture. But, as Chuck Bear noted, “There’s no magic algorithm that, given garbage data, will give magical insights.” And we have plenty of the former right now, with more coming on stream all the time. “It’s very easy to get the simple stuff wrong; that will be true five years from now and in 500 years from now.”
So, yes, the shorter term will be more about augmenting our intelligence than replacing it, offering situational awareness and empowering people to make the right choices – there’s just too much in the mix for things to be otherwise. We are constantly processing more information than we ask machines to do, and we will continue to do so, even as we can stand on the shoulders of such giants. To think otherwise makes the worst assumption of all: not that computers can become as intelligent as us, but that they can prevent us from being stupid. That really would require some magical algorithm.
Nothing To Declare
Posts published in Nothing To Declare.
1999
Posts from 1999.
November 1999
11-17 – Nobody ever listens!
Nobody ever listens!
Here’s a fact. It has been reported that the average executive spends up to 2 hours of the working day “doing”, that is, getting on with the job, and 6 hours “communicating”, i.e. exchanging information. The average consultant’s day will differ from this profile, to be sure - for a start we spend much of the day dealing with all those ironically named “office automation” applications. The fact remains, however, that a significant proportion of our working minutes are spent in meetings, interviews, group discussions and, for some, client lunches.
So what? Well, while we spend many hours of our lives honing our outward communications skills, for example in presentations, interviewing and report writing, the inward art of listening is most often assumed. Listening is a given, so we concentrate on outward skills such as learning the right questions to ask, how to structure a report or how to get a message across.
Listening is a given. Hmmmm… maybe it is. We listen all the time, and therefore we are constantly keeping our abilities to listen “on the boil”. However, as with all givens, the issue is one of complacency. In this context, we have two weapons against complacency: knowledge and discipline.
Knowledge is required, in terms of what makes a good listener and what are the blockers to good listening. A good listener is - OK, hands up what makes a good listener? that’s right, someone who:
- pays attention!
- sees the speaker’s point of view
- shows interest in the subject matter
- looks for the underlying messages in the speaker’s words.
Blockers to good listening are less easy to pinpoint.
- baggage - extraneous clutter in our own brains which distracts from the conversation in hand
- inner noise - where the conversation sets off a train of thoughts which, though fascinating, prevent us from continuing to listen
- control - making leaps of understanding about what the person is trying to say, thereby missing his actual point entirely
- ping-pong - where a client’s point triggers a memory or an opinion, so we spend the next minutes looking for a suitable gap to express it
- display - where we use the conversation as a tool to express our own knowledge, thus ignoring the client’s subject matter and making him feel stupid in the bargain
- hidden agenda - where we ensure that the conversation achieves our own goals, forgetting to check that the client’s goals are satisfied. Dull down your own agenda - for example, don’t bring sales talk into a requirements capture exercise, but note any opportunities for later
Discipline, in this context, involves the continued relearning and application of good listening practice. It is relatively simple to give a pretence of paying attention but this is not enough. Good listening requires sustained concentration (“hanging on every word”), with the listener constantly on the lookout for any underlying messages, (“not what I say but what I mean”). Paying attention is not easy even in optimal conditions. In difficult (that is, real-world) situations, with heated rooms, distracted participants and too-little time, it can be nigh impossible. In either case it requires effort.
Throughout the conversation, the good listener should be on the lookout for blockers to listening, both preventing them in advance (for example, by clearing one’s mind of preconceived agendas) and recognising them when they appear. Not all parts of all conversations require intensive listening. There is always a need for balance - for example, small talk and telephone preambles are leisure activities which may not require full attention.
These words have concentrated on listening, not because it is more important than speaking or presenting but because it is as important. It is not only an issue of making the client feel appreciated. Before we can give the customer what he is asking for, we have to listen to the question.
2003
Posts from 2003.
May 2003
05-08 – Review: A solid, practical mug with a thoughtful, understated design.
Review: A solid, practical mug with a thoughtful, understated design.
Well, the Mug arrived in the post today. I’ve put it through its paces and I’m glad to say that, overall, it does the job a mug should do. Let’s see how it got on.
PACKAGING
The mug arrived in a cardboard box, apparently sprayed with expanded polystyrene balls. The box was solid enough, but I confess to being slightly dubious about how well the balls would withstand a greater than normal shock. Still, no damage done. The box opens by removing a cardboard tongue, then the lid opens quite easily.
FIRST IMPRESSIONS
Having removed the mug from its packaging, I gave it a quick rinse with water and had a good look. There is no manufacturer on the base, I confess I should have dried it before checking as I now have a wet leg. The mug is slightly creamy white and is printed with four Barries, each holding a coat hanger. They are sporting yellow, blue, green and red anoraks, I am still to work out which one is missing from the five in the CD notes. Print quality and image resolution is good. However the Barries feel slightly rough to the touch - it remains to be seen whether this affects the drinking experience. The green Barry has a tiny blemish just above his right foot.
The mug is covered in shiny glaze, this may be seen as downmarket by some but it does have the advantage of being easy to clean.
HANDLING
Balance is centred well, with the finger positions slightly above centre. The handle is cleverly thought out - it has room for two fingers, and its underside is angled to accommodate a third. This adds to the feeling of stability, whilst limiting the potential for scalding - I can see my children bringing me cups of tea in this one.
The mug is designed to hold 300 ccs of fluids, though with its good balance you could probably go over this limit for short periods. On contact with the mouth the mug demonstrates its careful design once again. It has a lip, if you will, that is just large enough to catch one’s own, thus making the mug-mouth process very simple and effective. The handle position results in a slight pressure on the lower digits when it is tipped, but this should not cause anyone too much grief. For those who prefer to hold the mug by its body, once again the balance is good and there is room for two average-sized hands.
DRINKING QUALITY
I tried the mug with Kenco instant coffee, this was mainly successful. The protruding lip did result in a slight fluid residue on the underside, however dripping was kept to a minimum. That’s the mug, not me. The mug is neither chunky nor delicate, neither is the rim too wide, therefore it should be suitable for both coffee and tea. Dunking is an option for most biscuits, including Rich Tea and Hob Nobs - some Digestives may prove a little too wide. Oh - and the roughness of the Barries did not prove to be a distraction.
Heat retention was good, but the last few mouthfuls tended to lose the heat quite rapidly so be careful.
OVERALL
A solid, practical mug with a thoughtful, understated design. The few minor weaknesses we could find in this mug should not distract from the drinking experience it provides. Buy one!
05-30 – Earth to...
Earth to…
(click)
psst… is this thing on?
And I wonder, even as I type my first blog, whether I will sustain it, or whether it will last a few months and be forgotten. For some obscure reason it reminds me of Air Miles.
Off to walk the dog. And yes, the name is in homage to Bill Watterson.
(click)
05-30 – h's bar
h’s bar
Here’s something I wrote a couple of days ago, in my pre-blog days. It all seems so distant now.
A funny thing happened on the way back from Woking yesterday…
It all started a few weeks ago when, once again, I was cruising the M4. A minivan passed me with all haste, but as it went I couldn’t help noticing the logo “h’s bar” on the side. I was instantly intrigued. Where was this bar? I had to find out. Foot to the floor, I was determined to catch up with this vehicle that was already half a mile away. Suffice it to say, good job the police weren’t patrolling that particular stretch of the motorway.
Finally, I drew parallel with the van. On its side was a Hungerford address, that was all I needed to know. Relieved, I withdrew to a more sensible speed and let it on its way. The plan was already formulated, to one day seek out the bar and - think of the excitement - pick up a beer mat or something. Who needs Everest.
And then it was yesterday. A meeting had finished early, I had a spare hour, I had some thinking to do and my stomach was in rebellion. h’s bar beckoned, so off I went down the slip road and into Hungerford. Can’t be that big a place, I thought, not much more than a main street. The bar was keeping its counsel as I drove up and down, so eventually I parked up and asked a local. Indeed, I was that desperate. Not far at all, I was told, down that side road and past the fire station. Can I walk it? I asked. Oh yes, smiled Janice, still wearing her badge having finished a shift at the Co-op. So off I went, and there it was. h’s bar. Home of a warm welcome, cold beers and a cheese sandwich if I was lucky. I pushed the door open and went in.
Inside the decor was neo-pub, light and airy, but the atmosphere was most definitely not. Six pairs of eyes were staring at me, unusually all of them were from women, who made up the entire clientele. There was an old lady at one table, two mums with their daughters at another, and a lady at the bar. Time stood still for just a moment, until I blurted, “hello!” Well, what else was I supposed to say? I turned to the bar, but the manager was otherwise occupied, talking on the phone. Next to him was a teenager sporting one of those fluorescent waistcoats usually worn by people who work on the railways. I glanced again at the lady at the bar, she gave me a pitiful, almost pleading look. This wasn’t what I expected at all. What was I to do?
I could think of nothing other than to wait.
Eventually the bar manager came off the phone, and exchanged some sharp words with the lad. Suddenly all hell broke loose. The youngster grabbed some keys, and looked like he was making a run for it before the bar manager grabbed him and, after a scuffle, wrestled the keys off him. “SIT DOWN OVER THERE!” he screamed. “YOU CAN STAY THERE ’TIL THEY COME!” He pushed the boy into a chair, then returned to the bar. “Hello there,” he said, “sorry about that. Can I get you anything?” Petrified that I might ask for the same thing the boy had requested, I proffered, “I was wondering if I might get some food?”
“What would you like?” he asked.
“A sandwich,” I said nervously, “anything you like, maybe cheese.”
“Is that it?”
“Perhaps a bit of tomato?”
“Some onion?”
“Er - no, thanks.” I didn’t want to push my luck.
“No trouble,” he said, and the mood of the whole bar lightened considerably. He took the order to the kitchen, then returned to his post. “Anything to drink?”
“An orange juice,” I said, feeling cocky now. “With soda.” I took my drink and went to the table next to the old lady, as things returned to some semblance of order. “What happened?” I whispered over to her, and she explained that the lad had come in to use the loos and was still in there half an hour later, getting up to all sorts of no good by the tone in her voice. She confided that the police had been called, and sure enough as my sandwich arrived, a car drew up and two burly officers strode in. They went away into a back room with the barman and the lad, before leaving the premises in a southerly direction whilst escorting the wrongdoer to the vehicle. Well, that’s how they talk, isn’t it. All was well, so I thought, until the lady leaned over to me again. “I knew he was up to no good,” she confided, her eyes widening just enough to set my adrenal glands pumping again. “I can tell, you know. I always could…”
A quick glance down told me I still had two quarters of sandwich to go. I was sure I could get away with leaving the lettuce garnish, as long as I mucked it around a bit. “Humour her,” said the voice inside my head, “and munch with purpose. That way, you might get out of here alive.” So I asked her what on earth she was on about. Politely, of course.
It transpires (and this was the really weird bit), the lady - Jean - was matron at Holloway prison for over 20 years. I’ve seen the press clippings. I’ve also seen photos of all her grandparents, ancient pictures, compositions in sepia stuck on cardboard. I’ve had the stories - “nobody ever laid a finger on me, you know… I’d be there to mop their tears when they were in the docks… I don’t know why she confessed to me…” and so on, you get the picture. Other stories as well - one of her forebears was the bastard offspring of previous royalty, all hushed up of course, and a grandparent built the first London bridge. Or was it the second. No matter.
In the end, I didn’t want to leave, but my time was up. As I paid my bill, I conversed with the barman who by now was considerably less agitated than when I arrived. I explained to him what had drawn me to h’s bar (no, he wasn’t called h), and mentioned the irony about the song written all about Holloway. We agreed, I would send the words to the bar, and he would pass them to Jean for me. I said my goodbyes and left, through the same door, back past the fire station and to my car. Before long I was heading towards the familiarity of the motorway, towards home and the life that had, for a couple of hours, been put on hold. All I took with me was a couple of business cards, a receipt and a single thought that has been rattling around in my head ever since - what in heaven’s name was all that about?
Just thought I’d share that with you.
June 2003
06-07 – BT and the art of customer service #1
BT and the art of customer service #1
Apologies for the length and confusing nature of this post. Ironically, it is exactly that confusion which illustrates the point I’m making here. I hope.
Well, I certainly didn’t expect to wake up to this on this bright Saturday morning. While I wait for a BT service person to pick up the phone (for a second time), I’ll let you know where things have reached. Came down this morning to a phone bill, of the so-enormous-it-verged-on-the-amusing sum of £1,245.47. When I checked, I found that most of the cost was down to the ISDN line, on which a total of 998 minutes of calls had been made. How could this be, as I was using Surftime Anytime, the fixed-price Internet access package? Only one way to find out, so I called BT on their billing enquiries number.
My first contact was with a lady called Kim. Having worked through the various items of security, discussed the problem and apologised for the inconvenience of it, she tried to put me through to the Home Highway division as, she said, the problem was theirs. Unfortunately, after a few rings, an automated voice told me to replace my handset. So much for that.
I called back and this time, after a much longer wait, I spoke to Nadia. We did the security thing again, at which point she put me through to Michelle of the Home Highway division in Sunderland. We had a good chat, during which she checked what number I have been using to accumulate all these costs. It was an 0845 number, she said, so I checked the configuration of my Buffalo ISDN box. Sure enough it was indeed an 0845 number, as she was telling me that that was a lo-call number, not a free-call number. As she spoke I remembered back to the Buffalo box having been reset to its factory defaults a couple of months ago. I had hunted around for the number, finally finding this one; it had worked, so I hadn’t thought any more about it. Yes indeed, this could be construed as a mistake on my part. I waited for Michelle to finish, then I explained what I could see. Michelle was very apologetic, she said that she was surprised Tiscali didn’t have a mechanism to catch that sort of thing. However, she said that the issue lay with Tiscali, as BT didn’t do anything other than see Surftime Anytime as a free service. So I thanked her very much and prepared to call Tiscali.
No, it’s not amusing any more, it’s turning into a nightmare. Here’s how I see it. Clearly, at some point, I configured my ISDN device to use the wrong number. Did this therefore mean that this whole situation was my fault? Back to the plot.
I’m now in a queue waiting for Tiscali’s customer service. They also apologise that my call is in a queue. Thank goodness for speakerphones.
Now I’ve spoken to Richard at Tiscali. He said that BT was wrong about it being an 0844 number, as Tiscali used an 0808 number. He said also that Tiscali could not be held liable for what is, in Tiscali’s eyes, a customer mistake. He made the point that the correct behaviour would have been to call up Tiscali and find out what is the right number. He also said that Tiscali had no way of checking whether the right number was being used, and that the only people who could do that were BT. I commented that I had tried to call up Tiscali on several occasions in the past, and all the numbers I tried had failed. Without apology (don’t get me wrong, he was pleasant enough), he said that this was due to Tiscali acquiring a number of companies and the call centres being consolidated as a result. Also, I think he said that Lineone used to use an 0844 number for dial-up rather than 0808. There’s that number thing again. During the course of the call I found the original 0808 dial-up number written on the letter that Tiscali had sent me originally, the one which started “You may already have heard the exciting news that LineOne is now part of Tiscali”. This is also the letter with the incorrect help and support numbers on it, so I would ask: how could I differentiate any other numbers that I had scribbled on this letter? This was also a memory jogger for me, as I remembered that I had been given the 0808 number over the phone. Is that really considered a sufficient or appropriate mechanism for issuing what is clearly such an important number?
I thanked him and called back BT.
I’ve just spoken to Barry at BT’s Middlesborough office. I hope I’ve spelt that right. He mentioned once again the 0844 number for freecall (which sounds horribly similar to 0845 to me) and I said to him that that wasn’t the number that Tiscali was quoting. I said, I hope not too glibly, that if BT could get a number wrong, or at least have incorrect information about the right type of number to use, what hope was there for the customer? However, I made the point that because the customer made the configuration rather than BT, it would appear to be the customer that was liable rather than BT. He told me the amount of the phone bill was very unusual for a residential customer, and that BT have mechanisms that can monitor line usage, but that they are only normally used for new customers and are turned off after 12 months. He did agree that this issue was something that would not have happened if BT had had their monitoring mechanisms in place. I have asked to escalate the issue to a manager, who will be calling me back today or Monday.
There you go, that’s all for now. Sorry about washing all this dirty linen in public. I feel a bit sick, I think it’s the adrenaline.
Back so soon. I have just spoken to Geoff, the BT manager at Middlesborough. He told me that this was the largest residential phone bill he had seen for a number of years. This rang huge alarm bells, as surely they should have noticed it if it was really this big? We had quite a long chat, during which he said that if Tiscali had referred the situation back to BT, they would have been wrong to do so. The fact of the matter is that this is about liability - Geoff said once that it was “not BT’s responsibility” and once that “BT is not at fault”. I said that between Tiscali and BT, the two companies had failed to provide a sufficiently solid safety net in order to protect the customer and provide a suitable level of service, despite having the mechanisms available to do so. The mechanisms were not used for policy reasons, I was told, at which I commented that therefore the policy was wrong as clearly it did not provide the required safety net.
Geoff is now going to talk to his manager, and they are going to send me a letter in the middle of next week. This will enable me to take the case to Oftel, which I plan to do.
Let’s look at some of the more salient facts.
- The fact that this could happen dependent on a single digit being typed in error (0844 vs 0845)
- The fact that BT have mechanisms to watch for this sort of thing, but they were not being used
- The fact that Tiscali did not send a letter, but gave me the number over the phone
- The fact that Tiscali’s customer support numbers are incorrect on the written correspondence I have been sent.
That’s all (again) for now. I am at a loss. I’d be lying if I said this wasn’t about the money, because if it had been only 50 quid, I might have quietly changed my settings and got on with my life. That, maybe, is what a lot of people do, so maybe it’s a good job that that isn’t the situation. We shall see. Ironically, the fact that the configuration change was effected right at the start of a billing cycle might also be seen as somewhat unfortunate. The fact this is one of the largest quarterly residential bills BT have ever seen makes me want to take it further. Inconsistencies and weaknesses exist all the way down in the process, and yes I entered the incorrect number into my Buffalo box, but the issue seems to fall between the stools of BT and Tiscali, both of which are quite happy to send me to the other. Having worked in network management for over 5 years and having spent 16 years in the IT industry, I’d argue that if I’ve fallen foul of the complexity, it could happen to anyone. I’m decidedly concerned about BT’s inability to implement perfectly reasonable and available mechanisms. Also, the 0845/0844 issue concerns me. Chances are, if I’d configured all this for the first time last month, and this had happened, it would all have been cleared up in a jiffy, but because it was a reconfiguration of a piece of equipment on a 20-month old line, it has not.
The whole liability issue alarms me as well, I think it’s a bit of a sticky wicket considering the legal state of telecoms regulation in the UK at the moment (think Oftel being replaced by Ofcom, and why). As I said to Brian, Michelle, Richard and Barry, if this was about misuse of a credit card and my spending patterns changed suddenly, they would likely have been in touch very quickly. The phrase “appropriate safety nets were not in place to protect the customer” seems to fit quite nicely; at least I think so, and I wonder what the rest of the world thinks. I also wonder if the situation is merely a symptom of the fact that BT and Tiscali aren’t able to work together to deliver a single service, or to respond to an issue when it arises.
I will be interested to see what Oftel say about this. I’ll wait for BT’s letter, then take it from there.
06-10 – BT and the art of customer service #2
BT and the art of customer service #2
Very efficient - BT’s letter (known on the inside as the “f*** off letter”, so I understand) came today. Interesting use of the term “after a full and thorough investigation”, considering that the next paragraph tells me that I should have been dialing an 0844 04 number, when in fact I should have been dialing an 0808 number. “I can confirm and reiterate all the information you have received previously from my colleagues as being correct” is another interesting one, given that they had told me BT’s protection mechanisms did exist and they were surprised they were not used.
There’s a number I should “not hesitate to contact”, but every time I’ve called it, I’ve got an answerphone. Left a couple of messages though. There’s also a sentence without a full stop, which irks me :-)
Enough of that. Today, I’m off to London for an Action Aid do (see link on right). As I’ve mentioned, I’ll be using my tenure, hopefully, to break out of my cushy, throw-money-over-the-wall approach to charity and try to get a handle on what really happens at the coal face. Watch this space.
06-12 – Pamela's Book Launch
Pamela’s Book Launch
Wellll… the book launch was, shall I say, interesting. “There ain’t nothing ever works out the way you want it to,” says the Devil in Crossroads, and never was a film quote more applicable to me than this evening. I met a few people, drank a few glasses of water, even enjoyed myself somewhat, but it felt a bit like being at somebody else’s wedding. Pamela Des Barres was very pleasant, though I think she was wearying of signing all those books after a while (it was interesting watching her sign copies without even looking at the page). There were lots of people talking to lots of other people. And that was it! Still, it was a learning experience, and inspiring in an odd way. The venue was the Horse Hospital, an upstairs room which looked to all intents and purposes like an old stable. Judging by the ramp one had to walk up to get to the room, I suppose that’s exactly what it was.
Speaking of which (spot the link), there are three novels that I want to write. Or maybe two and a screenplay. One’s about sex, one’s about time, and one’s about an umbrella. And none are quite what you might think. In the loo at the party there was a sticker on the cistern facing me, with a person’s face and the caption, “you’re a fat old git.” That uncannily accurate statement will be in one of the books (or perhaps the screenplay), I’ll leave you to guess which one! And yes, I do know which one, myself. Whether any of them happen is a moot point.
Finally, I’m writing this on the train having tried to download my email over the GPRS connection. Is it just me, or is this GPRS thing a bit of a con? There ain’t nothing ever works out the way you want it to?
July 2003
07-01 – Towards The Perfect Device
Towards The Perfect Device
Towards the perfect device
© Jon Collins July 2003
The question is, can one size fit all? Yes, and it’s small, argues Jon Collins
“There can be only one,” Sean Connery told Christophe Lambert in Highlander. That is the correct spelling, by the way, Christophe is a Frenchman, which is why he always appears to get less than his fair share of dialogue. But I digress. “There can be only one” is a mantra that could equally well be applied to the device market, as anyone who has lugged around a PDA, laptop, mobile phone, external CD player, MP3 player and camera will tell you. The list doesn’t include the other essential - a portable printer - but I don’t want you to know just how sad I really am. There can be only one, not least to save on osteopathy bills, but also to solve the inherent issues of integration, communication, synchronisation and software compatibility between the lot of them. Whatever this “one” is, it needs to perform all the functions of the rest without compromising on performance. Potentially an impossible goal. Indeed, maybe it is impossible, which is why I’d like to propose a different approach.
I have recently been experimenting with these USB storage devices that seem to be proliferating at the moment. Natty little things, they plug and play with the most recent versions of Windows and Linux, meaning that the data they contain can be accessed on any recent computer with a spare USB port. Handy for backups, neat for file transfer, a good little floppy disk replacement I thought to myself. But then I started thinking a bit harder, and their true potential started to become apparent.
For example: I’m currently using a USB storage device for all of those things. But also - and here’s the (hopefully) clever bit - I have an email application that I run from the device. It’s called nPOP, and the beauty of it is that it is self-contained - it doesn’t use the registry or any external files or directories to run. This means I can plug my USB device into any Internet-connected computer and check email across all my email accounts, without having to specify them one by one and without relying on an email service provider. Sure, there are such things as email providers with Web access, and I could configure one to check my email, but they are rarely sufficiently functional to provide the full service. This package also provides an address book, it can work with attachments, and so on. So, when I travel, I can rely on the fact that there will be computers where I am going.
This USB-storage-device-centric model can be extended to encompass other applications. Of course, first it is necessary for them to be hot-plug compatible, and the majority of modern apps are not. Which brings me to the leap of faith, and something which I shall be checking out. What about Java? If a standalone Java virtual machine can be installed, then an entire operating environment is provided without recourse to the host operating system. There is already a plethora of productivity apps (i.e. small, useful ones) for the Java VM, including instant messaging software such as Tipic. And PIM software, though I haven’t tested the packages I have seen. Even if Java isn’t appropriate, maybe a browser-based approach would work, for example using CGI. It should be possible, then, to develop a portable environment.
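To make the idea concrete, here is a minimal sketch - my own illustration, in Python rather than Java, and nothing to do with nPOP or Tipic themselves - of what a self-contained, carry-it-with-you application looks like: everything it needs, including its data, sits in a file next to the program itself, so nothing touches the host machine’s registry or home directory.

    # A minimal sketch of a self-contained, carry-it-on-the-stick application:
    # all state lives in a file next to the script, nothing touches the host
    # machine's registry or home directory. Illustrative only.
    import os
    import sys

    # Resolve storage relative to the script's own location (i.e. the USB stick).
    STORE = os.path.join(os.path.dirname(os.path.abspath(__file__)), "notes.txt")

    def add_note(text):
        with open(STORE, "a") as f:
            f.write(text + "\n")

    def list_notes():
        if os.path.exists(STORE):
            with open(STORE) as f:
                sys.stdout.write(f.read())

    if __name__ == "__main__":
        if len(sys.argv) > 1:
            add_note(" ".join(sys.argv[1:]))
        list_notes()

Run it from the stick on any machine with the runtime installed and the notes travel with you; the host only needs to supply the environment, which is exactly the division of labour being argued for here.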
What about those common apps, such as Word and Excel, I hear you shout. Spot on, good point. The answer is that the common bloatware packages are already commodity items, and do not require any user information. Therefore they can exist quite happily on the host computer, separate from the user’s own apps that exist on the USB device. When I plug in to a computer, I would expect there to be a basic suite of apps. Fortunately, in most internet cafes, there already is.
While we’re on the subject of what-abouts, what about all that other stuff, MP3, cameras and the like that I mentioned? USB storage devices already exist that are also MP3 players. There are already MP3 players that are cameras, or vice versa - it’s difficult to tell sometimes. So, given the above, it is not unreasonable to expect a device which is storage, camera, voice recorder and music player in one (not to mention mobile phone). Given developments in this area, it’s probably a matter of months away. All some savvy manufacturer has to do is bundle a similarly useful software suite on the device, enabling it to plug and play and become a truly portable environment. Provide Java VMs for multiple platforms, and support SD cards, and it really will enable anyone to work with anything on the move. Indeed, the software suite could equally well exist on an SD card as on the device itself. We just need a mobile device supporting Java, which is USB storage compatible, with an SD-card slot, and we have everything we need. Given all this, it becomes apparent that maybe we’re not looking for a single device after all, rather a USB-compatible storage mechanism that will work with everything we throw at it.
There is no technological reason why everybody that wants to, couldn’t be carrying a complete application environment on their phone, MP3 player or just on an SD card in their wallet. That’s possible today. Add some imagination and we could talk about having some e-money on the device, which is deducted by the host computer in micropayments per minute used. We could also recommend public kiosks – such as those provided by BT – supporting this model, and proliferating as people realise they no longer have to lug unergonomic hardware whose weight is determined largely by their inadequate batteries.
It sounds simple, but imagine if everyone were doing it: it would bring the truly mobile world one step closer.
07-10 – Microsoft and tears
Microsoft and tears
Am I never going to be free of Uncle Seattle? However hard I try, I can’t seem to shake him off. A bit like his software, in fact! I confess, I do think Microsoft have some pretty good software. They won the suite wars because their stuff was best, after all - I just wish they didn’t want to take over the world, that’s all. Anyway, this was supposed to be my last week, but they’ve asked me if I have any more time. I have said “very little”, but it doesn’t seem to have shaken them off. We’ll see what happens.
Meanwhile, I’ve started to develop a new Web site for Faros. It’ll be up and running in the next few weeks, I think. This Art Gallery thing has really spurred me on. One thing I’m sure of is not so much “build it and they will come”, more “don’t get out there until there’s something to show.” It amounts to the same thing; however, I can’t decide if it’s just an excuse to avoid fixing meetings with potential clients…
Got some news back from the people who the book will be about, let’s call them the Musketeers. It was neither good nor bad news, but at least it was communication which is always a good thing. We agreed I would write something and get it to them by August, and they could see what they thought. Scary.
And finally, I got a CD in the post this morning: the latest Marillion best-of album from EMI. Darren sent it, as a little thank you for getting him in the market for doing cover art (he did the Separated Out book cover, fyi, and very fine it was too). It’s a funny old world; glad to be of service and all that.
07-13 – Somewhere over the Atlantic
Somewhere over the Atlantic
Today I shall be mostly sitting on an aeroplane.
It was a bit of a hair-raising start to the day. Knowing I had to get up at 5.40, I inevitably woke at five, then quarter past, then… half past six! Disaster: in the intervening minutes we had had a power cut and the mains alarm clock had reset itself to midnight. Fear and loathing, and the trip to Las Vegas hadn’t even started! Rapidly I stirred myself, washed my appendages and got on my way.
Ironically, from that point on everything has been a breeze. The clear motorways and the just-subsonic speeds I achieved meant that I got to Gatwick in an obscenely fast time, wondering all the while whether I would meet anyone official on the way. I didn’t, and as I had a hire car, the arrival was slowed only by having to top the car up with petrol before I returned it. Phew.
So, now I’m sitting on a plane on the way to sunny Vegas. Extremely sunny apparently, that kind of shirt-sticks-to-you-as-soon-as-you-put-your-head-outside sunny. The occasion is CA-World, an event that Computer Associates are very kindly flying me out to. I’ve just done a bit of work for CA, a video (my first - I was told in no uncertain terms that I wasn’t a natural), and as I spoke for free at CA-World last year, I think they took pity on me! It should be stimulating. Really!
After that, I shall return home via Boston, to meet up with some friends, go whale watching and take in a couple of gigs. And why not.
So, back on the plane. So far, I’ve finished reading a book that I can’t tell you about, and got to 5,000 words with my first novel (can I say, just writing the word “novel” feels really pretentious? It’ll be interesting to see how that pans out!) It’s now called The Knowledge, a great title until I find out it’s been used already. The good news is, I’ve worked out the plot from start to finish, so hopefully now it’s just a question of filling in the blanks. And making it readable. Hmm. Not out of the woods yet then.
I’ve also written some more of the next music book, but my heart wasn’t in it and the words weren’t flowing, so I’ve shelved it for today. Oh and I’ve watched a film - Phone Booth. Cleverly done, with the usual “get out of that” formula transposed onto a corner of a street. I enjoyed it, and was relieved to see a modicum of Hollywood creativity. I would say “for a change,” but I don’t see that many films these days!
Three hours, 47 minutes to go, as we fly over Canada. Ho, hum.
07-15 – Conference Urges
Conference Urges
I don’t know what it is, but at IT exhibitions I get a one-track mind, and it ain’t the technical track. I’ve learned to control my urges by getting them out of the way first, then I can get on with the useful stuff. I am, of course, talking about the give-aways, or ‘gizzitts’ (as in, ‘go on, gizzitt, go on’).
Last night, on the opening night of the conference I exceeded myself, I really did, in my blatant disregard for what people were saying at their stands and in my direct approach to what really mattered. ‘What’s in the box,’ I said, ‘Can I have one?’ I also tried ‘I don’t work on Sundays,’ which seemed to hit the spot.
So, here goes. In a style reminiscent of the children’s travel game, I went to the expo and I blagged:
Five mini-penknives
One squidgy penguin
Three climbing clips (marked ‘not for climbing’, two with a compass on the strap)
Various sweets and candies
One pink fluorescent Frisbee ring
One water pistol – loaded (I overheard someone say, ‘we need to find someone we know to fire this at.’ I wasn’t so discriminating!)
One cap – blue
Two monoculars (does that make a binocular?)
One fridge magnet clip
One inflatable neck cushion
A paperclip tray
A pocket radio – with built-in torch (flashlight) and compass
One squidgy chair (fits the penguin)
Three notebooks
Five gold rings
One cable clip
A plastic egg containing silly putty
Two digital dice - bang it down and it displays a number with its six LEDs
One calculator/ruler
A number of pens
A handy clip with extending string for conference badges or ski passes
One bar of chocolate that looks like a gold AmEx card
One metal torch
One metric conversion card
Two auto-rewind modem cables
One drinking bottle
A map of Las Vegas
One multipurpose survival card (with built-in tools and – you guessed it – compass)
Ten chocolate $100 gaming chips
Two t-shirts
I think that’s about it (I was joking about the gold rings). The winner of the prize for the most worthwhile gizzitt has to be the survival card. Now I must remember to pack it in my suitcase for the way home, otherwise it won’t make it past the checks!
On to more serious stuff: I’m absolutely knackered. I was up for twenty-three and a half hours yesterday and only slept for six. Mustn’t grumble - where would I rather be? ;-) It’s now 6.15 in the morning, I’ve finished a slushy chick-lit novel (very good – ‘Thirty Nothing’ by Lisa Jewell) and I’m off to find breakfast.
07-18 – Boston whales
Boston whales
Arrived in Boston the day before yesterday and went straight to the Ramada, which is out near the University. In fact, it’s privately owned by very nice people, and it had a regular shuttle to the subway (the “T”) and so on, but anyway. Woke up yesterday and finished some stuff for Uncle Seattle, which took me until lunchtime, when I headed in to Boston with the vague goal of finding a wireless Internet hotspot. I then had a bit more work to do, which took place in the shade on the end of a quay at Boston Harbour. Reminds me of the time I told someone I’d sat on the beach at Nice and worked. “Of all the things to do on the beach,” the friend said, “you had to work?” I replied, “Of all the places to work…”
I took in Chinatown, the harbour, the shops and the sights - by the way, only go to the Cheers bar if you want to go to a bar anyway. Nothing to see but a bar. On the recommendation of a wireless access service provider (I happened to be walking past the office), I eventually ended up in the News bar, a very plush establishment which boasted wireless Internet access. It happened to be happy hour as well; I didn’t take advantage of the free martinis but I did have four starters for 99 cents each. Unfortunately the Internet access wasn’t working. I tried everything, and was quite willing to believe that it was my fault, but as I could ping stuff that wasn’t my computer I thought not. When I finished my food I headed back round to the office of the chap who installed the thing. The long and the short of it was that I ended up down in the kitchens of the News bar, trying to diagnose issues with Power over Ethernet, Linksys boxes and broadband connections. What fun, but we didn’t solve the problem.
This morning I got up a bit earlier and headed for the harbour to do the whale watching thing. I thought I’d missed the boat, but no, here I am on deck typing this up. It’s an overcast day, so it’s unlikely I’ll get the kinds of pictures you see on posters; indeed we’ll probably be lucky to see anything at all, but at least it’s 4 hours of not very much, just what I need. Just had a great idea for an article - “the roadmap conspiracy,” which makes out that computer companies have everything all mapped out and are drip-feeding new technologies to the users. As if, and the Internet bubble didn’t burst either!
2 hours later.
Just seen a whale! Well, several whales - one of which was the best the guy with the mike had seen this year, so he said - it was a humpback sleeping in the water; when we moved alongside, it arced its back and flipped its tail out of the water as it dived. Very impressive. Now we’re heading back, then I head to the station to catch a train to Providence. Or maybe I should hire a car. I’ve got an hour to make my mind up!
07-18 – Opeth in Providence
Opeth in Providence
I arrived in Providence on the train, unconscious of the fact that this was the city that saw the tragic and untimely demise of Great White following that foolish application of pyrotechnics. Arriving early, I met with Jason, who told me everything was sorted for the show. Opeth were playing first, he said, so expect a late night. S’okay, I said, my train was to be at 12.45. He looked dubiously at me - gulp.
I wait for the doors to open, flitting from street to Starbucks and back. Eventually I meet up again with Jason and Ian, and we go for a Chinese - a welcome respite in the hanging around. Maybe I’m getting old, or maybe it’s the act of being alone in a strange city. Whatever.
Lupo’s is a dive, looking like the sort of place you might see in a film set, like Blade Runner without the rain. This was nothing if not authentic, right down to the lack of doors in the gents. The stage is in the centre of an oblong, with space at either side. Beside it are two pool tables, the players oblivious to the fact that a band is playing at all. The crowd outside is a regular mix of post-punk, goth, beachbums and rock and rollers, each no doubt planning on finding their own personalities reflected in the music.
Doors open. Audience file in, unconscious of the fact that their meek behaviour is at total odds with their rebellious attire. Positions are taken and the stage is set.
Opeth walk on and the applause is rapturous, even more so as they break into their first song. It’s a slow number, a strange one to start with, but it is unlikely that it will stay slow for long. Indeed, the pace picks up. More rapturous applause. It’s going to be a good night for the Opeth fans. Unfortunately it doesn’t do anything for me. I should say I’m hopeless at listening to bands whose music I don’t know, but it sounds a bit pedestrian and indulgent, like a cake with all the right ingredients but which has been left in the oven too long. Maybe I’m getting too old. It doesn’t help that I can’t hear the guitars properly - they are there, but added, it would appear, as an afterthought. Given the fact that the singer is also on guitar, I find that unlikely. A lack of energy is more likely: the sound is trying to be deeply moving but it has more of a soporific effect.
The next song starts out acoustic, but gradually the other instruments join. This is more like it - a rhythmic number that harks to the orient, Aziz Ibrahim would be comfortable playing on this one. It might be called the darkness. It builds up, to a crescendo which I listen to from behind the one door in the John. It was a good song, and I am maybe now better able to appreciate the music. Was it me, or does the floor sway in Lupo’s? I wish I could blame it on the alcohol, but I am stone sober.
Opeth’s set continues as it should. A good band, consummate musicians giving the audience what they want. I don’t think anyone would accuse them of being showmen though; with each song pausing for a changeover of instruments and a brief introduction, it is difficult to build a flow.
07-19 – The Threadbare Carpet
The Threadbare Carpet
Ubiquitous network, global reach, any time, anywhere connectivity, we all know the lingo and have probably used it from time to time. The reality is somewhat different. Sadly different. Devastatingly different, if you have wandered around a major city looking for a cybercafe or a wireless hotspot, as I did in Boston last week, or if you have baulked at the outrageous pricing that some places charge for access.
The fact is, when the marketeers and CTOs mapped out the new landscape, they forgot to take into consideration the fact that there are humans involved. Humans are slow on the uptake and resistant to change, meaning that it takes a long time for any great change to take place. “Evolution, not revolution” is not the mantra of progress, but a recognition that people don’t say jump when you want them to. And Geoffrey Moore’s chasm (or Gartner’s trough) in their respective adoption curves is in fact a generous way of saying that most people just don’t want what they’re being offered. With wireless, we have the chicken and egg that people aren’t the purchasers of hotspots, rather their users, but they don’t want to pay for them; meanwhile the operators, stung by 3G, are not going to make the mistake of spending too much before the demand is assured. Frankly, they can’t afford to.
The end result is that, rather than having a finely knitted mesh of connectivity, we have a threadbare carpet, a rag rug of many different protocols which shows its ill-fitting seams far too clearly.
Nobody is thinking about the customer perspective. Let’s stop for a second and think what these requirements are.
1. The customer should not have to give a monkey’s grunt what the protocols are. The way the protocols work together should be entirely transparent to the user. As an interface between applications and networks, the TCP/IP protocol set is a no-brainer. Everything below that should be someone else’s problem, and should be auto-selected based on a user’s service needs.
All this fighting over protocols has resulted in turf wars, which does the service consumer no good whatsoever.
2. A number of services are so obvious they barely need listing, but I will anyway:
- application access. In other words, a Web-based front end onto anything, be it a travel bookings service or music station
- media access. Video, audio, with the necessary clarity to make it usable.
- interpersonal access. Email, chat and discussion boards, coupled with voice communications, in a nutshell.
These three access types, and combinations of each, give people everything they might need.
3. Customers want to pay the minimum, if anything at all, for their access. This is fair enough, given the low prices of some access types and the fact that access is a means, and not an end in itself. People should pay for applications, and we don’t need new and improved micropayment mechanisms for this. Telcos already have billing structures that can pay a percentage to each other; there are also time-based subscription mechanisms and credit cards. Don’t give me micropayments, I don’t need them, not for this anyway. Where billing happens, it should be transparent, for example as an on-screen counter: if somebody (say, Starbucks) wants to pay for me instead, I’d be happy to see the unobtrusive advert on my screen that told me so. Don’t get me wrong - this isn’t a “water is a basic right” argument. It’s more like I wouldn’t expect to pay for the pavement, or for the shopping centre, or for the cinema walls before I can sit in the seat.
There is nothing at all wrong with the concept of a basic service being given away for free, but a better or more complex service being available at extra cost. The basic services I would suggest are voice and data messaging, and Web access. More expensive would be higher-bandwidth apps such as image/video and gaming; these and other applications could be paid for. It is simple to work out how - application providers need to pay to have their applications hosted on the Internet. That’s it, that’s all, just as the consumer shouldn’t pay a thing to walk into a shop. It is then up to the application provider to decide how to cover those costs, for example by charging for the service or by considering it as a loss leader or a worthy plan. Charities, for example, pay for mail shots; they can also pay for people to see their web sites, even if somebody else chooses to bear this cost.
I believe that a basic service should be given away. The reasons for this may be entirely self-serving, but they are also based on developing a ubiquitous service on a global basis. If not given away, it should exist on a subscription basis. One way or another, it should be absolutely clear what is being paid for, when and why. Without some kind of basic, low-cost service, it becomes difficult to extend the model as broadly as required. For example, a service should function not only town to town but also country to country. Given the fact that the internet knows no national boundaries (ask a packet), it is laughable to suggest that packet-based access should cost any differently whichever country one is in. This is undoubtedly true for western countries. There should not even be a variance from hotel to hotel.
Tesco’s should install free wireless hotspots in its shops. This would be achievable at relatively low cost, be a great PR coup, and set Tesco’s up as a thought leader in the marketplace. It would be a bit like Tesco’s insurance, books or pharmacy shaking up the market.
4. Customers expect a guaranteed level of service. The service should be secure and available, within limits. It should not be necessary to trawl around a major city looking for somewhere to plug in - this is a laughable situation that should be resolved as quickly as possible. Consolidation is an inevitable element of the solution, which should be available on a utility basis - like water, in fact. I think the biggest weakness at the moment is service transparency. For example, bandwidth availability should be as visible on any device as the signal strength meter on a mobile phone.
The right connectivity, the right apps and services, using the right costing model. That’s it! The fabric/carpet analogy is a good one. In a fabric, every thread is as strong as the others, otherwise the strength of the whole fabric is at risk. Who’d pay for a threadbare old piece of material, unless there was some antique value in it? Which, hopefully, is what will happen to the network.
07-31 – The Gazpacho and Green Glass Incident
The Gazpacho and Green Glass Incident
The scene: a reasonably well-to-do restaurant in downtown Boston. Outside, a number of ironwork tables, largely unoccupied. In one corner, separated from the street corner by an iron railing, sit our thirty-somethings - Karin, Carole, Shane, Jane and Jon, who are looking largely content despite the hour it took to find a seafood restaurant and the final indignity of being unable to discover the way in. A robust, humorous waiter has already taken the orders, which include two bowls of Gazpacho soup - because it was there, largely, and if you have to ask why you’d better not know. Cut to:
Jon’s voice - narrating
I’d never had Gazpacho soup before, or at least I don’t think I had. It seemed like the right thing to do.
Waiter arrives with two bowls, sets them before Jon and Jane. Zoom past Jane to Jon, conversing with others as he picks up his spoon and stirs it around the bowl, inquisitively, before scooping his first mouthful.
When it arrived it looked absolutely delicious. I plunged my spoon into the bowl, heaped with liquidized tomatoes, peppers and onions, and lobbed an oversized spoonful into my mouth.
Jon nods his approval and starts to chew. Close in on a variety of facial expressions, which blot out background to fill screen. Follow actions as they are described:
It was delicious - so I thought. But as the food turned over in my mouth, I was faced with the experience of something going very wrong. I bit on something hard, crunching it between my teeth before I could stop myself. Oh no, I winced. I’ve broken a filling.
Jon reaches into his mouth and removes something as camera backs away and shows expressions of others, still conversing, oblivious. Lip sync narration with Jon’s profile, while keeping focused on faces as expressions turn from interest to horror.
Look, I said, depositing two small, tomato-ey fragments of green glass onto my palm and holding it out for all to see.
Carole
Oh my God!
Karin
What *is* that?
Jon’s voice, narrating in sync:
It’s glass, I said, to the horror of everyone sitting at the table. I decided it would be a good time to go find the manager.
Looking a bit uneasy now, Jon pushes away his chair and stands up, almost thoughtfully walking back to the main restaurant. As he arrives at the door, the waiter emerges. No sound is needed to explain the interaction between Jon and the waiter - the crumpling into incredulity and fear on the waiter’s face as he turns and goes back inside, leaving Jon to return to the table. Fade to black.
When the manager arrived, she was highly apologetic, but just the tiniest bit skeptical - after all, it doesn’t take a rocket scientist to have the idea about lobbing some glass into the soup in the hope of a free meal. Slowly it dawned that this was no set-up, and the skepticism turned to fear - this was the land of the free litigation after all - and then relief as I said that nobody would be suing anybody. Unless I found my stomach lining had been irreparably damaged. We had quite a good old chat in the end, and were promised various things, including a free lunch for me (hurrah! I was having the lobster!) and free desserts for everyone.
Things started getting silly then - the manager said she would name the Gazpacho soup after me - “Jon’s Gazpacho” and we all agreed it would be a good thing that the restaurant obtained and played a copy of the CD that started all this, as a penance, for a week. The rest of the meal passed relatively smoothly, even with me ripping a hole in my thumb as I attempted to crack one of the lobster claws, and the desserts were delightful.
Unfortunately, I have suffered no ill effects since, so there doesn’t really appear to be a case for litigation! Next time, I’ll have them!
October 2003
10-01 – Weird Week
Weird Week
This week has been, well, weird. I thought I had nothing to do, but nature abhors a vacuum… what has actually happened is that I have discovered a whole variety of things that have been forgotten or left by the wayside during my busier times. There are diary clashes, unfilled forms, an article for Silicon.com I was supposed to write last week; honestly, it’s a wonder I manage to get out of bed, I wonder how I do!
Plus, at the end of last week I managed to hook my wife’s bumper on a post and rip it off the car. Such a simple thing, such an expensive thing… and such a lot of trouble I’m in given that she only bought it two months ago. Fate is not without a sense of irony, as two days later it was her birthday and I was planning on purchasing personalised number plates. It helps, of course, to have a bumper to stick them on.
Work is up and down. Some great things happening - a few nice little earners, then of course there’s the books, but there’s no clear pipeline. “Worst comes to the worst,” I said to my friend George at the weekend, “you could always get a job!” she interrupted, a thought that left me feeling just a little queasy. Well, I could, but, no. Not an option!
10-27 – Four Colours Solution?
Four Colours Solution?
I was just now listening to Simon Singh’s Radio 4 broadcast about the Four Colours problem. No doubt a little over-stimulated by having just finished Cryptonomicon (cracking book, if you haven’t read it), I’m now sitting in a Little Chef, having scribbled a potential, mathematical solution to the problem all over the backs of the menu sheets. It goes something like this: rather than trying to work out all the possible maps, it instead tries to enumerate all the potential relationships between two two-dimensional shapes.
First, an assumption - that the four-colour issue doesn’t apply for shapes that meet at a point where several (more than four) lines meet, like the roads leading off from the Arc de Triomphe. If this assumption were wrong, and there were ten lines meeting at the point, this would require 9 colours. Sorry - what I’m referring to are the shapes that are diametrically opposite (across the point) as opposed to the shapes which are next to each other and happen to meet at the point.
Given this assumption, the steps to a proof are as follows.
1. Prove that there are only 6 ways in total that two shapes can join, based on whether the connection is made with an edge, a vertex or one object completely inside the other. This results in the proof also that no shape can touch any more than three already-touching shapes.
2. Prove that, given an infinitely large shape as a starting point (represented as a line that stretches to infinity in either direction), and given that the shape has one colour and the space not occupied by the shape has another (visualise the sea and the sky, with the line being the horizon), you can add another two shapes into the resulting visualisation with only two more colours. This should be true however the additional shapes are positioned with relation to each other (there are 6 ways, remember).
3. Prove that you can add another shape without requiring any additional colour, based on the 6 available relationships between shapes:
a. If the new shape touches one or two existing shapes (including the infinite line), use the colour of the shape it does not touch
b. If the shape doesn’t touch any other shapes, use either of two available colours
c. If the new shape touches 3 or more shapes, you may have to switch colours with existing shapes to accommodate the new relationship between shapes. As a shape cannot contact any more than 3 others (proved in (1)), you should always have one colour to spare
4. Finally, prove that the act of joining the two points of the infinite line has no effect on the number of relationships that can exist between the existing shapes. The same is true if the plane is linked, for example by forming a sphere or a cylinder.
There may be another way to prove this, going purely on the 6-relationships basis. Any shape can be made into a triangle just by straightening out the lines. It appears to my limited brain that there are only so many ways that the planar sides of 2, 3, 4 … n triangular shapes can come into contact with each other. The central triangle can of course be connected to stacks of triangles, it’s just that they can’t all touch each other (for a start, they’re all pointing away from the centre!) As no 2-dimensional object can have fewer than three sides, then topologically, any part of a map can be reduced to a 3-sided object with a number of 3-sided objects connected to it. As any shape can be positioned as the central shape, topologically, prove it for one and you’ve proved it for them all.
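For anyone who would rather play with the idea than prove it, here is a small brute-force checker - my own sketch, not part of any proof - that takes a map described as an adjacency list (which regions share an edge with which) and searches for a colouring using at most four colours:

    # Brute-force four-colouring checker for small maps. A map is given as an
    # adjacency list: each region lists the regions it shares an edge with.
    # This is an experiment on concrete examples, not a proof of anything.
    def four_colour(adjacency, colours=("red", "green", "blue", "yellow")):
        regions = list(adjacency)
        assignment = {}

        def assign(i):
            if i == len(regions):
                return True
            region = regions[i]
            for colour in colours:
                # A colour is usable if no already-coloured neighbour has it.
                if all(assignment.get(n) != colour for n in adjacency[region]):
                    assignment[region] = colour
                    if assign(i + 1):
                        return True
                    del assignment[region]
            return False

        return assignment if assign(0) else None

    # Worst case for a planar map: four regions that all touch one another.
    example = {"a": ["b", "c", "d"], "b": ["a", "c", "d"],
               "c": ["a", "b", "d"], "d": ["a", "b", "c"]}
    print(four_colour(example))

If it ever returned None for a genuinely planar map the conjecture would be in trouble; trying (and failing) to make that happen is a good way of getting a feel for why the “one colour to spare” argument above seems to hold.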
Maybe I should have stuck with languages ;-)
2004
Posts from 2004.
February 2004
02-11 – Thinking Inside The Box
Thinking Inside The Box
Thinking inside the box
Jon Collins, 9 February 2004
IT has never been about making things simpler, at least not from the technological perspective. In the very old days, computer ‘infrastructures’ used to consist of a single, large box with a number of connected, “dumb” clients. More complicated setups had two or three boxes joined together (one with the obligatory tape reels), but they still made up one, large, mainframe computer. Indeed, some people thought that would be the way the world was for ever. “I think there’s a world market for about five computers,” said IBM chairman Thomas Watson in 1952, blue ones presumably. As things got smaller and cheaper, however, everybody wanted one. Then, some bright spark had the idea of getting the disparate computers to talk to each other, and all hell broke loose. We’ve never really looked back: workstation and client-server computing were added grist to the mill, and seemingly (according to the now-greying mainframe guys, who watched from the wings), all the best principles of reliability, security and so on were thrown away. Why? Largely, because things were happening too fast: before the software had time to catch up, computers were still getting smaller, and cheaper, and everybody wanted more of them. It’s not all been bad: each new wave of computing has enabled businesses to reach further and achieve more, but in each generation, those old, mistreated principles were forgotten. It’s still true today, largely, and we all know it to be so; at the same time, mainframe technologies refused to lie down and play dead, and are now jostling for position in a world of blades and clusters.
Distributed computing is here to stay, and for good reason. Historically, the smaller, cheaper computers made it possible to do things that were impossible before – graphical displays, cheaper local processing to take the load off the back end, a generally improved end-user experience. It goes on: PDA’s, phones and even MP3 players are the ultimate in ultraportable computing. And let’s not forget – how could we ever – the Internet, which is nothing more than some bright sparks agreeing a few protocols so that any compute device can talk to any other. On top of all of this, layered software has evolved to make the most of local, central compute power, and any tiers that might exist in between. Even the old fashioned, monolithic software packages are being broken into more manageable chunks, enabling a pick and mix approach to applications – theoretically anyway.
In practice, and in addition to the lip service paid to them good ol’ mainframe principles of performance and uptime, the distribution of software and hardware has been the cause of many new headaches. For example, no operating system ancient or modern was designed to handle thousands, or even millions, of simultaneous connections. A number of solutions exist to such problems, some (such as DNS round robin) built into the protocols themselves, while others are supplied by enterprising companies who recognise a need to be met when they see one. This includes the clustering techniques advocated by major operating system providers such as Microsoft, Sun and purveyors of Linux products; it covers the distribution mechanisms built into Web servers from the major providers (such as IBM and BEA). It also incorporates appliance companies (for these, read software companies who recognise they need to provide a straightforward, packaged solution). These companies make clever boxes that control the connections and the data flow, offloading some of the pain from the servers.
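To give a flavour of what these boxes actually do - a toy sketch of my own, not how F5 or anyone else implements it - the core idea is simply to look at each incoming request and hand it to the next server in an appropriate pool:

    # Toy sketch of connection distribution: round-robin across a pool of
    # back-end servers, with a simple content-based rule layered on top
    # (image requests go to a dedicated pool). Real appliances do this in
    # custom silicon at wire speed; this just illustrates the principle.
    import itertools

    app_servers = itertools.cycle(["app1:8080", "app2:8080", "app3:8080"])
    image_servers = itertools.cycle(["img1:8080", "img2:8080"])

    def choose_backend(request_path):
        # Inspect the request, then balance within the matching pool.
        if request_path.endswith((".gif", ".jpg", ".png")):
            return next(image_servers)
        return next(app_servers)

    for path in ["/home", "/logo.gif", "/basket", "/banner.jpg", "/checkout"]:
        print(path, "->", choose_backend(path))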
Different appliance companies take different tacks. For example, F5 Networks (and their competitors, Alteon and Foundry) supply a box that enables different IP packets to be sent in different directions based on their content. In doing so, an F5 solution can be used for load balancing, inspecting the packets then distributing them across servers in an appropriate manner (the box can even take a feed from the servers, to support its decision making). A second example is Redline Networks, which cares less about the content of the packets, and more about ensuring it is transported in the most efficient manner possible across the ether. Both solutions are valid: one of the most attractive elements of such appliances is that they are based on ASICs – Application-Specific Integrated Circuits, custom silicon that has been optimised for one purpose alone. There is no ideal computer architecture, and custom hardware will inevitably give better performance than general purpose hardware. As such, these appliances can achieve much higher throughput than server-based equivalents; they are also more cost-effective. It does seem a shame that another link needs to be added to the chain to make the whole chain work better, but it would appear unavoidable.
Or is it? Over in the research labs and universities, the boffins have come up with some new concepts that have recently made it into the mainstream. These go under the banner of “Grid” – essentially, clever software running on each computer that enables the whole bunch to be run as a single resource pool. The result is a highly – indeed hugely – scaleable resource pool, and vendors that have jumped on the Grid bandwagon have been quick to point out the successes. Trouble is, Grid found its niche with single applications that can run in a distributed manner. It was never really designed for multiple applications, and as a result can only ever be part of the answer.
Meanwhile, there has been another new buzzword doing the rounds. From the On-Demand stables at IBM and the Adaptive Infrastructure workshops at HP, we have “Virtualisation” – essentially, a mechanism for taking a bunch of resources and carving them up in the way applications want to see them. We can “virtualise” a zSeries mainframe, for example, by making it look like a few hundred virtual computers, each running Linux; the Unisys ES7000 allows us to do the same with both Windows and Linux, and allocate RAM on the fly. We can apply the same principle to a rack of storage, allocating, re-allocating and de-allocating on an as-needed basis without needing to stop and start the applications that depend on the space. To the system operator this is a powerful capability – it brings an additional level of control, and it also enables far better utilisation of resources than before. Let’s state this clearly: done right, it drives costs out of IT – music to the ears of the CIO who has enough to cope with on his ever-diminishing budget.
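A back-of-an-envelope illustration of the carving-up idea - mine, not any vendor’s implementation - is a fixed pool of memory being handed out to named virtual machines on the fly and reclaimed when they finish, with utilisation visible at every step:

    # Sketch of virtualisation as resource carving: a fixed pool of RAM is
    # allocated to named virtual machines on demand and released again,
    # without anything else in the pool being stopped or restarted.
    class ResourcePool:
        def __init__(self, total_mb):
            self.total_mb = total_mb
            self.allocations = {}

        def free_mb(self):
            return self.total_mb - sum(self.allocations.values())

        def allocate(self, name, mb):
            if mb > self.free_mb():
                raise RuntimeError("not enough free capacity for " + name)
            self.allocations[name] = mb

        def release(self, name):
            self.allocations.pop(name, None)

    pool = ResourcePool(total_mb=4096)
    pool.allocate("web-server", 1024)
    pool.allocate("database", 2048)
    print("free:", pool.free_mb())   # 1024 MB still unallocated
    pool.release("web-server")       # hand capacity back without a restart
    pool.allocate("batch-job", 1536)
    print("free:", pool.free_mb())   # 512 MB still unallocated

The point is not the bookkeeping but the principle: the applications see only “their” slice, while the operator sees one pool whose utilisation can be driven much higher.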
Virtualisation can be a major catalyst for consolidating the disparate servers and compute devices in a data centre, as it enables more tasks to be done with less equipment. With all the advances in technology, it is getting quite common to see equipment from different vendors all in the same rack, architected by the reseller or even the lead vendor, and delivered as a plug-and-play solution that is optimised for such things as availability and performance. As these offerings evolve, they will inevitably incorporate the requirements of the majority of customers, increasingly componentised and therefore inevitably cheaper. Already we have a SAN-in-a-rack solution – what’s to stop having an infrastructure in a rack?
Hang on a minute, let’s just work this one through. Suppose we have a pre-configured, off the shelf rack of equipment – clustered blades, load balancers, virtualisation software, storage and so on. Perhaps it wouldn’t fit in a single rack, so let’s have two racks. We could also install a database at the factory, and why not a content management system and a couple of enterprise applications, prepared in such a way that they could be used with as little intervention as possible. There’s a few other things we could throw in as well - web page serving and terminal services say, so that client computers wouldn’t require any reconfiguration. Then, suppose we didn’t like the look of the racks. We could add some sleek black (or even blue) doors, and add some windows for the status lights to flash through.
You know what would happen next, of course. The day after the quick, successful deployment, one of those smart-Alec, grey haired mainframe types would come along and glue some old tape reels to the doors for that retro look. He wouldn’t admit to it of course, but you’d know it was him by that smug look that said, “we’ve been here before.”
And he’d be right.
March 2004
03-07 – Now reading: Dead Air
Now reading: Dead Air
The trouble with Ian Banks…
… or even Iain Banks (so you know he’s Scottish from the very first thing you know about him, and you won’t be forgetting that, will you now), is that all of his main characters are the same. Sassy women, rock heroes, war heroines, journalists, players, they all possess an impeccable wisdom, wit and logic that is unswervably Banksian. Essentially, he writes to express his fantasies on the printed page. There’s nothing wrong with this – indeed, the results are to be applauded – but despite the stylistic changes, the clever uses of the language, the flashes back and forward, I can’t help wondering if Oor Iain is a bit of a comfort zone creature. To amalgamate a few central characters: he likes to live slightly dangerously, enjoys good sex and a few drugs occasionally, and likes to be the centre of a conversation. He values his independence, enjoys his music and feels decidedly uncomfortable that he is getting older, to the extent that he indulges in things that might incite the occasional snigger from those around him. All of which is probably about as far from the truth as you can get, but heck. It’s a Sunday.
03-18 – Stupid bloody, bloody trains
Stupid bloody, bloody trains
I’m at Swindon station, where I’ve been cordially told that there is no scheduled train from here to the next station along (ten minutes away) for two hours. The man at the ticket desk kindly informed me that there was no demand for the service - ironic, I thought, as I was demanding it - and even more bizarre given what he said counted as research, namely the number of people entering and leaving the station.
Now, given the fact there is no service to come in for, how could they possibly know - or is there something I’m missing here? Equally ironic is the reason why I went to Kemble rather than Swindon in the first place, that the car park is always too full by the time I get to the station to make use of it. The car park at Kemble itself was heaving, 300 cars I was told, busier and busier by the day, but still, no call for any more than a minimal service between the two stations. It just doesn’t stack up.
Integrated transport system. Pshaw.
April 2004
04-18 – Invisible Man
Invisible Man
It is the evening of what has been a fine, sunny yet breezy day, one of the first that could really call itself Spring. The sun is holding a steady position on the western horizon, still warm and pleasant enough to walk the dog, yet hinting at the slow descent into evening, towards sleep and the distant stresses of another day. All is well as I take standard, measured paces down the lane, each step predefined by countless repetitions during rain or shine. In one hand, an extending lead flicks this way and that, following the scents of late afternoon. The other fumbles with wires and buttons, untangling the earbuds of an MP3 player before attempting to press play. With the opening bars the volume is set, electronica replaced by a steady drum beat. My pace quickens imperceptibly, falling in line.
“The world’s gone mad…”
The music keeps gentle company, not too loud to drown the sounds of nature, loud enough for the nuances to come through, a soundtrack to the soundtrack. I turn a corner and head towards the usual field, its pathways well mapped, timed to perfection. At the gate I stop, hand reaching down to the leash before pausing momentarily.
“The invisible heart…”
No, I think as the music recedes, then quickens. Not the field, not today. The sun hovers fractionally lower in the sky as the decision makes itself and the music calls me on. Shout my name in public places, my lips mouth the words as my feet match the rhythm pace for pace. I hear the words, I say to myself, past the barn conversion and into the open space beyond the village, into the light of a late spring day. Close my eyes, I will walk stride for stride. I have become the invisible man and I follow every note and allusion as my path leads away, towards the fields and the trees. Leave me be.
“It fell through a hole in the corner…”
Ha – had me there. It’s only music, I think as I recognise the tugging on the lead and look down at the doleful spaniel eyes glancing back towards me. I have barely released the clasp before Cassie bounds over the drystone wall, sniffing for the recent memories of rabbits and wildfowl. It won’t be long before some startled bird is chased mercilessly across the stony field, its indignant croaks drowned by the yelps of the determined, yet ultimately hopeless hunter. I walk on, turning a corner and crossing into the fields myself, a new song to accompany me.
“I had this recurring dream…”
I walk, or try to walk, my feet lightening until they barely touch the ground, rising above the puddles and pieces of ancient brickwork. I lose myself in attics of treasures, the first signs of dusk hovering on the edges of my vision. I sway, involuntarily at first, then deliberately to cover my discomfort though there is not a soul to see or care. Unscared now, I fling my arms from side to side in a brief release of euphoria, singing snatches of song in tuneless abandon. Ploughed soil gives way to meadow as I come over the rise, the coarse grass still flattened by the winter.
“It’s always a struggle…”
There is no time to stumble, nor to prepare. Uncluttered vocals, floating on layers of harmony, render me drowsy and draw out my soul. Unprotected, I am hooked by gentle rhythms, once again fixing my pace and urging me on, taking the rest of my life away. What a wonderful, fantastic place, lush green stretching in every direction, plunging across and down before me. The music lays me down to drink in the vastness of oncoming night, the universe reclaiming its own as the first stars wink their encoded greetings. In a moment outside of real life, I drink in its majesty. I am nothing, I have nothing, my purpose long forgotten as I can do nothing but wonder. Somewhere inside the remnants of my inner being, even now dissolving into particles and casting away on the breeze, there surfaces a snatched memory of a song, “…my body has gone, but my eyes remain…”
“Forgive me if I stare.”
04-23 – Wireless Home
Wireless Home
Wireless Home – WMA11B
I heard a funny story a couple of months ago. A colleague of mine went to a presentation about a new piece of automation kit that enabled the remote control of various pieces of household equipment. “What’s really great,” waxed the presenter, “you can turn off the kitchen light when you are downstairs!” My friend couldn’t resist the opportunity to ask the obvious, yet all-too-infrequent question. “Excuse me,” he said, “why exactly would anyone want to do that?” In the pause that followed, it became obvious that the presenter had not rehearsed an answer to that question. What also became obvious was, if he needed to rehearse an answer, he already had a problem.
There have been plenty of examples over the last few years of techno-whiz devices that are going to change peoples’ lives. Indeed, these things go way back – we chortle over the adverts for convenience products that came out in the 1950’s, and guffaw at Morecambe and Wise’s automated house where the armchairs move around on rails, all the time ignoring the fact that we are all still suckers for such gadgetry. Like a lottery winner or a pop star, every now and then one such device really does have a major impact – the mobile phone, say, or the MP3 player, but none have thus far succeeded in changing the way we spend our lives at home.
It was thus with a troubled mind that I accepted the challenge of road testing the Linksys wireless media adaptor. Troubled, not least because I had been wooed by its capabilities and gadget value, without really asking myself, “why exactly would anyone want to do that?” When the box arrived some six months ago, I resolved to find out.
What’s this all about then? The WMA11B is one of Linksys’ product offerings for home networking. To make use of it, you first need to have a wireless home network. You’ll also need to be the kind of person who has ripped all of your CDs to your computer hard drive. Given this starting point, you may have experienced that slight twinge of resentment when you actually have to open a jewel case and get out a CD to play it in your stereo. If so, according to the literature, the WMA11B is for you.
The Linksys product is very simple in concept. It plugs into your TV and stereo equipment, and makes a wireless connection to your home computer. Once configured, you can browse your hard drive for music using your TV screen, and you can play it using the stereo. At the same time, the device can hunt for still images, and display them on the TV screen as a slideshow. Simple, and effective. But does it work, and is it useful?
I confess to having had a few configuration difficulties with the WMA11B. This wasn’t helped by the fact I already have a Buffalo wireless hub. When I configured the Linksys device with a network cable, everything worked perfectly, but I just could not get the wireless connection to play. I even resorted to calling the helpline number, which achieved only a “sorry, we haven’t tested that configuration” response from the technical support person. How fortunate that I had other routes into Linksys, and the problem was resolved – it was an SSID incompatibility, for anyone out there that cares.
Once up and running, the device did exactly what it said on the box. The user interface was indeed effective, but a little too simple in places – it was screen-based, a bit like a web site, but it also relied on a remote control for access, so complex operations were slow or nonexistent. There may be playlist functionality, but I really wouldn’t want to face the challenge! Similarly, the slideshows were dependent on the folder structure on the computer, which was not necessarily how I’d choose to display them. Operation via the TV screen caused the occasional glitch – I got lost in the menus and wasn’t able to reset my location without a “hard reset”, that is, turning the whole thing off and on again. Things went particularly wrong when trying to change the slideshow settings at the same time as playing music, but then how often would you want to do that?
Performance-wise, things worked remarkably well. The audio quality was perfectly acceptable to my uncultured ears, but I get the feeling that a surround sound home theatre setup might be a little excessive for MP3s played over a wireless connection through a small box. The images on the TV screen were a bonus – I wonder if this isn’t the killer app for people who hate the idea of a blank TV screen. Also, there were some unexpected benefits, for example the front room became instantly tidier without the pile of CDs haphazardly stacked next to the stereo.
There remains one question to be answered, and you know what it is: “why exactly would anyone want to do that?” To be absolutely honest, the jury is still out. There are certain scenarios in which I could see the Linksys box making a lot of sense, but to have it as a separate device seems to be more of a short-term expedient than a longer term reality. There are a number of other ways to achieve the same thing – for example, iPod users can plug their hallowed devices directly into the stereo, job done – as can laptop users for that matter.
From an industry perspective there does seem to be a place for converged technologies such as the WMA11B, but the fact is, however, nobody has the monopoly on how exactly these things will look, or how they will work together. Device integration is the norm from the hardware manufacturers, indeed, the chances are a company such as Linksys is about to launch a home DVD/CD/MP3 player with integrated wireless reception, and in the future this will probably come with a built in screen. In this way one of the major hurdles will already be overcome, namely getting the different devices to work together. The failure of Linksys technical support was a symptom of a wider problem, namely how impossible it is to guarantee the compatibility of wireless devices. Clearly, gadgets like this need to work out of the box or they won’t be accepted at all, and nor should they be.
Wireless networking, despite its promise and potential, is still at the early adopter phase in the home. Manufacturers face a difficult choice, as they have to test products in an immature market, without knowing exactly what people will find the most useful. In the meantime the continued integration of hardware platforms suggests a time when people buy an all-in-one device and use the features they need. Maybe by then, we’ll be able to get by with a few less remote controls.
May 2004
05-07 – The New Music Industry
The New Music Industry
The New Music Industry
Jon Collins, May 2004
The new music industry will look something like the following:
Banks need to get involved in the financing of albums. There is no reason why they don’t do this already, apart from understanding of the risk: this is an actuarial issue, not a financing issue. If money is no longer made from the sales of recorded music, what does that leave? There are concert tickets, but concerts are traditionally very hard to make money out of. Many bands go on tour at a loss, knowing that they will recoup the money through additional sales of records. Secondly, there is merchandise: while this seemingly has nothing to do with the music, the value of merchandising should not be underestimated. We already have all-you-can-eat schemes: they are called the radio. Commercial radio is supported by advertising; maybe this is the way that Web radio will go as well.
While music may appear to have a number of problems, the same is not so true of film, and there is one simple reason for this: the quality of film is not yet anywhere near the maximum quality that can be attained. The number of bits required to display it still far exceeds processing power and screen capabilities, and therefore film remains outside the domain of the destructive power of the download.
There are publishing opportunities for music in film; there are also the royalties to be generated from other people performing your songs.
There will always be a music industry, but the power of the music industry to exert an excessive hold over both the artists and the buying public is, hopefully, over.
Perhaps we should consider that there is a place for mass-market music. We truly do get what we deserve, or at least what we ask for, when we buy music that has taken little effort to construct. This is, however, no bad thing: in the same way that we enjoy both chocolate and the benefits of a nice walk, there is no reason to suggest that things have to be complicated in order to be worth having. However, just as with a nice walk, there is equally no reason why people should have to pay through the nose. That is an imaginary creation of the music industry.
The music distribution industry also provides a very valuable service. However, in the Internet age, it could be argued that there are better ways of shifting data than copying it onto millions of small shiny discs and trucking them around the world.
George Michael has said that he will no longer record music for money: everything that he records, he will be giving away for free on the Internet.
The music industry provides three sets of services. The first is recording services: this is a very useful function. The second is financing services, and the third is distribution services. There are also the parts of the industry that support the preparation and presentation of live music; however, these are usually seen as a separate entity, linking more into casinos and leisure activities than music recording.
The goal of the music industry is to control its channel, so it organises its three parts – financing, recording and distribution – in a way that will maximise its profits. This is a totally normal thing for a business to want to do; however, we have to ask ourselves how much art is suffering as a result. If, for example, the fine art industry were organised in the same way, then we would have nothing but Athena posters.
What the music industry needs is a total reorganisation. First, it should no longer have any role to play in the financing of music. It is this particular fact that is causing the majority of the problems that the music industry faces today. Because the music industry is the main investor in music, it expects to see the largest possible return on its investment. It is organised in such a way as to maximise this return, to the detriment of its other facilities, such as recording, artist management, and distribution.
Another role that the music industry plays is artist management.
Artist management should involve the management acting in the best interests of the artists. Unfortunately, when the management is acting on behalf of the labels, they are unlikely to provide the best service to the artist. It is up to the artist to find a manager who will act in the artist’s interests.
Within distribution, there is also promotion. For music to be promoted globally it needs a global organisation to do so. This is where a big player such as BMI can come in.
The downsides of the music industry are price inflation, artist exploitation and channel monopoly, but let’s not throw the baby out with the bathwater: it should be possible to create an industry which serves the needs of the artists and the public, delivers excellent product, and provides value to shareholders in the meantime.
Many artists are just as hard-line on the issues of copyright protection as the record labels, and understandably so: much of their revenue is derived from royalties on the recordings they have already made, and they would therefore find it hard to agree to giving up those royalties for a supposedly better world.
I had missed my flight. Fortunately they had been able to rebook me on a flight in five hours’ time, so that should give me plenty of time to get to the airport. Instead I chose to stop at the services and buy a large latte. Exactly why I did, I do not know, but a latte has to be one of those things that no person should be without for any length of time. It all sounds decidedly pretentious, but it is nectar, and its services to humanity should not be underestimated.
Whatever technologies exist, it is clear that working on an audio-only solution is not necessarily going to protect the assets that need to be protected. As long as digital audio exists, it will always be possible for it to be converted into a data stream which can then be shared freely. Perhaps an alternative is to work on ensuring that the audio data is only one part of what is being delivered. There are several ways of achieving this. The first is incorporating other multimedia facilities, which may be accessible online or, for example, via a DVD.
A second alternative may be learned from the world of television. This is to make file sharing a part of the industry itself: a subscription service to a music industry body costing, say, £25 a year, would be very attractive to a large number of the population. It is a case of getting the price points right and asking the question: would more subscribers take up the offer of a subscription than there are people buying CDs? If this were the case, then it would enable the music industry, in a similar way to the licence fee, to continue to function and to promote artists in whichever way it saw fit.
It would also free it from the shackles of the CD format. Given the advances in technology such as broadband, the days of copying data onto a disc in order to listen to it should be forever behind us.
A subscription model would also enable far greater control over the customers, because it would enable the industry to see what people were trying to download and when. Additional, value-added services could therefore be provided to meet those needs directly, rather than trying to second-guess and control the market. As a simple example, consider a new artist, say Franz Ferdinand, who is achieving a lot of interest. It would be possible to offer additional downloads to subscribers, for example access to otherwise inaccessible special-interest information; it is then up to the consumer to decide whether or not to take up that offer, or whether to access the information in a less organised manner. What we are talking about here is the nature of humanity to go for the easiest, most cost-effective solution. When people have a choice between attempting to find the same information elsewhere, cheaper, or just finding it very quickly there and then, the chances are they will go for the easiest option, even if it costs them a little more. The second human characteristic on display is the desire to have something for oneself: if there were exclusive merchandise and other exclusive, non-data-related mechanisms, for example concert tickets, then it would be possible to gain a large subscription audience. We therefore end up with a two-tier system in which the music of an artist is available globally for a single price, or even for free, and additional benefits are available at additional cost. The question is not whether or not this is ripping people off, but whether or not people are prepared to pay. In addition, artists can set up, and should be strongly encouraged to set up, their own relationships with such portals, such that they may also reap the same benefits at very little cost to themselves. What we are aiming for here is a win-win model, in which all sides have much to gain.
The success of the portals follows a similar profile to that of mobile phone companies. For mobile phone companies, the challenges are twofold. The first is to expand the number and quality of their subscribers. The second is to expand the number and profitability of the services that are provided. The parallels between the music industry and the mobile telephony industry, indeed the telecommunications industry in general, are most fascinating, and should be exploited as much as possible.
There were attempts, when AOL and Warner linked up for example, to see music as nothing more than content creation. This was an experiment that didn’t fully work, and the industry would do well to learn the lessons that AOL and Sony have had to learn. One of the reasons for this is that content and art are very different models. If art is seen as nothing more than content, then it will inevitably be reduced to the lowest common denominator. If content is seen as art, then it will always be too costly. Therefore the two need to be kept separate.
05-12 – The Price Of Music
The Price Of Music
The Price of Music
Jon Collins, 26 Apr. 04
Marillion (www.marillion.com), the UK rock band, yesterday achieved its first chart placement in the UK Top Ten for over seventeen years. Even as the applause dies down in the heartlands of the band’s fan base however, alarm bells continue to ring at the very hearts of the major music companies – EMI, Sony, Bertelsmann, AOL-Time Warner and Vivendi. Like other recent chart toppers John Otway and The Alarm, Marillion have no place in the strategic plans of the music industry. Just like the problems faced by the corporate behemoths, they are refusing to let the industry off the hook. “We never really went away,” says Steve Hogarth, vocalist with the band since 1988.
How did Marillion hit Number 7? Ironically, by employing the same tactics as the majors, with a little help from their friends. First, they invited the hard core fanbase to buy their next album a year in advance, at what was an inflated price by anyone’s standards. Unbelievably perhaps, over 13,000 copies were sold, providing the band with a marketing war chest that it has been exploiting to its best advantage. There was advertising and press work for the new single, and promoters and pluggers were brought in to ensure maximum airplay, just like a real record company.
It is near impossible to fix the charts these days, but it is recognised that money buys chart success. A carefully placed video on the right channel, subscription to radio services which just happen to result in airtime, exchange of discounted CDs for window space at a retail chain, all of these techniques are so ingrained they’re part of the culture, not to mention more kosher techniques such as billboard and TV advertising. According to Simon Napier-Bell, industry veteran and author of “Black Vinyl, White Powder”, there is recognition that getting into the charts is also a great way to sell records. The corporate approach has evolved towards managing this process so that the latest sounds can deliver the best return on investment.
The impact of the Internet on this seemingly perfect business model has been well documented, albeit inconclusively, but the majors cannot afford to wait and see. The bulk of downloads are by teenagers, so companies have turned their attention to the “grey pound”, older generations who would rather buy than rip. Ironically this goes against the grain: while older minds remain open to the new, they are equally content with the old.
Meanwhile, it is these, more mature bands that are taking their own initiative. The fans are key, thinks the Marillion front man. “If you can enable a dialogue with your fans, you’re in a position to move mountains,” says Steve, who doesn’t believe that Marillion has exclusive rights to this phenomenon. Indeed, there are plenty of other bands that have been nurturing fan bases of their own, from old timers The Stranglers to bands with a younger appeal, such as Thrice. Steve’s advice to new bands is uncompromising. “Instead of gigging round toilets for ten years trying to get a record deal, gig around toilets for ten years and ask people for their email addresses,” he says. “If what you’re doing strikes a chord, you’ll be financially better off while remaining pure and free to do what you want.”
What does this mean for the music industry? Steve sees the writing on the wall, “History will see it as a funny little anomaly that happened between 1950 and 2010,” he says. “While technology made it possible, advances in technology will also make it impossible.” He might have a point: while pressure has been put on file sharing technologies from Napster to Kazaa, it is difficult to see what protections can exist against the biggest file sharer of them all – email. “As soon as we can send an email and attach an entire album, music will become free,” says Steve.
Ironically, while the industry cannot afford to make significant changes without damaging its current business, musicians are in a far better position to test new ideas on the market even to the extent of beating the corporations at their own game. For Marillion, the chart position of number 7 has been accompanied with headline terms like “comeback”, and if that’s what the media has decided, then that’s what the market perceives. Meanwhile, the band can continue to write the music they want to write and as producer, publisher and retailer they see the majority of revenues rather than a paltry 5% royalty.
In the words of the Marillion song, “We get what we deserve”.
05-14 – You're gone...
You’re gone…
A good friend, Tom Vance, died two days ago.
http://www.qconline.com/archives/qco/sections.cgi?prcss=display&id=195038
The Web may be great at bringing people together, but the global village isn’t particularly good for mourning.
RIP Tom mate, one more star in the sky.
June 2004
06-04 – Walking the dog
Walking the dog
Walking the dog, across open fields with the sun breaking through the haze of morning. From the jukebox walkman (for a change - I usually like the peace), come the opening bars of Out Of This World. “This is your day,” sings h, and I wonder why at the same time as acknowledging the inspiration. Fields, haze, one foot in front of the other as the song unfolds and reaches its climax. I feel slightly moved, releasing a sigh just warm enough to be visible on the still dewy air. My head tips back, leading my still-waking eyes to focus on a B-52 bomber. As it lazily crosses the sky, no doubt heading back to Fairford after a mission, I am surprised I didn’t notice it before. I decide to change the soundtrack to something appropriate, but even as my fingers press the buttons, the huge aircraft shimmers into the mists of morning. Only the first line of the song is reached before the plane has disappeared altogether, the morning sun’s gentle rays obscuring its dark shape as I am left with the music and the words, “Can you make it… on your own?”
And I wonder why.
August 2004
08-05 – Sold on voice recognition
Sold on voice recognition
All those voice recognition experiments culminated in an article… here:
https://www.theregister.com/2004/08/03/voice_recognition/
I must say, I’m completely sold on voice recognition, for one simple reason: I’m doing more writing than I could otherwise do. This doesn’t mean I’m getting things done faster (which is also true, but less so, as there’s more to life than writing things down), but doing more things. In biz-speak this means productivity - in psychobabble it equates to reduced stress, so overall it’s a winner! A field with nobody else in it helps, of course :-)
08-31 – Waitrose All Change In The Supermarket
Waitrose All Change In The Supermarket
Waitrose All Change in the Supermarket
Jon Collins, 31 Aug. 04
All parents know that a supermarket is not the best place to take a child. Even the most un-material of infants lasts only half an hour before descending into cries of, “can I have one of those?” Other kids will decide that running up and down the aisles playing tag is a far more productive use of their time than waiting patiently by a shopping trolley. They’d be right, of course, but this information isn’t a great help when you’re trying to get out of the place as quickly as possible.
This time however, as I entered Waitrose with my eight-year-old daughter, the cry was different. “What are those, Daddy?” asked Sophie, pointing towards the rack of QuickCheck scanners. I explained, patiently, how customers could scan the bar codes on their own purchases as they put them in the trolley, so they wouldn’t have to queue up at the till. “Can we have a go?” Sophie implored. “No, we can’t,” I snapped, already preparing for what looked like becoming a long shopping trip. Almost immediately, however, I changed my mind. What kind of a hypocrite am I, I thought to myself. All day I write about experimenting with the wonders of technology, and I am denying a simple request from my own child to do the same.
Rather than walking straight past the matrix of devices, as I had so frequently done before, we went over and I attempted to lift a scanner from its bracket. It was locked in place but I had clearly upset the thing, as it commenced a reboot sequence. My eyes widened slightly as its small screen registered a DHCP request to obtain an IP address. I’m not sure what DHCP stands for but I do recognise it as an industry standard protocol, used for connecting desktops to servers and home PCs to the Internet. The messages on the LCD screen also triggered the nerdy side of my brain into action. “Alright,” I said, in mock exasperation. “Let’s see what we can do.”
We went over to the information desk, where a stack of leaflets invited customers to register for a free trial of the QuickCheck service. When I asked however, the kind people behind the counter were thrown into a turmoil: clearly they had not had such a request very often, if ever. I can only surmise that not many people are testing out the service, or perhaps the issue was the horrifying thought that I did not have a John Lewis account card. Eventually, with a little help from the supervisor and a bright lad from stores, we were in business. I strode away, clutching the hand scanner like the laser gun it had clearly been designed to resemble. Of course, my pleasure didn’t last long, as Sophie quickly wrenched the thing from my grasp.
The controls were simple and effective. Pushing one button enabled an item to be scanned, as it was put into the basket. A second button enabled the item to be removed from the list. A rolling total was given, and there was even room for the occasional special offer to pop up on screen. Shopping was a breeze, as we giggled our way around the store, scanning everything we could lay our hands on. When we had finished our shopping, we took it back to the counter where, sadly, it had to be run through the till in the normal way. This was only a test, after all. As we waited, I watched a silver-haired man paying for his own QuickCheck’ed groceries, using a machine not unlike a cashpoint. For him there were no queues, no wasted time, some might say it was shopping as it should be done.
Overall, I was sufficiently impressed to get in touch with Symbol Technologies (who make the scanners), who in turn put me in touch with Luke Holman, the Waitrose project manager of the QuickCheck scheme. Was it really a strategic technology, or just a gimmicky way to get one over on the competition? “We believe in it strongly, as an alternative way for customers to shop in a branch,” says Luke. So he should – according to Waitrose’s own figures, the QuickCheck scheme accounts for 23% of trade at the store’s top branches, and 15% of trade overall. Furthermore and unexpectedly, it is a powerful barrier to competition. “When a competitor opens a supermarket in the same area as a Waitrose store, sales may drop, but the QuickCheck figures remain static.”
Waitrose is in a unique position, as it is a smaller supermarket chain linked to a major retail chain with its own store card. Only John Lewis account holders can participate (though the offer to open an account is available to all), meaning that users of the scheme are already credit checked before they are able to use the system. Fraud is minimised, and even if the genteel clientèle of the store were likely to abuse the system, the occasional spot check keeps that in hand. Indeed, the word is that the takings are positively skewed, in that people seem to be paying for more than they buy, possibly through mis-scanning accidents that go uncorrected.
What of the future? Without getting too big-brotherish about it, QuickCheck and its linkage to the account card means that a customer’s buying patterns can be monitored in real time. At the moment, Waitrose is considering how to target offers to specific customers in the most appropriate fashion. There are potential data protection issues, but things are not at the stage where that is an issue. “We want to send individual offers to the device, then it’s up to the customer. If you want it, you can take it,” says Luke. Systems are currently operated locally, linked to the in-store server, but in the future there is scope to centralise the scheme and offer better information about products as they are scanned, for example linking to a customer’s allergy records, if requested. There are other opportunities, and quite clearly Waitrose is not resting on its laurels in investigating their potential.
Meanwhile, the rest of us would do well to take a leaf out of the Silver Scanners’ book. The initial registration process may be onerous, but once this is out of the way, here is a live example of a technology saving time and effort for real people. Unless chasing youngsters and standing in supermarket queues is your thing, of course.
October 2004
10-22 – Mobility is seasonal
Mobility is seasonal
Thought for the day:
Mobility is seasonal. The idea of sitting alongside the canal right now, in the mud and the gales, fills me with horror. Still, I can sit in the car and at least look at the canal. Which I’m doing.
Actually, I wonder if mobility is really about mobility at all. An animated discussion with Cisco a few months ago led to the phrase “mobility is ubiquity” - in other words, you can be as mobile as the technology lets you be. At the moment, for example, I am most certainly text-mobile - over my GPRS connection I can update my blog (first time in a while), download my email headers and generally function. I’m hardly graphics-mobile, however, and I’m certainly not media-mobile.
Text is good.
10-22 – Wine
Wine
Fact: I would rather drink new world wine than cheap French wine.
Fact: I would rather drink expensive French wine than, well, just about anything else…
November 2004
11-02 – New UK Rock Station
New UK Rock Station
From Blabbermouth, “A new British rock/metal radio station called Arfm will officially launch from the NEC in Birmingham, U.K. on Wednesday, November 24 at 12 noon (GMT) from the Sound Broadcast Exhibition. British rockers MAGNUM will be part of the launch program with Bob Catley and Al Burrow joining presenters Adrian Juste, Steve Price and Simon Gausden. Other guests include MARILLION, Jim Peterik and TARA’S SECRET.”
So there - I don’t listen to the radio that much myself, but I do like to know there is some decent competition for the media-driven pap stations.
11-26 – Nasty Upgrades
Nasty Upgrades
Ouch! Upgrades can be fatal!
Nasty, messy situation at the UK Dept of Work and Pensions this week, when an upgrade pilot decided to roll itself out to the majority of the 80,000 protected PCs. Here’s the Register article - it’s up on the Beeb and various other sites.
Now, apart from this being a cautionary tale just a week after the latest Microsoft announcements for its “management vision” at IT Forum in Copenhagen, this is a welcome reminder of where we should be focusing our security efforts. As we showed in a recent Reg reader study, most of the security issues are inside jobs, through system failure or user incompetence. The DWP case is a bit of both, if I understand correctly. As well as having systems that ensure protection against supposedly Ukrainian hackers (I say supposedly, as recent rumours are that said gangster ring is in fact operating out of the USA and covering its tracks by using said republic as a cover), it is far more important to protect people from themselves and from the flakiness of their own computer systems.
It seems astonishing that a modern computer environment can be impacted to this extent by its own upgrade routines - in this case, sourced from Microsoft. Sometimes, it’s just too easy for vendors to blame the bogeyman - I’m not sure they’ll get away with it this time.
2005
Posts from 2005.
May 2005
05-24 – Cryptonomicon
Cryptonomicon
Neal Stephenson really is a jolly good writer, isn’t he? He must be - he can churn out more words than aphids on a bed of roses (poor, but I’m working on it). He’s not so strong on names and places, which may be why most of his recent work has involved the same set of families through history, but we’ll give him the benefit of the doubt for that.
The first book of his that was brought to my attention was Cryptonomicon (thanks Fraser). Once I had got into the rhythm of the most exceeding levels of detail, both in context and in conversation, I discovered it was a very good wheeze altogether. I won’t spoil it but I will heartily recommend it.
Particularly for anyone with more than a passing interest in security. It recently occurred to me that the book was about security, at every level - there were the global security issues of a world war, requiring technical security mechanisms such as cryptography, alongside more physical measures such as whopping great warships. Many were the examples (consider poor Goto) of how one can move from one type of insecurity to another, out of the frying pan and through a series of unexpected fires, before one finally realises that security is an ideal and not a reality. Compare these to the present day tales of financial security and computer crime, the characters constructing ever more complex architectures to protect both their data and themselves. Finally, the characters themselves were either blissfully inside their own comfort zone, or otherwise.
Security, what a multilayered beast you are. Ignore my ramblings and read the book.
05-24 – eConstructing: Darwin and Punk Eek
eConstructing: Darwin and Punk Eek
It is now nearly one hundred and fifty years since Charles Darwin published “The Origin of Species.” Despite his own gripe that he wasn’t very capable as a writer, the work remains one of the outstanding achievements of science writing - lucid and accessible to scientist and general reader alike. Of course the book was far more than just a good read. It transformed biological science and evolutionary thinking, and it caused a furore in religious circles. However it was the accessibility of the volume that ensured it found its way onto the shelves and reading lists, and from there into the dinner table and drawing room conversations. Darwin did not invent evolutionary theories (this could be ascribed to the Greeks), however he most certainly substantiated, elucidated and popularised them in a fashion hitherto unseen.
Darwin was fundamentally committed to the central pillar of “The Origin,” namely that life forms mutate over extended periods of time, for example in response to environmental changes. The reason he gave was “that the greatest amount of life can be supported by great diversification of structure”: evolution is driven by the will of all life forms to succeed and grow, despite the best efforts of both the physical world and indeed other life forms to deny them the opportunity. Despite this Darwin was the first to acknowledge that much remained obscure. “I am convinced,” he wrote, “that Natural Selection has been the main but not exclusive means of modification.”
In its time, Darwin’s principles were the subject of much debate between the establishment and the arriviste, the theologian and the scientist. The concept of plants improving over millions of years was generally perceived as acceptable; however “the monkey theory,” that man evolved from - ugh! - animals, proved difficult for many. Not so today, which sees even this previously unthinkable tenet being accepted as central to the street wisdom of the agnostic West. Without demanding an understanding of the arguments that support it, Natural Selection has been accepted on face value by the general population, discounting of course the religious groups that have stuck to their guns since the theory was first espoused.
In 1972, Niles Eldredge and Stephen Jay Gould were graduate students at the American Museum of Natural History in New York, and they were having problems. The trouble was that their chosen species - trilobites and land snails - were showing precious little evidence of any evolution at all, even through the thousands to millions of years’ worth of strata where the species were found. In attempting to resolve this issue, they developed a theory of their own. The pair decided that speciation theory, which had for a while been the subject of discussion in the biological community, was also appropriate for understanding fossil records. Speciation theory is based on the principle that new species are much more likely to develop within an isolated group than the main population, as any mutations have a much better chance of dominating a small population than a larger one. Eldredge and Gould applied this principle to their analysis of the fossil records and came up with “The theory of punctuated equilibrium.” The central principle is that evolution occurs, in the main, very slowly and progressively, with the occasional evolutionary leap caused by small groups of life forms which have somehow become separated from the main population. This theory conveniently fitted the issues that they raised with regard to the near-standstill slowness of ‘normal’ evolution, not to mention the absence of fossil evidence concerning ‘transitional’ lifeforms - neither one species nor another. After all, if the subpopulation is so small, the statistical likelihood of discovering any fossils would be close to zero.
Exactly what the circumstances were that caused the sudden jumps in the fossil record, Gould and Eldredge did not know. It could be down to gradual, though relatively fast in geological terms, changes in landscape or weather. Alternatively the occasional cataclysmic event, such as the comet which is reputed to have wiped out the dinosaurs, could have led to the rapid evolutions of all life forms with survival permitted only to those which managed to equip themselves for the aftermath. Us lay people only have to look at the reputations of various closed communities for interbreeding, and their consequences, to see the potential for truth in the theory of punctuated equilibrium.
Eldredge and Gould published their findings in the book “Models in Paleobiology” and, in doing so, knocked the evolutionary establishment on its back. The theory was, to the pair, as much about the equilibrium (“Stasis is data,” wrote Gould) as it was about its punctuation. Despite this it didn’t go down at all well with the older folk of institutionalised Darwinism. “Evolution by Jerks,” or “Punk Eek” were terms characterising not only the concepts but also the hostility felt against the authors.
Interestingly, the progress of evolutionary theory has itself followed the patterns put forward by Gould and Eldredge. Both Darwin and the young turks of New York count as small groups working outside the confines of the mainstream. Both put forward theories that fundamentally changed the understanding of the time. Darwin, in particular, created the right to question what had gone before, however conservatism was quick to set in once again. Gould and Eldredge found themselves going up against the accepted beliefs and practices of the palaeontological, Darwinist establishment. It seems that the right to question accepted beliefs has often to be fought for, with the battles won and lost on grounds of human nature and not of science or fact.
05-24 – eConstructing: From Vitruvius to Zachman
eConstructing: From Vitruvius to Zachman
“Architecture is frozen music” – Goethe
“If you know the notes to sing, you can sing most anything” – Maria von Trapp
In the fourteenth and fifteenth centuries the Black Death swept across the European continent. As it abated it left behind a swelling population and upsurge of demand for goods and services. The following period of regrowth and renewed prosperity very quickly came to be known as the Renaissance or ‘rebirth’. Whole populations were inspired to find a more civilised existence than the Medieval times they left behind, leading to a surge of interest in ‘higher things’ such as science, art and architecture. Many great ideas and inventions came out of the renaissance, such as Gutenberg’s printing press, which were to transform the way that business was conducted and, perhaps more importantly, that information was communicated. Indeed, where would Europe’s revolutions have been in the eighteenth and nineteenth centuries, without the ability to produce and distribute pamphlets to a literate audience?
The rebirth of the arts led to a resurgence of interest in the ‘great civilisations’, namely the Roman and Greek periods. Italian architects such as Donatello and Brunelleschi used to pick around the remains of ruined Roman buildings, seeking inspiration for their neoclassical designs. Shortly before 1450 came a breakthrough with the rediscovery of the treatise De architectura libri decem, or “Ten books on architecture” by Marcus Vitruvius Pollio. As an artillery engineer, Vitruvius served under both Julius Caesar and his successor, Augustus. Some time around the birth of Christ, when Vitruvius was still involved in building works, he documented the principles of both Roman and Greek architecture, so as “to deliver down to posterity, as a memorial, some account of these your [Caesar’s] magnificent works.” The result was a ten-volume manual covering in depth the architectural workings of all that he saw around him, both military and civilian.
Leon Battista Alberti, one of the unsung heroes of the Italian Renaissance, translated Vitruvius’ treatise and added his own architectural ideas. Following its translation the book was one of the most influential works of its time, being widely used both in Italy and outside. Since the earliest times, architecture has been seen as a subject worth taking seriously.
05-24 – eConstructing: Life on Mars
eConstructing: Life on Mars
In August 1996, NASA issued a press release that suggested evidence had been found to indicate life on Mars. Meteorite ALH84001, it was discovered, showed several mineral features “characteristic of biological activity” and, more importantly, some possible microscopic fossils of primitive micro-organisms. These ultra-thin, worm-like ‘fossils’ appeared to be segmented, but they were about one five-thousandth of the size normally associated with such bacteria.
Skeptics were quick to jump on the claims. As no known life forms on this planet were of the size of that found in the meteorite, they said, it was therefore very unlikely that the findings were an indication of ‘life’, fossilised or otherwise. It was also ironic that the staggering claims came at a time when funding for Mars projects was being sought, against a background of an American public which was looking increasingly unsure of whether their taxes were being wisely spent in space.
Today, the jury remains out on the issue. Interestingly, in 1999 organisms only 20 nanometers across were discovered in rock samples taken from several miles under the seabed off Western Australia. These were found to grow when subjected to more normal conditions and given some food. Despite this being a boost to Mars-life theorists, the fact remains that issues such as ‘Life on Mars’ are, today, far more about speculation than concrete evidence. It could be argued that the existence of life somewhere in the Universe other than our own Mother Earth is an inevitable truth, purely by extrapolating the theory of evolution to other planets. If, eventually, solid evidence is found, it is likely that we are in for another shake-up of our science, religious beliefs and maybe even our social structures. Darwin and Marx would be proud.
05-24 – eConstructing: Of Evolution and Revolution
eConstructing: Of Evolution and Revolution
“An elderly lady attended a public lecture given by an astrophysicist on how the Earth goes around the Sun and how the Sun circles about with countless other stars in the Milky Way. During the question and answer session, the woman stood up and told the distinguished scientist that his lecture was nonsense, that the Earth is a flat disk supported on the back of an enormous tortoise. The scientist tried to outwit the lady by asking, ‘Well, my dear, what supports the tortoise?’ To which she replied, ‘You’re a very clever young man, but not clever enough. It’s turtles all the way down!’” – “A Brief History of Time”, Stephen W. Hawking
As far as Darwin was concerned The Origin of Species was never meant to be the end of the story. In the introduction Darwin describes how he had been urged by his publishers to produce an Abstract, “as my health is far from strong.” Darwin needn’t have worried about his health, as he lived for a good twenty years after the publication date. However the completed sketch, in which Darwin planned to demonstrate fully how he arrived at his conclusions, was never to materialise. The reader is assured, however, that the Origin was based on at least twenty years’ scholarship and field work.
Another publication of the same period was also driven by the desires of its publishers. This time, the author was a thirty-year-old political journalist and social philosopher, required to produce a document in time to coincide with one of the many uprisings of the period. The publication, a pamphlet whose first edition numbered only 23 pages, was rushed out in less than six weeks with little time for reflection or review. The year was 1848, the pamphlet was entitled The Communist Manifesto and the author was Karl Marx.
Although the initial print run of the Manifesto went largely unnoticed by those involved in the revolution in Germany in 1848, it has gone on to become one of the most widely read texts ever published. As Darwinism challenged religious beliefs in its day, so Marxism has gone on to become to all intents and purposes a religion with the Manifesto as its holy book. According to historian AJP Taylor, Karl Marx himself claimed that Darwin had done for biology what he had done for social sciences.
Despite the certainty with which both Darwin and Marx held their beliefs, neither ever claimed to have invented something entirely new. The main achievement of both can be explained in the timelessness of each of the texts. Each remains as accessible today as it was on the day of publication.
05-24 – Marillion Bass Museum 17 November 2000
Marillion Bass Museum 17 November 2000
Seeking inspiration. Excuse me while I switch on the lava lamp. There.
This is a tale of giving colour to somebody who has only ever seen in black and white.
Hello. You don’t know me, and well, there isn’t much to know. Am I a Marillion fan? Hell, no. I wouldn’t stick out my neck that far for anything or anyone, I’d only get labelled, heckled, discouraged. This fan stuff is for geeks, saddos without lives, nerdy types who inhabit chat rooms and conventions as a poor substitute for reality. Trekkies, City supporters, Tolkienites, computer gamers and band followers, they’re all the same, right?
Wrong.
Me, then. I got into Marillion when a mate of mine (Hugh, if you’re out there, give us a wave) came into school one Monday morning with a triumphant look in his eyes and a cassette tape in his hand. We were - what - sixteen at the time? Anyway, the tape. It was Tommy Vance, the Friday Rock Show on Radio One (which I regularly missed, my mind not being the sharpest in the pack), extolling the virtues of this new band. He played Market Square Heroes (which I quite liked), Three Boats Down From The Candy (which, sorry, was not the most impressive debut) and Forgotten Sons, which blew my already challenged mind. I taped the album when it came out (Bought it? You’re joking, aren’t you? What kind of a rich schoolboy do you think I was?), and the rest is eighteen years of consistently solid listening history interspersed with a couple of concert attendances.
There you have it.
Then, in 1999, I joined the Marillion fan club, The Web. Why? Had I finally got round to it/wanted (belatedly) to be on the cover of Marillion.com/been fed up too often with only hearing about tours after the event/felt an inner sense of yearning that could not be met by conventional means? Who can say. Maybe I just fancied something new. Having made the decision, however, I was determined that the passivity would be resigned to the past! Watch out, Web world, Here I come!
So I lurked.
I hid. I read the news, followed the mailing lists, listened in to the online chat. So much for the great arrival. This went on for a year or so, until, somehow, I managed to get tickets for an exclusive gig at the Bass museum. I remember that, at the time, it was very important for me to keep dialling, redialling, tri-dialling on two land lines and a mobile phone. Try that one, Anneka. All went quiet for a few weeks, and then the day came. I found myself curiously at peace as I drove up from Gloucestershire, listening to Made Again to get me into the “live” mood. I expected heavy traffic but found none, I was sure I would get lost but the museum was signposted from the outskirts of Burton. No petrol problems, no floods, all was serene as I found myself in the Bass Museum car park. The entrance beckoned so I wandered inside and asked the first person I saw whether I could get a drink. Another chap escorted me down alleyways and through a room one corner of which was packed full of keyboards, drum kits and other musical paraphernalia. Steve Hogarth was sitting in the middle of it all, picking out some notes as a couple of guys snapped pictures. I kept walking, passing Mark Kelly in the corridor. Then a door shut behind me and I was in the bar. I ordered a drink and slumped down in a chair, murmuring feeble comments like “hang on, wasn’t that…”
An hour later I was queuing, meeting fans who had attended over a hundred events, who discussed each track on every album like they would talk about old friends. “How many concerts had I attended?” I was asked. “Well, three, or is that two…” I stuttered. Maybe I should have left then, while there was still time, I should have returned to my CD collection and stadium-effect stereo system, listening to it loud with the lights off, a cup of Horlicks and a comfy cushion to ease my back. But then the gate opened and we moved through. As I glanced back, I could make out a drizzle-soaked sign in the dark. “No Exit,” it said. I could only go on.
We went in, we ate and we drank, we listened to music. Sure, occasional mistakes were made, intros were missed and timings misjudged but it all added to the reality of it all, the honesty of it all. Here were a bunch of chaps making music together as amicably and as easily as if it had been in one of their front rooms, or mine. The set list was old, new and borrowed songs and arrangements, a couple of jams and the requisite number of encores – only this seemed out of place as it reminded me that this was a concert, these were musicians and I was in the audience. It was over all too soon, eighteen songs then the lights went up and it could have been time to leave, but I didn’t want to go. It was as if a curtain had been removed from my eyes as the music played, unpicked and unravelled. My drowse had gone, I was waking.
I was staggered by the simplicity of it all. Here were some people, making music. Here were some others, enjoying the music. The band members were being fed by the applause, grinning and delighting in providing its source. The audience delighted in the music, in the closeness, the reality, the honesty. There were no hidden devices, smoke or mirrors, this was people, interacting and having a ball.
I was sold. I walked into the bar with open eyes, ready for action. I bought a drink and turned to see Mark Kelly, looking much the same as he had looked not five minutes before. Well of course he did, he was Mark Kelly. I thrust a scrap of paper and a pen at him. Pete Trewavas was leaning against a table, chatting. Thrust. I started to ask a question, and then Ian Mosley appeared at the doorway. Sorry Pete - thrust. What was I doing – interrupting one idol to snatch two seconds of handwriting from another? I apologised, then out came Steve Rothery. Thrust. H. Thrust. As I did so I realised what I was doing. Just a bunch of guys, eh? People interacting, eh? Nah. This was idolatry, pure and simple. I stopped and asked H a question - advice for a budding singer? He answered. I answered. We talked about Christmas carols, and as we did things returned to how they should be. A queue of thrusters appeared around me so I backed away, gracefully, returning to Pete and determined to have a real conversation.
Which I did. After a while I went over to Mark and discussed life, British Home Stores and everything for what could have appeared an unseemly long time, but it wasn’t. It was just a chat with a guy in a bar. I’d love to meet him again, and maybe one day I will.
The towels were on the pumps and the organisers were starting to look shifty so I figured it was time to leave. As I drove home I listened to Made Again for a while, only this time it was different - as I heard the keyboards, I thought, “that’s Mark,” and each guitar solo was played by a hand I had shaken not an hour before. Eventually I turned the music off and left myself to my memories of a remarkable evening.
I’ve missed out a few bits and bobs here and there. There was the trip to the States which clashed with the museum gig, but which was cancelled at the last minute, there was the excellent company of Dave, Max and the many others, there was the speed camera flashing on the way out of Burton-on-Trent, but I have bored you enough. At the Bass Museum I was reminded that music was not about CDs and covers, merchandise and marketing, corporations and egos. Even with a band as renowned and as popular as Marillion it could still be a deeply personal experience for all parties. Maybe it was all put on, but I have heard enough opinions of the band and its following to convince me otherwise. I sensed a feeling of community, something which (it is claimed) is so sadly lacking “in this day and age.” Am I a Marillion fan? Who cares. Will I be back the next time? You bet.
05-24 – Prochain Pipeline and the Theory of Constraints
Prochain Pipeline and the Theory of Constraints
This article originally appeared in Project Management Today, in February 2003. (2009 update - the magazine site seems to be defunct so I’ve changed the link).
Summary:
“Jon Collins reviews the ProChain Pipeline product, an add-on to Microsoft Project that supports the principles of the Theory of Constraints. He describes the tool and explains how it can be used in practice.”
For the PDF, click here.
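(For anyone unfamiliar with the critical chain approach that ProChain supports, here is a minimal sketch of the general buffer arithmetic such tools automate. It illustrates the common Theory of Constraints heuristic of pooling task safety into a single project buffer sized at roughly half the chain; it is not ProChain’s actual algorithm, and the task durations are invented for the example.)

# Generic critical chain buffer arithmetic (illustrative only, not ProChain's method).
# Task durations are "aggressive" estimates in days; the safety margin is pooled
# into a single project buffer rather than padded into every individual task.

aggressive_estimates_days = [5, 8, 3, 10, 6]   # invented example figures

chain_length = sum(aggressive_estimates_days)
project_buffer = chain_length / 2              # common 50% heuristic from critical chain planning

print(f"Critical chain: {chain_length} days")
print(f"Project buffer: {project_buffer:.1f} days")
print(f"Scheduled end:  {chain_length + project_buffer:.1f} days from start")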
05-24 – The Art of Heroic Failure
The Art of Heroic Failure
“I know you all, and will awhile uphold the unyoked humour of your idleness: yet herein will I imitate the sun.”
“So, when this loose behaviour I throw off and pay the debt I never promised, by how much better than my word I am, by so much shall I falsify men’s hopes; and like bright metal on a sullen ground, my reformation, glittering o’er my fault, shall show more goodly and attract more eyes than that which hath no foil to set it off.”
“I’ll so offend, to make offence a skill; redeeming time when men think least I will.” – Prince Hal in Henry IV, Act 1, Scene 2
The man who, as King Henry V, was later to lead his country to an unexpected victory at Agincourt was (according to Shakespeare) the kind of kid that puts parents to shame. Young prince Hal is ridiculed as a maverick and a layabout, the “sword-and-buckler Prince of Wales,” who would be better to be “poison’d with a pot of ale” than become King of all England. In truth and by his own admittance, Hal wined, dined and womanised, he fraternised with vagabonds and cavorted with thieves.
Ultimately, it is the latter group that suffered. Hal knew that, when the time came for him to adopt the kingly mantle, he would indeed throw off his loose behaviour “and pay the debt I never promised,” namely to lead his country in peacetime and in war. All this, Hal conducted with barely a backward glance at the life he left behind him, much to the chagrin and eventual doom of his erstwhile compatriots. Sir John Falstaff, it is said, was never the same following the public disgrace he received at the hands of the newly crowned king. “I banish thee,” said King Henry, “as I have done the rest of my misleaders.” The comfort he found in the bottle was ill designed to prevent the morosity which eventually, it is said, led to his death of a broken heart. “Falstaff he is dead,” said his mourners, for “the fuel is gone that maintained that fire.”
Hal showed himself to be smarter than either his friends, or his enemies, gave him credit for. “Herein I will imitate the sun,” he announced. He played while playtime was an option, but once his calling came he put such frivolity behind him. One day he was a boy, the next he was king. Would not all parents allow some flexibility in their dealings with adolescent offspring, could they only be sure that, when the time came, all trappings of youth would be firmly placed in the past! Come on, it is only a play, after all.
05-24 – The revolution that never was
The revolution that never was
“I refuse to prove that I exist,” says God, “for proof denies faith, and without faith, I am nothing.” “Oh,” says man, “but the Babel Fish is a dead give-away, isn’t it? It proves You exist, and so therefore You don’t. Q.E.D.” “Oh, I hadn’t thought of that,” says God, who promptly vanishes in a puff of logic. – Douglas Adams, The Hitchhiker’s Guide to the Galaxy
Modern religion and ancient history make uncomfortable bedfellows. Like a senile married couple, they shuffle alongside each other with only the vaguest of recollections about why they remain together. On the one hand, history is an unavoidable and beneficial part of religion, serving to set the context as well as to provide a factual basis for beliefs. On the other, some modern religions seem to prefer to leave things to faith alone, as though historical fact might somehow damage the assumptions upon which they are founded.
In late 1947, just east of Jerusalem in an unpopulated area of what is now Israel, a Bedouin goat-herd made a most remarkable discovery. Whilst searching for a lost goat, the boy, known as Muhammad adh-Dhib (“the Wolf”), came across a cave which later proved to contain no less than forty earthenware jars, each stuffed with ancient scrolls in Hebrew and Aramaic. Thus began a tale of intrigue of which the ending has still, over fifty years later, to be played out.
The Dead Sea Scrolls, as they are commonly called, have been stolen, traded on the black market and advertised in the small ads of The Wall Street Journal. They have witnessed, and been affected by the many wars which have involved Israel since 1948. Most interestingly, the scrolls appear to have been subject to a bizarre cover-up by the institution that is the Catholic Church. From the earliest days following the discovery, when a team of Catholic priests took control of the scrolls, there are repeated examples of how the team sought to reveal a minimum about their content and to play down their importance, much to the chagrin of other scholars and the wider public.
It was not until the early nineties - up to fifty years following their discovery - that copies of some of the more controversial scrolls were released to a wider audience. Why the Church wanted to keep a cap on the bubbling volcano that is the Dead Sea Scrolls remains unknown, largely because a number of scrolls in their possession remain unpublished. It is thought that they contain material about a Jewish sect known as the Essenes, whose beliefs seem to echo those of Jesus of Nazareth: however the Essene writings may predate Jesus’ time on earth by up to one hundred years. If it were true, this would cast doubt over the fundamental belief of Christianity, that the Christian teaching espoused in the New Testament originated with Jesus. Heady stuff - it is no wonder, then, that the religious institutions were (and probably remain) worried. Such facts could rock the faith of the whole Christian Church and cause a revolution in its ranks. As yet, however, it would appear that the attempts to play down and delay the evidence have paid off. The controversial ideas that could have been spawned by the two-thousand year old texts have not resulted in anything other than frustrated scholars and a still-ignorant public.
Incidentally, technology has not been able to resist joining the fray. There are over 13,000 mentions of the Dead Sea Scrolls on the Internet, referencing specialist sites, exhibitions, translations, academic publications and discussion groups. To be sure there will also be a fair handful of cranks and off-the-wall sites, but they are part of the rich tapestry of online humanity. The Internet is bringing the scrolls to the widest possible audience and this can only be a good thing.
Overall, the story behind the scrolls could be subtitled “the revolution that never was.” Unlike the writings of Darwin and Marx, and despite the fact that their content ranks just as highly in terms of the impact they might have had, the publication of the scrolls is now unlikely to have any profound effect on humanity. The institutional conservatism of organised religion, this time, has quashed any potential there may have been for a revolution in either thinking or teaching.
05-25 – All About: Broadband Communications
All About: Broadband Communications
Broadband is as much a state of mind as a technology, defined in terms of what it enables rather than what it is: the transmission of sufficient quantities of information to enable such applications as multimedia streaming (think using a computer as an interactive TV) or video telephones. Broadband technologies have been around for a long time, but they have been too expensive in the past for the small business or the home user. This is currently changing with the arrival of Digital Subscriber Loop (DSL), which enables cheap, high-speed communications across the last mile of wire between the telephone exchange and the socket on the wall.
Here we look at the DSL family of protocols and how they fit together. Business benefits are tempered with the current difficulties in rolling out DSL in various countries. We look at these and other deployment issues and consider where broadband is heading in the future.
What is Broadband?
OK, we lied. We have seen several definitions of broadband and its poor cousin, narrowband:
1. Broadband corresponds to multiple voice channels in a telecommunications circuit, whereas narrowband corresponds to only one.
2. Broadband corresponds to a data rate of over 1Mbps.
3. Broadband constitutes sufficient bandwidth to permit the transmission of broadband services, i.e. streamed multimedia, videoconferencing and the like.
The third definition may appear a little vacuous, but it is the one we favour because it concentrates on the end rather than the means. It also takes into account the use of the term broadband in other spheres such as the Mobile Internet (for example, the 3G broadband protocol UMTS, which has an initial maximum of 384kbps).
Broadband communications have existed for years, at least for telecommunications providers (telcos) and the large corporations that can afford the extortionate costs. What has changed more recently is the development of a range of protocols known as Digital Subscriber Loop (DSL). The xDSL range (x stands for whatever) enables transmission of very high data rates across the so-called last mile - the pairs of wires that run from local telephone exchanges to homes and offices. Given the fact that most data traffic will be to or from the Internet, we have another definition of broadband:
Broadband constitutes affordable, accessible bandwidth for the transmission of Internet-based broadband services via xDSL, without needing major modifications to existing infrastructure. xDSL is a range of protocols, each of which is more applicable to certain needs. Most smaller organisations and home users will find Asymmetric DSL (ADSL) the most appropriate. ADSL is asymmetric in that the up channel is smaller than the 512Kbps down channel, a model which fits the standard usage pattern for the Internet, in which more information is generally received than sent. A further strength of ADSL is that it is always on - there is no need to dial up to the Internet.
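To give a feel for what those numbers mean in practice, here is a rough back-of-the-envelope sketch in Python. The 512Kbps figure is the one quoted above; the 56Kbps dial-up rate and the 5MB file size are purely illustrative assumptions.

```python
# Rough illustration of what a 512 Kbps ADSL down channel means in practice.
# The 512 Kbps figure is quoted above; the 56 Kbps dial-up rate and the 5 MB
# file size are illustrative assumptions - real-world throughput will be lower.

def transfer_seconds(size_megabytes: float, rate_kbps: float) -> float:
    """Time to move a file of the given size at the given line rate."""
    bits = size_megabytes * 8 * 1024 * 1024  # megabytes -> bits
    return bits / (rate_kbps * 1000)         # Kbps -> bits per second

FILE_MB = 5.0
print(f"Dial-up (56 Kbps): {transfer_seconds(FILE_MB, 56):.0f} seconds")
print(f"ADSL (512 Kbps):   {transfer_seconds(FILE_MB, 512):.0f} seconds")
```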
Business Benefits of Broadband
The technological benefit of xDSL-based broadband may be summarised as low-cost, high-speed, always-on access to the Internet. Given how much of a part today's Internet plays in the lives of most businesses, the positive impacts are linked to the business being able to do its Web-based dealings more cheaply and efficiently.
In addition, broadband access opens the door to a number of new ways to use the Internet for the business. For example:
- If it has the right skills in-house, it may be more appropriate for the business to host its own information rather than relying on third parties such as ISPs.
- Conversely, the increased bandwidth opens the door for the business to make better use of externally hosted applications.
It is also appropriate to mention home use of ADSL. A high-bandwidth connection from the house to the Internet eases the possibility of teleworking (working from home), as corporate systems can be accessed as if the home user were in the office.
Deploying Broadband in the Corporate Environment
So is broadband deployment as simple as making a call to a service provider, and asking them to come and fit a box on the wall? Well, largely, yes.
Issues with Broadband
There are plenty of things wrong with current broadband, not least its availability. Our definition is from the point of view of the end-user and not the telco, which must roll out ADSL equipment to all its local exchanges. Some telcos (for example, British Telecom in the UK) have a reputation for heel-dragging and for playing the system to prevent other providers from installing their own facilities.
ADSL (and cable, for that matter) also has a reputation for non-optimal performance. The down bandwidth is a maximum that is reduced as more users access the facilities of the local exchange.
Last but not least is security. ADSL connections are always-on in two directions - if you can get out, others can get in. There is a real risk that your computer will be attacked, hacked or otherwise misused (for example, as a base from which to send spam e-mail).
The Future of Broadband
The first next step for broadband is the completion of its roll-out - this looks likely to take a good couple of years, particularly outside metropolitan areas. Broadband will be remembered not for what it is - after all, to most people it is no more than a high-bandwidth socket on the wall - but for what it enables.
05-25 – All About: Component Provisioning
All About: Component Provisioning
Component Provisioning has its origins in software reuse, which was one of the original perceived benefits of Object Orientation (OO) and Component Based Development (CBD). Time has shown that reuse does not come for free - to be successful, reuse needs to be taken into account across the entire development process. Necessary mechanisms include:
- A comprehensive, scalable and adaptable application architecture to support component deployment.
- Agreed definitions of the structure, interfaces and content of a component.
- Team structures that recognize the difference between component development and solution development.
- Appropriate management and reuse processes, including component supply, management and consumption, librarianship and certification mechanisms.
The implementation of reuse facilities introduces the possibility not only of making existing components available to other projects, but also of buying in components from outside. Reasons of economics, not to mention the use of component-ready application servers and the evolution of Web Services, are driving the trend towards component provisioning. Gartner Group estimates that component purchases will increase by 40% per year, to $2.7 billion by 2004. Whether this estimate survives the downturn in IT spending is questionable; however, there is undoubted interest in component provisioning from all quarters of IT.
The golden rules of reuse become even more important, in particular the need for standardized, defined component interfaces. One such standard is the Reusable Component Specification from Component Source. This covers XML definitions for:
- Technology - the platforms to which the component conforms
- Functionality - the interfaces and their constraints
- Distribution - how the component is packaged, distributed and deployed
- Commerce - the licensing, contractual information and product reviews
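Purely by way of illustration - the element and attribute names below are invented, not taken from the Reusable Component Specification itself - a descriptor covering those four areas could be read with a few lines of Python:

```python
# Hypothetical component descriptor covering the four areas listed above
# (technology, functionality, distribution, commerce). The element names are
# invented for illustration; the real specification defines its own schema.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<component name="CreditCheck" version="1.2">
  <technology platform="J2EE"/>
  <functionality interface="ICreditCheck" invocation="synchronous"/>
  <distribution package="creditcheck.jar" deployment="ejb-jar"/>
  <commerce licence="per-server" price="1500" currency="USD"/>
</component>
"""

root = ET.fromstring(DESCRIPTOR)
print(root.get("name"), root.get("version"))
for section in root:
    print(f"  {section.tag}: {section.attrib}")
```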
An additional pre-requisite is the existence of a solid, industry-standard infrastructure, both for processing and for data transfers. According to the CBDi Forum, such facilities have only become available this year, with the arrival of Windows 2000 for COM and J2EE for Java-based components. XML is becoming the de facto standard for data interchange, further greasing the wheels of component provisioning.
According to the CBDi Forum, components available on the open market should be:
- Widely available
- Adaptable to many situations
- Easy to integrate
- Supporting de facto standards
- Environment neutral
- Process-independent
- Autonomous
- Secure
- Of appropriate granularity
- Easy to manage
- Cost-effective
Component vendors include component resellers (such as Component Source), package vendors (SAP, Siebel), ISVs and other software companies (Microsoft, IBM San Francisco). Package vendors are already componentising their offerings, and it is only a matter of time before they start to sell individual components. Component provisioning benefits package vendors in two ways: not only can vendors sell individual components, but they can also incorporate other components to augment their own applications. This results in best-of-breed applications that are better able to meet requirements and that offer more flexibility for the future.
As the move towards more off-the-shelf components continues, the need for standardized architectures grows, such that the selection of a component has minimal or no impact on the architecture. It is not just a case of ensuring application elements can communicate at a technical level; they must also work together at a business level. One such architecture is the Business Object Component Architecture (BOCA), an OMG initiative. The Web services architecture, in which applications are distributed across the Internet, is a new paradigm that brings architectural constraints of its own. These are still being determined, but are likely to involve components that are deployed as Web services, such that they can be discovered, licensed and accessed from across the Web.
Finally, provision of components requires new types of procurement and delivery channels. A variety of licensing models are being investigated including once-only payments, pay-per-use, monthly subscriptions and risk/benefits-sharing.
Where is component provisioning going in the future? Developments to date would indicate that once component provisioning has broadened outwards, it will move upwards into the domain of business process modelling. B2B information flows are already being defined and ratified under the banners of XML.org and Biztalk.org; the producers of such facilities have found it difficult to supply B2B without including a capacity for recognising and deploying business processes. Component provisioning may well end up being faced with the same challenges.
05-25 – All about: Content Management
All about: Content Management
Content Management has been said to be the art of making information accessible. Like all application types, it has gone under several names, including Information Management and Document Management. In these Web-driven days it is concerned mostly with how information is presented to users of the Internet, either externally or internally to the corporation. The bottom line is, if you have document-based information (and there are few companies that do not), you need a strategy to deal with it and there may be applications that can help.
Content Management has hefty overlaps with portals. These are best defined by example: Yahoo! is a portal, as is FT.com. Portals collate and present information and services in a suitable form for their user base - they can be generic, industry-specific or focused on a particular topic. A particular form of portal is the Enterprise Portal - one which provides corporate employees with access to all the information and services that they need to get the job done.
What is Content Management?
Content Management owes its parentage to two converging technology areas: document management and the Web. The latter needs no introduction; as for document management, it is fair to say that it is a well-mined seam for those that know it, and a minefield for those that don’t. To document management we owe one principle, namely:
Everything is a document
This principle is central to understanding content management. Put it this way: every form of data, from an email or a spreadsheet to an audio file or a banking transaction, can be considered as a document. This principle becomes even more important when we take into account something else inherited from document management, namely XML, which is the ideal packaging mechanism for all these so-called “documents”.
More on this later, but for now let’s consider what content management is for. Over the past five years, many millions of Web sites have been evolving from simple, text-and-graphics-based informational sites (“brochureware”) to complex resources linking many forms of information and enabling a far richer “end-user experience” (if you will). Against this backdrop, the “content” - that is, the text, graphics, audio, video and other data - needs to be managed. It needs to be created, verified, delivered, maintained and bumped off when it has reached its sell-by date.
Content Management is often linked to portals - which, from an application perspective, are no more than content farms. Content Management also ties into other application types. For a start, as it is Web-based it needs to work with application servers, e-commerce engines and the other paraphernalia of the Web. Linkage between content management and CRM is inevitable, in a drive to make that “experience” unique for each and every user - and, of course, to log every key-click they might make.
The Business Need for Content Management
The evolution towards richness of content has increased costs, due to the complexity of the information. It is far easier to manage a few pages of text than a multi-layered, multimedia “experience”. Even the simplest of sites have a tendency towards complexity over time. Content management enables content to be stored, managed and maintained appropriately. It also permits the process of content development to be controlled. The litmus test for a content management application is simple - can you use it to recreate your web site as it was on an exact day six months ago? If you want to know why you would need such a facility, just wait six months.
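To make the litmus test concrete, here is a minimal sketch - invented for illustration, and nothing like the internals of any particular product - of why "recreate the site as of a given day" implies keeping every dated version of every piece of content, rather than just the latest copy.

```python
# Minimal sketch of date-stamped content versioning: every change is kept, so a
# page can be reconstructed as it stood on any given day. Purely illustrative -
# real content management systems add workflow, metadata, approval and more.
from datetime import date

class VersionedContent:
    def __init__(self):
        self._history = {}  # page path -> list of (publication date, content)

    def publish(self, path: str, content: str, when: date) -> None:
        self._history.setdefault(path, []).append((when, content))

    def as_of(self, path: str, when: date):
        """Return the content of a page as it stood on the given date."""
        latest = None
        for stamp, content in sorted(self._history.get(path, [])):
            if stamp <= when:
                latest = content
        return latest

site = VersionedContent()
site.publish("/index.html", "Welcome, version 1", date(2001, 11, 1))
site.publish("/index.html", "Welcome, version 2", date(2002, 3, 1))
print(site.as_of("/index.html", date(2001, 12, 25)))  # -> Welcome, version 1
```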
Content management can be thought of as a springboard. It is not entirely necessary to manage content in a structured fashion, or to use tools to automate it. However, the successful use of content management facilities will enable organisations to do more with less, to manage more information and deliver it more reliably than otherwise. Enterprise-scale content management applications can be expensive, hence commitment is required from the top not only to cover the costs of the products, but also to implement the necessary processes to enable their benefits to be realised.
Deploying Content Management Applications
The deployment of a content management application should be considered as an integrated part of a company’s strategy for using the Web. Content management is as much about process as product. Not only is there the content development and delivery process to think about, but also the other processes (aka workflows) of the organisation will be impacted, in particular the customer-facing processes such as marketing, sales and support.
Issues with Content Management
Content management is a relatively mature, and hence stable, market and suffers less from teething problems than a number of other application areas. Issues with content management tend to come more from the way it is implemented - implementing the wrong process, or failing to integrate with external systems and workflows, can cause more problems than it solves.
One issue concerns the distribution of content. Content distribution networks relieve the pressure on web sites by offering alternative locations for the content. This can solve difficulties of accessing content from specific geographies, not to mention easing the load on a company’s “core” web site.
The Future of Content Management
Content management may have the basic model correct, but it must adapt to fit with new technologies and new business models as they come on stream. It will undoubtedly be impacted by the arrival of broadband technologies, as these enable new forms of content (such as streamed audio and video) to be delivered. Web services and hosted applications will also impact on content management, not so much in its principles but in the way it is implemented.
05-25 – All About: eXtensible Markup Language, XML
All About: eXtensible Markup Language, XML
XML has its roots in formatting languages used for document layout, typesetting and production. It is said that these languages were too complex, and a simpler version was developed. XML’s strength lies in its simplicity: it provides a basis to define all forms of digital data, from simple text and values to voice transmissions, multimedia and 3-dimensional graphics. XML provides a basis for systems to interact and to exchange data of any form. If the Internet is the global network, so XML is fast becoming the global language for information interchange.
Here we describe XML from both a technical and a managerial standpoint. We look at the ways a business can benefit from XML, and how XML-based facilities can be deployed. We consider the downsides of XML and look at how it is likely to evolve, both as a language and as a movement.
What is XML?
The eXtensible Markup Language (XML) is a language for representing data. Quite simple, really. It looks a little like HTML, in that it uses tags (enclosed in angle brackets) to structure and format the data. It is simpler than HTML in that it does not define specific tags, but it is more complex in that it requires the tags to be defined. The simplicity of XML is its strength. It can be used for everything from legal documents to comic strips (articles have quite rightly coined XML “the Tupperware of the Internet”).
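As a minimal illustration, here is a made-up XML fragment and the few lines of Python needed to read it. The invoice, customer and amount tags are invented for the example, which is precisely the point: the parties exchanging the data decide what the tags are and what they mean.

```python
# A small, made-up XML fragment: the tags (invoice, customer, amount) are not
# defined by XML itself - they are whatever the communicating parties agree on.
# Parsed with Python's standard library purely for illustration.
import xml.etree.ElementTree as ET

DOCUMENT = """
<invoice number="1234">
  <customer>Acme Trading Ltd</customer>
  <amount currency="GBP">99.50</amount>
</invoice>
"""

invoice = ET.fromstring(DOCUMENT)
print("Invoice:", invoice.get("number"))
print("Customer:", invoice.findtext("customer"))
print("Amount:", invoice.findtext("amount"), invoice.find("amount").get("currency"))
```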
The Business Benefits of XML
Clearly, as XML is just a language, it has many potential uses. Its strength lies in its standardisation: if more than one party agrees to use XML (and, what is more, agrees on the format of XML to be used), then the two parties can communicate. XML also overcomes many of the limitations of HTML.
Three application areas are exploiting this strength, each with its own benefits to business: these are content management, business-to-business (B2B) e-commerce and Web services.
- Content Management
- B2B E-Commerce
- Web Services
- Business re-engineering
Issues with XML
What could possibly be wrong with XML? After all, it is just a formatting language. The answer lies in Bananarama’s first law of IT - “it’s not what you do, it’s the way that you do it”: the risks with XML are not so much with the language itself, rather in how it is used. If nothing else, remember the adage that real programmers can write FORTRAN in any language.
There is an industry fear that simple XML will become SGML as new capabilities and features are added to the language to support more complex constructions and data types. However most of us will not care, as long as the message gets passed. XML is only the messenger.
05-25 – All About: Hosted Applications
All About: Hosted Applications
In the very old days, software used to be run on huge computers (mainframes) and accessed using ‘dumb’ terminals as clients - dumb because they did not process anything themselves. Then personal computers arrived and it became possible to run software on the clients. At the same time as the arrival of the Internet, it became generally agreed that the best place for large-scale applications was on servers, and clients were best dumbed-down, as graphical displays. This oversimplification leads us to the question - what if the Internet is used to provide the connection between the server and the client? If this was the case, applications could be accessed from, and situated, anywhere, potentially managed by a third party. The concept of hosted applications was born.
What are Hosted Applications?
A good place to start with hosted applications is the phrase ‘applications over the wire’. Rather than installing and running applications on your own server, you can outsource them to a third party and access them over the Internet. The third party gives you application access in the same way that an Internet Service Provider gives you Internet access. (Note - they don’t have to be the same party.) As usual, from this simple definition things can get quite complicated - there are as many types of hosted application as there are ways of hosting and delivering an application over the Internet.
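In practice, ‘access over the Internet’ usually just means HTTP. As a minimal sketch - the provider, URL and parameters below are entirely hypothetical - a client calling a hosted application might look something like this:

```python
# Minimal sketch of calling a hosted application over HTTP. The provider URL,
# parameters and response format are hypothetical - each hosted application
# provider defines its own interface (and, crucially, its own authentication).
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://apps.example-provider.com/payroll/run"  # hypothetical

query = urlencode({"company": "acme", "period": "2002-05"})
with urlopen(f"{BASE_URL}?{query}", timeout=30) as response:
    print(response.status, response.read().decode("utf-8"))
```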
Business Benefits of the Hosted Model
The benefits of hosted applications are a combination of the benefits of outsourcing, with the reduced costs of using the Internet as infrastructure. For the former, consider the following:
- time to market - another company can provide an application faster than you can build it
- reduced overheads - it is cheaper for one organisation to manage an application for multiple companies than for each to manage their own
- resource management - maybe you haven’t got the facilities to deploy or manage the application anyway
Deploying the Hosted Model in the Corporate Environment
Some basics on deploying hosted applications:
- Be proactive - don’t think that the handover of control to a hosted application provider is a simple process.
- Remember security - like other Internet-based applications, security is a primary concern.
- Service levels are essential - be sure you have (legally) satisfactory guarantees about the service you will receive.
Issues with Hosted Applications
Key to the success of a hosted application is its ability to deliver managed service levels, and key to a company’s use of a hosted application is the specification (and guarantee) of appropriate service levels for the business.
The Future of Hosted Applications
The future of hosted applications is intertwined with that of Web services and SOA. Hosted applications are seen by some to be a temporary situation. It is generally agreed that service providers in general are going through various evolutions, and that the worlds of communications, media and the Internet are merging. It looks likely (how do we know? OK, we guessed) that service providers will form into two categories:
- general service providers, who cover all the bases
- specialist service providers, covering such areas as security, application hosting, business services and the like.
For further information and an update on the Cloud Computing view, see here.
05-25 – All About: Open Source
All About: Open Source
Open source software has been touted as a revolution in software development, striking fear into the major computer companies and glee into individuals who feel such companies have had their way for far too long. Open source involves software being developed using a community model (for community, read potentially anybody with an Internet connection), and the resulting packages are released for free - that’s “free as in free speech, not free beer”, as one open source project developer has put it.
Here we look at what open source is, and the philosophy behind the development. We consider the business benefits of open source from a development and a deployment standpoint. We examine where to start for open source development, and how the most popular packages - Linux and Apache - can be deployed. We look at the downsides of open source, and how the packages and the philosophy might evolve in the future.
What is Open Source
Open source is plagued by two conflicting messages from the IT industry: evangelical zeal on one side, FUD (fear, uncertainty and doubt) on the other. Certain open source packages have stolen much of the limelight, in particular:
- Linux, the Unix-like operating system
- Apache, the Web server package
There are other packages that you may be using without realising it, such as sendmail for mail forwarding. But what is open source? Open source is free software, free to be taken, used and modified. Cynics argue that open source software is anything but free - on this, more later - but it remains true that it is available at no charge, for example freely downloadable from the Internet. You can buy open source - most computer shops stock shrink-wrapped distributions of Linux. However, you pay for packaging and the additional facilities that are bundled. Even this cost is borne only once, compared to a per-licence cost for commercial packages. Open source is kept free by licensing arrangements such as the GNU General Public License - not an interesting read; in summary, it protects against someone taking the source and using it commercially in a way that hinders its open status.
In addition, it is possible for anybody to get their hands on the source, that is the programming code for the software. Nothing can be hidden from prying eyes, with the result that its developers are more likely to pay attention to its quality. This is contentious, but would appear to be borne out by open source packages in common use.
Business Benefits of Open Source
Open Source gives us software. This is not a glib remark. Open source software packages should be seen as part of the software catalogue available to companies large and small. Hence the business benefits should be taken as being the same as for any other package - that is, it depends on the package. As well as the absence of purchase cost, there are additional benefits, which also depend on the package concerned:
- Support - flagship packages Linux and Apache are well supported, in that companies like IBM will sell support contracts for them. Other packages often have a good following out there, with a community of engineers whose purpose it is to improve the quality of the package in question. Remember, it goes with the philosophy.
- Stability - as there is no vast corporation out there whose revenues depend on you buying upgrades, you can have more certainty that the package you run now will still be available in two years’ time.
- Initial costs may be lower for open source. Remember, however, that the purchase cost is only one element of the lifetime costs of any software package. You should factor in costs of deployment, integration and ongoing management for any software implementation, open or otherwise (a rough illustration follows below).
- Perceptions are important, because some software suppliers hotly contest the benefits of open source. Remember, open source packages can be as risky, or otherwise, as many other packages in the catalogue.
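As promised above, a crude illustration of the lifetime-cost point; every figure below is invented purely for the arithmetic, so substitute your own estimates.

```python
# Crude lifetime-cost comparison. All figures are invented to illustrate that
# the licence price is only one line in the total cost of ownership.
def lifetime_cost(licences: float, deployment: float, integration: float,
                  annual_support: float, years: int) -> float:
    return licences + deployment + integration + annual_support * years

open_source = lifetime_cost(licences=0, deployment=20_000, integration=15_000,
                            annual_support=12_000, years=3)
commercial = lifetime_cost(licences=30_000, deployment=15_000, integration=15_000,
                           annual_support=8_000, years=3)
print(f"Open source over 3 years: {open_source:,.0f}")
print(f"Commercial over 3 years:  {commercial:,.0f}")
```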
Developing Open Source
Fundamentally, open source is a collaborative model for software development. It involves companies, academic institutions and individuals working together on software projects. Indeed, development of a specific software package is open to just about anyone that wants to take part. This might sound like anarchy, but in fact it is proving to be a surprisingly successful model - surprising, at least, to the cynics who feel that the only way to develop software is through structured processes and corporate hierarchies of staff. (Note that some open source projects do follow structured development processes, and why not.)
Deploying Open Source in the Corporate Environment
Use of an open source package does not have to be a strategic decision; however, it may be necessary to convince members of the management that the risks are no different from those of any other deployment.
Deploying open source need not be so different to rolling out other software products. Here we focus on two specific open source products - Linux and Apache. Despite being ported to many hardware platforms large and small, Linux is attracting most attention as a server OS. There is a short article at PlanetIT entitled What Does It Really Cost To Adopt Linux?, which provides a good overview of the issues. Linux is progressing up the processor chain; for example, it is now supported as a mainframe OS by IBM. Linux has also been ported to the desktop as well as to embedded devices. And, of course, Apache runs on Linux. Finally, it is important to many companies to have a support contract in place. IBM offers services for Linux and Apache, as does LinuxCare.
Issues with Open Source
Dare we mention any issues, and face the wrath of the open source community? Heck, yes. Let’s put it this way - if open source is a bazaar, would you trust every stallholder? Uh-huh. There are flakey open source packages out there, packages with minimal or no support, packages that end up costing more than they benefit.
The best way to counter this is to do as you would with the stallholders. Talk to people in the open source community, post messages to the message boards, ask the questions. This will serve a number of purposes:
- It will get you in the habit.
- You will find that answers are forthcoming (and if not, that can be proof that the open source model is flawed).
- The answers will tell you what you need to know.
The Future of Open Source
According to some pundits, open source is a revolution, a world-shaking example of how online communities will one day rule the roost. Let’s not beat about the bush - the hackers have Microsoft in their sights. Hmmm, we shall see. In the meantime the products themselves will keep rolling off the production lines, taking their lead from what people are asking for (such as gnutella) rather than what the vendors would like. If you want our opinion, that has to be a good thing.
05-25 – All About: Tech Resources
All About: Tech Resources
These posts are taken from the (now-defunct) all-about-it.org Web site I set up in 2000, which aimed to provide a resource about new developments in IT. I’ve uploaded the posts as is - they should be considered as a historical record!
* Wireless Networking * Broadband Communications * Web Services * eXtensible Markup Language - XML * Open Source * Hosted Applications * Content Management
05-25 – All About: Web Services
All About: Web Services
Distributed applications are nothing new - parts of software applications, each running in the most appropriate place and on the most appropriate hardware. In general, distributed applications separate application functionality (say, the database engine, the security functionality, the transaction engine and the number crunching) onto different servers. Now, imagine if these application elements employed the Internet as their communications mechanism - they could be situated anywhere on the globe. Imagine still further if different third parties managed such elements - we could access them as services over the Web. There we have it: Web services.
Here we consider the essential elements of Web services, particularly the platform technologies UDDI, SOAP and XML, and we look at the efforts of software vendors such as Sun and Microsoft to provide suitable frameworks for Web services. We consider the potential benefits of Web services and examine whether such applications are deployable today. We look at the issues facing Web services, particularly the fact that much is still vendor hype, and we look at how Web services will evolve in the future.
What are Web Services?
Defining Web services is about as tricky as defining terms like application and infrastructure - we all use them, nobody knows exactly what they mean, but somehow we muddle along. Consider the following. Point one: a software application can be thought of as a set of software elements, each dealing with one part of the functionality. Point two: these software elements can communicate using defined protocols and mechanisms. Point three: the elements do not need to reside on the same machine. Point four: in fact, they can be situated anywhere in the world and they can be managed by any third party.
Whoa! Hang on there. Things were going OK until the last point, right? But why not separate components across the Internet - it is a good enough basis for human communications, so why not use it for programs (or even parts of programs) to communicate? The more this is thought about, the more two points become clear: there needs to be an Internet-friendly set of mechanisms to enable the communications to take place, and this is an appropriate basis for some, but not all, applications.
On the basis of these points (which we shall come back to), there exists the potential to construct applications from piece parts that are accessed over the Internet from different providers. These piece parts are called Web services. Clearly, mechanisms to support Web services need to have been accepted as a standard by the major players in the industry - in this case, Sun, Microsoft and IBM. Successfully agreed, accepted and adopted are:
- The eXtensible Markup Language (XML), a text-based formatting language appropriate for defining and transmitting data between application elements across the Internet. XML has been accepted and adopted by all the major industry players - well, all the ones that matter.
- The Simple Object Access Protocol (SOAP), a standard for sending messages between objects - just think that any application element can be considered as an object and you won’t go far wrong.
- Universal Description, Discovery and Integration (UDDI), which provides a globally accessible registry of service providers and the services they provide.
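To make the SOAP and XML points concrete, here is a minimal sketch of what ‘sending a message between objects’ looks like on the wire. The endpoint, namespace and GetQuote operation are invented for illustration; a real service would describe its operations in a WSDL document.

```python
# Minimal sketch of a SOAP 1.1 call: an XML envelope POSTed over HTTP.
# The endpoint, namespace and GetQuote operation are invented for illustration;
# a real Web service publishes its operations in a WSDL description.
from urllib.request import Request, urlopen

ENDPOINT = "https://services.example.com/stockquote"  # hypothetical

ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="urn:example:quotes">
      <symbol>IBM</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

request = Request(ENDPOINT, data=ENVELOPE.encode("utf-8"), headers={
    "Content-Type": "text/xml; charset=utf-8",
    "SOAPAction": "urn:example:quotes#GetQuote",
})
with urlopen(request, timeout=30) as response:
    print(response.read().decode("utf-8"))
```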
It is with application support frameworks that things get tricky. A number of major IT companies are positioning themselves as framework providers - indeed, this is where they see that there is money to be made. In particular, Sun, Microsoft, HP and IBM have frameworks of their own - namely Sun ONE, Microsoft .Net, HP NetAction and the IBM framework for e-business. The final piece in the Web services puzzle is the provision of a communications mechanism which enables application elements to communicate via the Internet. Here we have open versus proprietary, with the UN- (and Sun-) sponsored ebXML (XML for electronic business) in competition with Microsoft’s BizTalk. The other mechanism worthy of mention is RosettaNet.
The Business Benefits of Web Services
In the past, applications (distributed and otherwise) have been used by single organizations and accessed over the corporate network. The arrival of the Internet has enabled companies’ systems and their applications to talk to each other directly, for example in the form of a supply chain or a marketplace. Business benefits are summarised by the various vendors of Web services - Sun has a good overview of the current and future benefits here (it’s marketing, but so, for the moment, are Web services). We think that the benefits are a combination of the benefits of ASPs (of which Web services are a logical evolution) and good development practice. Maybe, with Web services, the Shangri-la of software reuse will finally be achieved!
Deploying Web Services
For developing applications, there is quite a lot of material out there on vendors’ specific Web services offerings (particularly Microsoft’s); however, there is notably less cross-vendor information. As with other areas of application development, patterns provide a means to familiarise yourself with how to solve specific problems.
Issues with Web Services
Web services are still new and hence the issues are still to be fleshed out. For the moment they can be summarised as:
- Complexity - building Web services applications is not easier than building traditional applications, for two reasons. First, it relies on an understanding of how to build component-based applications. This is the masterclass of object-oriented development - while many can write in Java or VB, few understand the key principles of good design for component-based applications. Secondly, building an application on Web services could be likened to building a house on shifting sands. There will be many out there that claim to be delivering Web services, and few that eventually turn out to be reliable.
- Security - it’s that old applications-on-the-Internet thing. The Internet is an insecure environment and, at the moment, there is a lack of will to implement the necessary mechanisms (such as PKIs) to build a layer of security. One security issue is with publishing service interfaces, in that they are by nature exposing doorways into the systems that provide the services concerned.
The Future of Web Services
Er, hang on - Web services are the future, right? Certainly as far as computer companies are concerned. There are many articles in the press describing the gambles that the vendors are making to be part of this brave, new Web services world. Are they right? Well, we think so. It fits with the trends towards outsourcing, commoditisation and component-based development, to name a few.
05-25 – All About: Wireless Networking
All About: Wireless Networking
This is where radio-based communications enable computers and other devices to connect without the need for wires. The technologies themselves are based on wireless transmitters and receivers of digital data, and can be used to replace wires in corporate and home-based environments.
Here we talk about what wireless networking technologies exist - the current leading technologies are Wireless Ethernet and Bluetooth. We look at the benefits of wireless networking, such as flexibility and manageability, and we cover how a wireless infrastructure extends rather than replaces a wired LAN. Looking into the future, we consider HiperLAN 2 and the predicted convergence of wireless networking with the Mobile Internet.
What is Wireless Networking?
In the simplest terms, a traditional, wired computer network (or Local Area Network, LAN) can be seen as a number of boxes joined together by wires. Often the positions of those boxes are dictated by the constraints imposed by the wires. When most people talk about wireless networks in the corporate environment, they usually mean wireless Ethernet. This does what it says on the tin - it takes the ubiquitous LAN protocol, Ethernet, and extends it to wire-free environments. Wireless Ethernet supports data speeds of 2 megabits per second (Mbps) and 11 Mbps - to put this into perspective, traditional Ethernet could handle 10Mbps and many networks being deployed today support 100Mbps.
The second high-profile wireless protocol is Bluetooth, designed to replace the wires between devices and peripherals (say, a computer and a printer, or a laptop and a mobile phone). Bluetooth is often compared to the InfraRed standard IrDA, which it is probably going to replace. Unless you have very specific requirements, these two protocols should cater for your needs hence we shall concentrate on them here.
The Business Benefits of Wireless Networks
The key benefit of wireless networking is flexibility: in the box-wire scenario above, if you take away the wires, you can be more flexible about how you locate the boxes. Hence:
* Hot-desking environments (in which desk space is allocated on a first-come-first-served basis) can be built, configured and modified cheaply and simply.
* Office facilities can be allocated and moved as user needs dictate, for example to cater for department reorganisations.
* Users can access applications from wherever they are on the site, and even on the move - opening up possibilities in environments such as warehouses.
It is difficult to find a report that concentrates on Bluetooth from a corporate perspective. This may be for a couple of reasons. The first is that no one is really sure what the tangible benefits will be, so it is the intangibles that are focused on. The second is that the business benefits are irrelevant to the manufacturers, which are equipping devices with Bluetooth whether the end-users want it or not. Time will tell.
Deploying Wireless Networks in the Corporate Environment
In most circumstances, wireless extensions will be added to an existing, wired LAN.
Issues with Wireless Networks
The main issues with current wireless networking technologies are as follows:
* Wireless Ethernet, which is touted to deliver either 2Mbps or 11Mbps over a 30-metre indoor range, in reality delivers less than the maximum, depending on the number of simultaneous users, the distance from the transmitter and any obstacles that may be in the way.
* Wireless Ethernet devices have interoperability problems - that is, devices from different manufacturers do not always work properly together - hence the need for the Wireless Ethernet Compatibility Alliance.
* One of the prime protocols, Bluetooth, interferes with wireless Ethernet and with itself.
* There are also question marks over the security of wireless networks, with both eavesdropping and hacking being major concerns.
The Future of Wireless Networking
HiperLAN 2 is a standard defined for next-generation wireless LAN environments, and is capable of supporting theoretical data speeds of 54Mbps. Much industry support is rallying behind HiperLAN 2. Note that HiperLAN 2 also promises interoperability with the 3G protocol UMTS, which we discuss in our Mobile Internet section. It may be that, in the future, this differentiation disappears as the technologies converge with each other (and, potentially, with wired technologies). For the moment they remain poles apart.
As for Bluetooth, as chipsets for this protocol are currently being incorporated into many devices, it looks set to carve itself a niche in the future.
05-25 – IT Analysis In A Cold Climate
IT Analysis In A Cold Climate
This article was originally written for Brand Illumination in 2002.
Even in the best of times, IT analyst relations can be something of a black art. Analysts can prove inaccessible, uncompromising and fickle; attendance at events is never fully guaranteed; furthermore, it can be difficult to see the tangible results of briefings or other analyst relations exercises.
In these less than favorable market conditions, it becomes even more important to target the right analysts with the right information. To ensure that today’s limited resources are used wisely, there are a number of considerations that should be taken into account.
Target and Nurture Analyst Relations
All analysts are different. Fact. For a start, an analyst firm will tend to focus on specific areas of competence, either horizontal (by technology type) or vertical (by skill set). Even the largest firms differentiate themselves, for example on depth and quality of research compared to hands-on implementation experience. Within analyst companies, too, the types of analyst will vary. Essentially there are two camps: market analysts, who consider the position and perception of your company and its products, and product analysts, who are more interested in ensuring you can deliver on your promises against real-world needs.
Resources have never been infinite, either in time or monetary terms. In the current climate, it becomes even more important to ensure that you are targeting both the right companies and the right analysts within them. To ensure you have the correct mix, consider profiling analysts based not only on experience of a technology or market segment, but also on such criteria as perceived influence and skill set.
Make Briefings Worth It
At the best of times, briefings can be dull for analysts as much as for vendors. This is often down to the fact that briefing sessions are inappropriate, badly planned or conducted. Just as it is worth targeting the correct analysts, so a correctly planned and structured briefing can provide the best value to all involved. Consider the following:
Only use source information that is relevant to the briefing at hand. Don’t waste anyone’s time (the analysts’ or your own) in briefings by slogging through irrelevant presentations that have been picked off the shelf.
Customize your message before the briefing. Understand why an analyst wants to attend a given briefing, and make clear why the briefing is taking place. “Because the VP of product marketing is here” is less relevant than “because we have restructured our product line”.
Ask the analyst what he wants to get out of the briefing at the beginning, or preferably before it.
Analysts Have Bills Too
Recognize that IT analyst firms are currently facing the same miserable market conditions as the rest of the industry. Of course, it is important that you achieve your agenda rather than theirs. However, if you are able to provide an analyst and his or her company with timely, tangible value, the analyst is more likely to have something to say about your company and your products.
In short, take the time to understand research programs and assignments of the analyst firms you have in your sights, and use this information to make the correct people available to support the analysts that you meet. For example, if a firm is conducting a market analysis study, there is little point in providing a technical expert at a briefing; similarly, a brand manager will be of only limited use if a product comparison is taking place. Either of these situations may lead to the worst-case scenario of not being mentioned at all.
Choose Bandwagons Wisely
In the glory days before the IT bubble burst, things seemed so simple. In the past, and based on their researches, the largest analyst houses have been able to give credible views (to an extent) on what is hot and what is not. Today, there are still clearly several hot topics - for example, storage virtualization, Web services or mobile technologies. What is less clear is when, or even whether, the market will bite on these juicy worms.
Currently, even the majors are as much in the dark as everyone else about which way the market will go: not just up or down, but into which specific area. The result is a pervading lack of confidence - today’s predictions are couched in caveats that can only serve to undermine their credibility. It may well be that the current hot topics will already be cooling off by the time the industry starts to pick up momentum, so be sure that your marketing dollars are carefully allocated.
In conclusion, the entire IT industry is currently riding out a storm and we all have little choice but to sit tight. However, most companies do not have the luxury of doing nothing until the industry picks up. When forced to make limited marketing dollars stretch as far as possible, it becomes more important than ever to ensure that the right message is getting across, and that the correct people are listening.
05-25 – Porcupine Tree Boston 22 June 2002
Porcupine Tree Boston 22 June 2002
“I know something you don’t know, doo-dah, doo-dah…” I admit, I was in a bit of a silly mood as I walked up the aisle of the plane flying from Boston to Heathrow. The night before had been a treat, a feast of sight and sound, and there was unlikely to be a single person on the flight who would have a clue what I was listening to, or what they were missing. Should I try to explain, no doubt they would smile kindly back in the knowledge they were sitting next to one of those people who takes some things just a little too seriously… After all, it was only a band, surely?
Perhaps they would be right. Perhaps… No, scratch that thought. The Porcupine Tree gig in Boston the night before was worthy of the highest accolades, indeed it deserved to be shouted from the rooftops and church spires across the land. The final, lingering, memorable moment, of five musicians and artists ranged across the stage, their eyes alight with joy and embarrassment as they absorbed the collective delight of the audience during the second standing ovation of the evening, was testament to the whole event. Such an unlikely success - the Berklee Center was assigned seating only, balconied more like a cinema than a music venue, hardly conducive to building a rapport with the audience. There was no bar, no loosening of minds, bodies or spirits - not that dancing was an option. The joint billing with Opeth, who had had first bite of the cherry, and the ludicrously early curfew of 11 pm meant that the Tree’s set was restricted to an hour and a quarter, leaving little room to manoeuvre. On the upside the sound was good, the stacks of speakers designed to fit with the acoustics of the theatre, a location which prided itself on its musical heritage.
And somehow, everything just came together. By the time eleven o’clock hit and the band attempted to leave the stage for the first time, it had already been a perfect night. Perhaps the desire to stand came as much from being forced to sit through more than an hour of music that was designed for anything but sedentary listening, but nobody would deny the rapturous applause was deserved. Something must have happened offstage, for though the clock on the wall facing the band read a minute past witching (or at least legal licensing) time, Steven Wilson led his players back onto the stage for an unexpected encore. “We’ve just got time for this,” said Steve, his acoustic at the ready, leading into a version of Trains as a sea of death metal and psychedelic rock fans meekly sat back down again. Not for the first time that evening, I considered how Porcupine Tree was a guitar band, with Richard Barbieri’s aural canvas, and Gavin Harrison’s intricate rhythms serving as a complex click-track for the layers of interplay from the three guitarists standing before them. And the vocal moments, though essential, sometimes appeared as little more than a respite, a sorbet between the courses of guitar. Throughout the gig it had been the same, admittedly because it had concentrated on the heavier, more guitar-filled numbers such as, well, most of In Absentia.
There were a few older songs, including Last Chance to Evacuate Planet Earth and one I had never heard before, but even the so-called slower numbers had their fair share of “rock posturing”, as Mr. Wilson would put it. And songs such as Strip The Soul were a take-no-prisoners, full-on barrage of axe-wielding from Steve and John Wesley, the on-tour guitarist who fits in with the other members of the band on stage like he has been playing with them for years (and I dearly hope he will continue to do so) - Steve and Wes were playing off each other so in sync that there appeared to be some kind of telepathic link between the two. It is no criticism of Wes that the best guitar moments came from Steve, as his hands whirled across the frets like dervishes. Being the front man, band founder, virtuoso and musical visionary has its perks, after all.
The sound was impeccable, at least to my untrained ears (Ian on the sound desk was as self-critical as ever). That’s not to say it was like listening to a stereo that’s not turned up too loud. I felt the bass ripping at the legs of my trousers as I sat, the drums and guitars waged a war of attrition against my senses with only the occasional solo Barbierism or break of acoustic guitar to soothe my perforated eardrums and stress-fractured brain cells. That’s not quite true of course. There were the gaps between the songs, filled with amusing banter from Steven, who seemed as bemused as the audience by the seating arrangements - or indeed requirements. “You will have to seek alternative ways of expressing yourselves,” he said once, using the same line to good effect later to silence a heckler.
From the very start of Porcupine Tree’s set, opening as they did with the currently obligatory Blackest Eyes, they set themselves apart from the band that had preceded them. This is not to fault Opeth, who played a thoroughly convincing set of guitar rock. Having said this, Opeth (“most of you will know us more as a death metal band,” said the singer) didn’t seem entirely comfortable with their new material, being a little more melodic and mellow than their usual fare. So I am told - I confess to being an Opeth newbie. It felt a little as if Marilyn Manson had been asked to play at a party for underprivileged children - good music, good songs, clear competence, but just a little unsure and stilted. In comparison, the band that followed was almost choreographed in the completeness of its sound, and was relaxed in a way that only people who are at the peak of their art can be. Heck, maybe the Opeth fans saw it the other way around.
As we left the venue, opinions were divided on whether the enforced seating had helped or hindered the enjoyment of the performance, but one thing was for sure - that it had been the best Porcupine Tree gig that any of the crowd I was with had seen, if not one of the best gigs ever. As it was only my third, I couldn’t comment too deeply. Walking back towards the hotel, I remember wondering that if Porcupine Tree are this good now, how much further can they go? I know something you don’t know, perhaps, but as they continue to tour and build a US following, I hope this is a situation that doesn’t last for much longer. Shout it from the rooftops indeed.
05-26 – I shall say this only once
I shall say this only once
Well. Having crawled my directories for articles, imported old blog lines, linked the links, chosen the template, uploaded the PDFs, tested the pages, and otherwise got things up and running, I find that I have a web site. All feedback welcome.
05-26 – Making IT Matter
Making IT Matter
I keep having the same conversations. At trade shows, on-site with technology customers and integrators, in workshops and at analyst briefings, the conversations would conclude that:
- Traditional use of Information Technology (IT) has fundamental flaws.
- In the future, IT is moving towards a service-based model.
- Businesses need to take increasing control of IT.
- The problem is more political than technological.
- Nobody really knows or understands how to move things forward.
This has made for some very interesting, informative and interactive conversations, which have often worked best with a beer in hand. I understand that the same conversations are happening elsewhere, and wherever I listen, I hear similar stories. However, it remains all talk.
Meanwhile, a couple of years ago the IT market crashed. Oops. It may be coming out of the downturn now, but there is every danger that the new IT industry that emerges is the same as the old, a market which has failed to learn from its own mistakes. If there is one good thing that should come out of the crash, it is the opportunity to spend time to understand not what went wrong, but how things could be done better the next time around.
Unfortunately, in the IT industry the precedents do not offer much hope for the future. Never in history have people and organisations failed so obviously and so often to learn from the lessons of the immediate past, preferring to push forward to the next potential solution rather than dwell on current failures. This approach remains the norm in many IT vendor organisations, whose business models are still rooted in the past. The concept of a silver bullet was first exposed as a fraud back in the seventies, and despite the increasingly cynical veneer that many like to portray, as an industry we are still hoping for a cure-all. From the vendor perspective, such a salve is referred to as a killer app - something everyone wants to buy. In other words, success will be judged by sales, and not by problems solved.
Hope and help are at hand; indeed, many companies are already taking the initiative. There is a great deal of will at the highest levels in the largest vendor companies, and there are increasingly strong demands from end-user organisations. Will is one thing, but a way is quite another: there is a lack of understanding of how to get there, and there remain huge hurdles to be overcome. There are customers that get it; however, there are plenty of customers, and plenty of people inside the IT organisations, that do not. This report is designed to address this issue and provide a blueprint that organisations can work to.
This blueprint boils down to the delivery of IT as a service to the business. This is not a technological goal, more a pragmatic one; it is also not an easy goal. However, as we shall see in this report, it is the goal: it is what IT is working towards, and it is what business is demanding. By focusing on this goal and working back from the answer, this report gives you the tools and approach to help you get there.
This report contains the following sections:
IT is Dead. This introductory section describes how we have got things wrong to date, particularly given that we are trying to achieve whole-order-of-magnitude changes, for example the agile business. This chapter is kept short - we don’t want to rub our own noses in it, do we?
Long Live IT as a Service. We can’t do without it, but what do we actually want? Let’s work this out; then we can work back from the answer, which is a service-based IT architecture with all the trimmings.
The Barriers to IT as a Service. If we’re all agreed on what we want, there must be reasons why we haven’t been able to get there in the past. This is where things get interesting by understanding the constraints and pitfalls, we stand a chance of overcoming them.
Preparing for IT as a Service. To overcome the barriers, there are some measures we must implement before we can start considering how to manage or deploy service-based IT. Not least the business has to understand itself, and the IT department needs to fundamentally restructure.
Managing IT as a Service. The IT manager is given a kingly role in this book, as the representative of the business and the customer of technology. This chapter explains why, by considering the roles and responsibilities and the supporting tools and frameworks. It then asks the question: how well are you doing?
Delivering IT as a Service. IT can be delivered as a service on a project by project basis. If everything else is in place, each project can move through relatively traditional steps while assuring that the basic service criteria are included in the mix.
Selling IT as a Service. IT vendors need all the help they can get, not least to match up to the new breed of service-savvy customers. No-nonsense guidelines explain the service solution value chain and how vendors can make their own evolution towards commoditisation.
Hope for the Future. This concludes by summarising the requirements of any service-based organisation and ranking them into maturity levels. There is no right answer, other than progress.
Annex Scorecards. These are provided to help the business, the IT department, the integrator and the IT vendor gauge where they are along the evolutionary path. The scorecards also serve to rank IT suppliers, so that IT customers can make more informed choices about who they should be depending on.
Note that this report is focused on making things work better today. It is not a collection of nice-to-have theories or unmappable best practices of one organisation. Throughout, this report incorporates a wide variety of case studies and examples, covering both where things have worked and where they have not, so we can draw conclusions from the successes and failures of the past.
It is worth mentioning what this report does not cover. It does not delve into the potential new business models advocated by current best-practice business thinking, such as the agile enterprise so loved by the Harvard Business Review. Rather, it presents an approach to delivering a concrete framework of technology that can work for any business, agile or otherwise. A castle built on sand cannot stand: this report aims to resolve the issue of the sand, rather than focusing too strongly on the castle.
Note that this report refers to technology and IT to describe a combination of things which some may argue do not fall under the same roof. For example, telecommunications and IT are different routes up the same mountain, solving the same problem, as illustrated by metropolitan Ethernet and VoIP.
05-26 – Thought for the day
Thought for the day
Never put down to malice what can be ascribed to stupidity.
I think it was Mark Twain who was first quoted as saying words to this effect, and if he didn’t, he should have done. I heard an alternative way of expressing the same on the radio this morning: a commentator said something like, “some people ascribe to conspiracy theories, I ascribe more to cockup theories”. These two principles relate to a third, expressed to me by Phil Tee (who recently founded Njini, having set up Micromuse and Riversoft, but I digress): “the world is rarely as complicated as you think it is.”
For some reason it is a human trait to expect the worst of others, and to react with suspicion or look for hidden agendas. Most agendas aren’t that well hidden, truth be told, so we could all save a lot of time by getting on with solving our own problems rather than inventing new ones. If we apply the law of Tee as well, we might all achieve our objectives a little faster.
05-26 – Wireless in Toronto
Wireless in Toronto
I was in Toronto back in November. A phenomenon that I have never fully been able to understand is the prolific nature of wireless networking in Toronto, coupled with the relative absence of security. Wherever I went in the city, there would be, it seemed, some kind soul who had left their home wireless network open, enabling me to access the wider Internet. I don’t feel too guilty about this - after all, it was Microsoft wireless networking that would stumble across the open network - but once I realised that this was happening, I found it both useful and intriguing. Looking more deeply, one thing I noticed was that many of these connections blocked outbound SMTP access. This was a consistent finding. Even from the 22nd-floor vantage point of my hotel room (the Days Inn on Carlton Street), there seemed to be no shortage of helpful souls in the apartment block opposite who were only too willing to spare a little bandwidth for my humble needs. It seems that there is some kind of cooperative wireless understanding developing among the wireless illuminati of Toronto. If this is the case, then I am all for it.
Toronto was cold, gloomy, and rainy. My usual determination not to use public transport was beaten down by the force of the rain, so I waved for a taxi. To my heaven-sent surprise, the taxi driver was a local celebrity, Mr Geography. He opened with the question, “Answer me this, and I will give you your taxi ride for free - what is the country with the smallest land area in Africa?” I answered Lesotho but he said, “No, it is the Gambia.” He proceeded to tell me exactly how different the land mass of Lesotho was compared to the land mass of the Gambia. He gave me another try, and then another, before introducing himself. It turns out he’s been on TV, and he even featured in the most recent edition of Maclean’s magazine, ostensibly the Time magazine of Canada. What a way to brighten up an otherwise dismal day.
05-27 – 770’s, OQO’s, batteries and voice
770’s, OQO’s, batteries and voice
Everyone else seems to have been writing about the Nokia 770, so I thought I would join in. It looks like a sexy device; indeed, battery life may be an issue, but that’s true for any portable computing platform with a decent colour screen. I remember two years ago I said I would never want to replace my PalmPilot with a colour PDA, as I would never remember to plug it in. Well, here I am with my Dell Axim, my Archos jukebox and of course my trusty Vaio, all of which would think five hours of battery life was a good run. Whatever happened to those mats that you could just put a device on (suitably modified, of course) and it would charge without cables? One day, every table should have one.
Of course there have been some quite significant advances in battery technology, but thus far they have eluded the mainstream. A battery that can charge in a matter of minutes was recently announced, which immediately made me think of the possibility of charging stations in coffee bars and on high streets. See you at the charging station at 3 - well, perhaps not. The other very interesting technology is to do with fuel cells, which (as far as I can tell) convert naturally available products into carbon dioxide and water, generating a trickle of electrical current as a by-product. A fascinating array of companies are aligning to manufacture and deliver the fuel cell supply chain: all the usual suspects, plus companies like Bic, the biro and razor company which also has a sizeable chunk of the global market share in disposable lighters and lighter fuel canisters.
The fear is that, to power one of these newfangled, high-resolution colour devices, we’ll all have to walk around with huge cylinders on our backs. You’d only have to add a virtual reality headset to get that frogman effect. Perhaps some smart company will bring out flipper-like shoe attachments, which generate additional electricity by capturing the energy of all that slapping.
As for those screens, I heard on the radio yesterday that the eggheads could now grow nanotubes as an array, apply a connectivity layer and a phosphorescent layer, et voilà, you have a low-power, very thin screen. Prototypes are currently running at a few inches across; clearly production is a way off, but it’s looking good.
So while battery capacity may not be improving very fast, charging times are set to improve drastically, and fuel cell technologies open up a wealth of new options (I would not be against carrying a can of methanol in my backpack, though it might cause some disputes at the airline check-in desk). Perhaps somebody could invent a fuel cell that runs on vodka. Recognising that alcohol is essentially a natural product, people might start wanting to create their own fuel. It would be awfully green; however, it would also be totally illegal in many countries. The situation might arise where, in tumbledown barns at the ends of rutted farm tracks, secret stills would be producing a village’s supply of fuel cell fuel. But we digress :-)
Meanwhile, spare a thought for the OQO. This can claim to be the first product to market as a fully fledged computer with a jacket-pocket form factor. It ships with Windows XP, but uses commodity hardware, so there would be nothing to stop you running Linux. One application of these devices is voice recognition, which I firmly believe is still to have its day in the spotlight. There are several reasons for this, not least the number of people I know who are developing symptoms of RSI, or back problems, from spending too many years typing at a computer. The technology is now here, in that the latest version of DragonDictate is perfectly adequate for transcribing one’s voice into words. I am shocked and stunned to see that neither the OQO nor the 770 has a microphone socket, preventing either from being a perfectly serviceable and totally appropriate voice recognition device. Incidentally, there is a version of IBM’s ViaVoice for Linux, but this needs a bit of work in more ways than one.
Incidentally, this entry is being voice dictated as I cruise down the M4. My computer is on the passenger seat, and I glance at it no more than I look at the speedometer or the clock. Fortunately in this sun, the VAIO has one of the best screens there is; equally fortunately, I have a 12 volt adaptor so my battery life is protected. Phew.
05-28 – Let the puns begin…
Let the puns begin…
Our good friends at The Register mention the launch of FLOSS - standing for Free/Libre/Open Source Software. FLOSS is an EU initiative to promote understanding of Open Source. Laudable perhaps, but the same cannot be said for the acronym. Quote: “FLOSS is a global phenomenon, particularly relevant in developing countries, and thus more knowledge on FLOSS outside Europe is needed.” Absolutely - but for those with less knowledge of IT it could be an ill-scoped initiative for dental care in the third world.
All the same, it’s good to see they’re cottoning on, and that they have the bit between their teeth.
05-28 – Terse...
Terse…
… is a fine word and should be used more often. As is flange.
I wonder whether, statistically, there is a tendency for people who speak in short, clipped sentences to favour communication by Blackberry and other Smartphone types.
(Sorry for the short message, this was sent by PDA :-) )
05-30 – Community Services
Community Services
I’m still quite a newcomer to this sort of thing so I’m commenting on the blog of a usual suspect - Jonathan Schwartz at Sun. I intend to grow my horizons as I get the hang of things.
“As will become more obvious by the day, you can compete against a product, but it’s close to impossible to compete against a community,” says the man here - largely in reference to Netbeans vs. Eclipse. This may be correct if “you” is an individual, but in large part most of the “you”’s will be communities themselves, variously supported by vendors and other bodies. Competition occurs between communities and within communities, so I wouldn’t be rushing to prepare for the death of the individual competitor just yet, as it might not exist.
Also, and just an aside, if Jonathan’s so into the new thinking wrt communities, blogs, “the conversation” and the like, why doesn’t he allow comments on his blogs? Looks a bit singular to me.
05-30 – Sudoku
Sudoku
OK. It’s addictive. So there. More soon.
05-31 – I feel slightly sick...
I feel slightly sick…
I know that much of life is a facade, created by cold-hearted businessmen with an eye only on shareholder value. All the same, a small part of me still wanted to believe there existed such a thing as “music of rebellion”, even if the bands delivering it up were signed to the major corporates.
So, when I stumbled across the web site of Grabow (tagline: “putting showbiz into your biz”) and found that many so-called rock heroes were available for office parties (albeit big ones) and corporate events, I was just a little bit uncomfortable with the whole thing. Nothing wrong with the model, I’m sure the Grabows lay on a great show - but with just about everybody represented I just feel slightly sad. It does conjure up a raft of images however - fancy Slipknot playing at the shower curtain ring sales summit (John Candy RIP), or Eminem at the oil seed suppliers convention? Perhaps if I started saving now, I could get AC/DC to play my retirement bash in 20 years’ time. Maybe we could share the occasion.
I shouldn’t be so surprised - a few months ago I had the dubious honour of seeing Carlos Santana play at a Cisco event. Frankly he didn’t look like he was enjoying it much - a bit like if his kids had disturbed him in the middle of his favourite TV show and asked if he would play a few songs. Might not be too far from the truth.
05-31 – Voicing a concern
Voicing a concern
What a missed opportunity. I’m so disappointed I could scream… silently of course. What am I talking about - voice recognition.
Oh, that one, they say. You’re into that, they say. True enough - but on and off, truth be told. When I wrote the portability article at The Reg a few months ago, I was struck by the revelation that voice rec was just waiting for the right form factor of computer. Now, examples of said form factor are starting to roll off the production lines, and very sexy they all look too, with one HUGE proviso - they mostly lack a microphone socket. Perhaps I could hope that connectivity such as Bluetooth builds in the idea that wireless mikes are the way to go. However, I’m assured by people in the know that the sound quality over Bluetooth is insufficient for voice rec. Wired mikes are the way forward, in the short term at least, and only a minority of the devices support this. Notably the Antelope (available now) and the Tiqit (later this year). The latter’s processor is a bit poor, but an older version of Dragon Dictate should work just fine*. As for the others - the Oqo, Vaio and Flipstart, just don’t go there.
Ahm tellin’ ya, voice rec’s the way. The time savings possible from being able to dictate any time, any place etc, together with the improved QOL (quality of life, lummox) from not being slumped in front of a glowing screen, will more than make up for the still-dodgy battery life on these devices. It’ll probably transform blogging as well - watch this space!
* Obtaining an older version is not as easy as it sounds, of course. Having purchased the latest version and found it didn’t work on my PictureBook PCG-C1X (now retired to the cupboard, incidentally), I resorted to eDonkey to download an older version. To this day I’m still not sure I did anything illegal.
This post was - doh! - typed.
June 2005
06-01 – If you haven’t heard of Google Maps yet…
If you haven’t heard of Google Maps yet…
Check the O’Reilly blog for an exposé of the potential of Google Maps when linked to Web services. We haven’t even scratched the surface.
06-01 – Video module for Skype
Video module for Skype
Dialcom has announced a video extension to Skype, which I’ve just run up and it looks pretty good - full screen mode and everything. Webcams do have the issue that unless you’re TV trained, it always looks like you’re not paying attention. All the same it smacks of a killer app to me.
06-02 – Silicon: DRM in business
Silicon: DRM in business
My latest Silicon.com article is up, asking whether businesses can benefit from DRM. In short, the answer is - yes.
06-08 – Alphadoku
Alphadoku
I’ve created a puzzle I’m calling Alphadoku - ambitiously, I’ve also registered the domain. I know it is based on the Sudoku principle; if anyone has seen anything similar I’d appreciate knowing, but I haven’t come across anything like it myself.
See here for more information.
06-08 – First Phish
First Phish
I’ve just received my first phishing email to my work account - a pain as it indicates my name’s now on some spamtankerous list. I’ve pasted it below: it gives us one more reason why people should learn to read and write properly, as anyone could then tell there is no way it would come with a corporate seal of approval. “thank you for co-operation” indeed.
_Dear HSBC Bank member ,
Technical services of the HSBC bank are carrying out a planned software upgrade. We earnestly ask you to visit the following link to start the procedure customers data confirmation.
https://www.ebank.us.hsbc.com/servlet/com.hsbc.us.ibank.signon.Logon
We present our apologies and thank you for co-operation.
Please do not answer to this email - follow the instructions given above.
This instruction has been sent to all bank customers and is obligatory to follow._
06-08 – On production
On production
Last week I was in Monaco at a Dell Product Showcase. I was particularly interested to see the multimedia applications and the games - we don’t get as much exposure to this side of things as I’d like, and not just for gadget value. Every time I talk to people in the production industries - gaming, film, music, TV and so on - I realise that these people see the creation and distribution of entertainment and information as a universe in itself, like a sphere with a number of exit points. When we talk to the content distributors, not least the service providers, it is clear that they see themselves as the universe, as a sphere with a number of entry points and a limited number of conduits towards the consumer. Not that either is right or wrong, just that the best perspective has to be to see the structure that can be created by both. Some people get how this should look, but many do not, and so they remain inside their spheres.
In the evening we somehow manage to get into the local club, a place called Jimmyz. Knowing about the early start the next day, and having been up since 5 that morning, I duck out at about midnight. The next day I discover that Bono was in the same bar, apparently surrounded by numerous girls and a smaller number of hefty men. The body mass probably just about balanced out.
06-08 – Paris - Presentations and Pavements
Paris - Presentations and Pavements
Just on the Eurostar on the way back from presenting at an IBM event on Project Portfolio Management. It would be very dangerous for anyone to pretend that portfolio management, programme management or other such topics are somehow new - rather, what we are seeing is that the tools are starting to support the concepts in a more comprehensive way than they have in the past. The hard bit, as Lawrence Webb would say, remains in the human layer.
Today I’ve mostly been coming home, making my way to the Gare du Nord via various cafés, stopping for espresso and email before moving on. I walked through the Marais and to my favourite corner of Paris, the Place des Vosges, which somehow maintains its sun-dappled serenity despite the noise of scooters in the background. The meaning of mobility, to me, is to be able to keep up with the job (white collar in my case) from a Parisian street as easily as from an office desk. We’re still a long way from that, but perhaps that’s a good thing - imagine the chaos if everybody started taking their work to the public parks.
Reality check: three hours on the Eurostar and I’m just about caught up with my email. One can have too much accessibility.
06-10 – Linked Out
Linked Out
When I put the case for Plaxo and LinkedIn on the Register a few months ago, I made the point that the tools that come with these services are worth signing up for even if you never connect. Well, they’re getting better. LinkedIn now has a downloadable toolbar for Microsoft Outlook that offers a “grab” button, so you can select a chunk of text from an email which contains name and address information, and it will automatically create an address book entry for you. It would be better if this were built in to Outlook, but it’s not, so this is a good start.
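As a rough illustration of the sort of text-scraping a “grab” feature involves (this is purely my own sketch - the signature and the patterns below are invented, and it is certainly not LinkedIn’s actual logic), a couple of regular expressions get you surprisingly far:

```python
import re

# A made-up signature block, of the kind you might select in an email
SIGNATURE = """Jane Doe
Acme Widgets Ltd
jane.doe@example.com
+44 1234 567890"""

# Deliberately naive patterns - fine for a sketch, not for production parsing
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def grab_contact(text):
    """Pull a rough contact record out of a chunk of signature text."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    email = EMAIL_RE.search(text)
    phone = PHONE_RE.search(text)
    return {
        "name": lines[0] if lines else None,   # assume the first line is the name
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

print(grab_contact(SIGNATURE))
# {'name': 'Jane Doe', 'email': 'jane.doe@example.com', 'phone': '+44 1234 567890'}
```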
The LinkedIn upload from Outlook still doesn’t work with Firefox by the way. Firefox is a great browser but if it wants to be ubiquitous it’ll need to cover all uses - some sites I go to can’t support it either, at the moment.
06-13 – Indebted
Indebted
I can do no more than mention this as I don’t claim to understand the full picture. $40 billion of debts have been written off with immediate effect, saving an estimated $1.5 billion a year in interest payments. Clearly there are a raft of conditions (which is the bit I don’t fully grasp) but the write-off has to be seen as a good thing. What is now required is that us decadent westerners get to grips with what caused the situation in the first place and our role in that, and we also apply some conditions on ourselves to ensure it cannot get to the same state again.
06-16 – Shop of the present
Shop of the present
A funny thing happened on the way to the Metro RFID shop of the future yesterday. For a number of perfectly valid reasons, as I arrived in Düsseldorf I took a taxi to the only fragment of address I had - namely the above, which I knew was situated in a suburb called Neuss. Before long we pulled up at what appeared to be a hypermarket. “Shop of the future,” I thought. “I wonder what’s inside.” So I paid off the cab and waved him on his way. After a few minutes of walking around, the truth dawned. This *was* a hypermarket, absolutely of the present. All rather surreal - or rather, real, when I was expecting pretend. It wasn’t a problem - very kindly, the store manager (I sat in his office like a kid who’s lost his mum) sorted me another taxi and I was on my way.
RFID turns out to be a lot more simple, interesting and complex than I previously understood. Simple - it is no more or less than a standardised code that can be attached to any object and thus linked to a piece of data, somewhere. Interesting - from a philosophical standpoint, we have seen a major evolution of our understanding of data with the arrival of XML - again a simple construct but which enables data to know something about itself. RFID extends this concept into the physical world, enabling a wealth of innovation to be built on it. Complex - nobody knows exactly where this takes us, and there will be a number of technological, practical, and even socio-political challenges to be faced along the way. For the moment there is time to consider all of these things as the technology is not quite yet mainstream. RFID tags still cost about 30 cents each, which is prohibitive for many applications, and the scanners and other kit items are a long way from being commodity items.
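To make the “simple” part concrete, here is a minimal sketch of my own - the tag codes, fields and values are all invented for illustration - showing that a tag is just an identifier, and everything interesting lives in the lookup it enables:

```python
# Hypothetical asset register: each (invented) EPC-style tag code maps to
# whatever we choose to record about the physical object carrying the tag.
ASSET_REGISTER = {
    "urn:epc:id:sgtin:0614141.107346.2017": {
        "item": "Backup tape, monthly archive",
        "location": "Offsite store, rack 12",
        "last_seen": "2005-06-01",
    },
    "urn:epc:id:sgtin:0614141.107346.2018": {
        "item": "Conference table",
        "location": "Building 2, floor 3",
        "last_seen": "2005-05-14",
    },
}

def scan(tag_code):
    """Resolve a scanned tag code to whatever we know about the object."""
    return ASSET_REGISTER.get(tag_code, {"item": "unknown", "location": "unregistered"})

print(scan("urn:epc:id:sgtin:0614141.107346.2017"))
```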
Retail is currently leading the way, but my guess is that the first opportunities lie in other domains - tagging of tapes and optical disks for more efficient archiving, for example, tagging of fine arts and museum items to simplify inventory taking, maintenance of asset registers by medium and large organisations (as anyone who’s had to crawl around under a delivery of 50 tables, label them up and log their asset numbers will understand). Easy-to-access information about poisons, solvents and so on. Ski passes. Recorded delivery and restitution of lost parcels. Military procurement. Drug labelling. The possibilities are endless. The challenges are endless too - not just integration and data cleaning but security, civil rights, the power of the major corporations, fraud, misuse and so on. The important thing at this stage is to be informed.
06-23 – Frisson
Frisson
Now, there’s an excellent word.
06-23 – The mail must get through
The mail must get through
What perfect days these have been for working from home - perfect, that is, had my ADSL connection been working. Eleven o’clock yesterday morning was the time the deity of all things connected chose to break the link between my home network and the wider world. I have been stumbling along for the past 36 hours in the nether world of GPRS, trying to cope with 9MB emails and voice calls all coming down the same line. I could resort to a good old modem connection, but there is no wall socket in my office so I’d have to perch in the kitchen, not ideal.
It is quite fascinating - just at the same time as we (at Quocirca) are putting together a report on the importance of email, that this should happen. The psychological state one finds when one’s email is no longer available is akin to losing one’s door keys or address book: email has become a prop upon which other functions take place, and without which we start to flounder. Or at least, I do.
September 2005
09-19 – Today I shall mostly be…
Today I shall mostly be…
Flying to Vancouver. At least, that’s how it feels at the moment - a nine-hour flight gives the impression that a whole day is passing; as we near the end of the journey, however, we change our watches and find the whole day is only just starting. It’s a sensory illusion - the day, that is, not the flight - entirely constructed in the imagination, bounded by sleep.
In other words, before I waffle on too much, a long-haul flight offers a wonderful opportunity to reflect on the most banal. Day dreaming is positively encouraged, given that there’s nothing else to do (the three films have finished, the cards are filled, the email inbox is up to date etc). Soon, a slice of pizza will arrive and offer a way out of the doldrums, itself a little surreal given that it is only - now - 9 in the morning. Pizza for breakfast, whatever next.
So, it’s been a while since my last entry to this blog. Three months, if I’m not mistaken, and with good reason as all my spare writing energies were engaged in finishing the book. This is now done, and is (I understand) due out in a matter of a few weeks. I’m still in too deep - a bit like decorating a room, I can still see every brush stroke. Already, however, the details (and the painfully long hours) are starting to fade.
Which is good.
09-24 – Seven Lessons from London
Seven Lessons from London
Several funny things happened on the way back from Vancouver, some better, some worse, so I thought I’d share them as cautionary tales. Ignore them at your peril.
1. Never leave your hotel details on the plane
It was all so simple. Having spent three days in Canada, I would touch down at London Heathrow at 1.30 in the afternoon. At 4PM I was due at the Institute of Directors to give a presentation, so there was enough time to get to a hotel, change, shower and maybe even squeeze in ten minutes of rest. To my delight and surprise, I did sleep on the plane; this was all the more surprising given that there were babies everywhere - Air Canada must have given a discount for multiple babies or something, as they were next to me, behind me and across the aisle - but they all slept (that womb-like rumble probably helped) and so did I (the Melatonin probably helped as well). My relief was so great, I failed to notice I was no longer in possession of the piece of paper containing my hotel details - a fact that only dawned on me on the Heathrow Express to Paddington.
That’s OK, I thought to myself, the hotel’s just round the corner from Paddington, isn’t it?
2. Never trust essential details to memory
I walked round the corner to Lancaster Gate, as I recalled a map which showed the hotel just on the edge of Hyde Park. As I trundled along, my black wheelie flight case meandering behind me like a labrador puppy, I checked my watch: 2.15 PM. Loads of time. And there was the hotel - the Hyde Park Thistle. Perfect.
3. Always call to check the reservation
As I spoke to the lady at the Hyde Park Thistle, I realised I’d made a mistake. There was no record of my booking: no worries, she told me, there was a Jon Collins checked in at the Kensington Park Thistle, just a 15-minute walk across the park. Could she please check with the hotel, I asked, as I didn’t want to make the same mistake twice. She tried to call the Kensington Park, to no avail - as she left a call to her own desk ringing, she explained the hotel chain’s 3-ring policy to me. Never mind, she was confident my hotel was the Kensington Park; nothing could go wrong.
4. Never trust computers (hotel systems or otherwise)
At the Kensington Park Thistle at 2.45, disaster struck. When the hotel had no trace of me (how inevitable, in hindsight) on their computer, I decided to check my original online booking, so I booted up my computer. Or didn’t. The message “Operating system not found” appeared, and a clicking, whirring sound could just be heard, like a tiny man trying to start a motorbike inside my hard drive. This was not exactly what I needed. After twenty minutes or so, the desk clerk had got through to Thistle Central Reservations and had found my booking - replete with booking number - and he had scribbled it on a piece of paper. Jon Collins - Thistle Euston. With 55 minutes left, there was no way I could get to the hotel before going to the IoD. All my best-laid plans had failed.
5. The tube is faster than the taxi
After heading towards Kensington High Street tube (direct to Green Park, 5 minutes from the IoD) I decided to hail a taxi - not least to compose myself and change my shirt, avoiding eye contact with passers by as I did so. The traffic at Hyde Park Corner was typically heavy for a mid-afternoon, and we went nowhere for a goodly while, eventually arriving at Pall Mall at just after 4PM. I had called ahead to explain the situation, so everything was fine.
6. Always take a backup with you
It is a reasonably standard experience to arrive at an event and be told that one’s presentation has already been installed on the speaker’s laptop - so much so that the last of my worries was whether I would need to bring an electronic copy of my pitch. However, the falling faces as I explained the plight of my laptop told me something different. Now, despite the fact that I tell others on a regular basis to do good backups, I have been known to be less than reliable in my own backup routines. For once in my life, however, this time I had a USB stick in my pocket, and I had synchronised it with my hard drive only a week or so before. Through this double quirk of fate, I was able to put a copy of the presentation onto the other speaker’s laptop.
And thus, victory was snatched from the jaws of defeat. Until…
7. Keep a spare shirt
The presentation went well, and a couple of people told me afterwards how relaxed I was; I explained how I had believed I was already dead - nothing could possibly get any worse. I had lost a laptop and I felt like I’d been wearing the same clothes for two days - which indeed I had. I was delighted to be informed that the IoD had a shower in the basement, and I freshened up in time for dinner (as an aside, the IoD comes highly recommended - lovely people, excellent service and great value). It all went well until the puddings: we were seated at tables of twelve, round tables with large, square, white tablecloths that hung to the floor. Indeed, they came over the shoes of one poor soul who - and I hasten to add he was perfectly in control of his faculties - managed to entwine his foot in the tablecloth and fall backwards, pulling an entire load of empty plates and not-so-empty glasses towards him. Fortunately everything stayed on the table; not so fortunately, the glasses of red toppled over and landed in such a way as to send a flurry of merlot droplets towards yours truly. I was not so much drenched as well-spotted, and the shirt I still required for the next day’s meetings was now ruined.
Still, it could have been worse. As we left I was asked if I planned to take a taxi.
“No, I think I’ll walk,” I said.
October 2005
10-03 – Rush - Chemistry available for pre-order
Rush - Chemistry available for pre-order
Here’s the message that went out on the wires last night:
Just to let you know, the Rush book I have been putting together over the past two years is now finished, all bar the shouting. According to my sources, “Rush - Chemistry” will be going to print Monday week, ready for release at the end of October. It is available for pre-order here.
You can also pre-order it from Amazon.co.uk or Amazon.com - if you do, feel free to use one of the links, every little helps!
10-11 – Audioscrobbler
Audioscrobbler
This looks like a really great idea - I’ve only just got round to trying it out, but the idea is to let others know what you are listening to. The charts thus generated are very different from the ones radio would have you believe, not least because they reflect the staying power of songs and artists, not just the ability to get promoted. My Audioscrobbler page is here, by the way.
10-11 – Comments issues
Comments issues
A couple of people have tried to put comments on posts - for some reason they can’t at the moment, but I can’t see why. There is an upgrade to WordPress, which I shall be implementing shortly.
Update: I believe I’ve resolved this now.
10-18 – New version of N E O now out
New version of N E O now out
There are few products I will endorse unreservedly, but Nelson Email Organizer has been a bit of a life changer for me - the quantity of email I receive seems to follow an exponential curve all of its own; this last quarter I received as much email as I did in the whole of last year, so goodness only knows what it’s going to look like in twelve months’ time!
Anyway, I was recommended to NEO by a colleague, and I wonder now what on earth I would do without it. It indexes all of my 40,000 emails and lets me search by sender, attachment type, category and so on, as well as search string. I’m currently upgrading from 3.0 of Neo Pro to 3.1, which triggered me to write these few thoughts while I wait for the thing to finish.
There’s a free version of NEO available from the web site. I can’t remember what prompted me to upgrade to Pro (apart from the fact that it’s so damn good), but I would do it again. The only weakness I can think of is that it doesn’t search attachments - still, can’t have everything!
November 2005
11-03 – Chemistry is out
Chemistry is out
My publisher has just told me that copies of the book have now been received from the printers, which is a few days in advance. Hurrah! I shall be sending pre-order copies out as soon as I receive them.
For further information or pre-order see the Chemistry page. Also, here’s a banner ad produced by my son, they do grow up fast :-)
Also, it was great to see the book as “perfect partner” to the R30 DVD on Amazon.com. Not sure how these things are worked out, but I’m not complaining!
11-05 – What is the music industry, anyway?
What is the music industry, anyway?
The recent, utter faux pas by Sony BMG in hiding copy protection software on the computers of unsuspecting individuals raises a number of interesting questions about what kind of service music companies are trying to provide to their customers. This debate is raging on, and yet another opinion on the subject won’t particularly help at this stage, so I thought I’d take another tack and consider exactly what the so-called “music industry” is for in the first place. Through my various roles I have met a number of people inside the biz whose opinions are directly opposed to the general perception of what the industry stands for - so are they wrong, or not being heard, or is it that the industry itself is far more complex than we allow?
Following various discussions I’ve been led to believe the music industry occupies a number of roles. Firstly, it is a loan agency. Musicians and artists are considered in terms of market potential and business risk, and are loaned money to make recordings and pay for their management, marketing and distribution. This latter point is important - there are plenty of artists that didn’t realise it was their own money being used to these ends when they signed the initial contract. The “advance” is exactly that - an advance payment against expected royalties, a.k.a. a proportion of the money made from each sale. Some bands - Marillion is an obvious example, but there are now plenty of others - have twigged the fact that they can get this money from elsewhere, on more reasonable terms. It is surprising that other financial institutions do not get wise to the potential of funding the arts, but in all probability they just do not understand how to underwrite the risk.
Second, the music industry offers a recording and packaging service. Many music companies own their own studios, as I understand it, and there are independent studios (each with their own specialist skills and reputations) that can be booked. At the end of the day however (as illustrated by David Gray’s White Ladder), you don’t always need an expensive studio to produce a good album. Once the music has been mixed (final arrangements selected) and mastered (made to sound on the DVD like it sounded in the studio), there is a manufacturing process involved - again, each stage can use in-house facilities or can be outsourced, the same as with any business.
Then there is a global distribution network. There’s no magic here - the creation of a global organisation is a painstaking task, requiring the creation of local offices, relationships with suppliers, channel partners and the media in each geography, an understanding of legislation and market dynamics, and so on. This is probably the real battleground for music companies, as it is the only area where they can really differentiate themselves (apart from the artists, but they are subordinate to this in importance - the best artists in the world will not make as much difference to profits as having the better distribution network).
There is a sales, marketing and promotion service. Again this is global - once a major label has decided to put its marketing muscle behind an artist, they will appear on every street corner and in every magazine, whether they are any good or not. This is a hugely valuable service, but it is not essential that it is carried out by the label itself - indeed and again this role is often outsourced.
Finally, there are other areas to the biz, for example the publishing business, which protects the rights of artists in its stable. Anyone can own the rights to an artist’s music - Michael Jackson owns much of the Beatles catalogue, for example, and David Bowie has sold the rights to his own back catalogue as a going concern.
While this may be an oversimplification, these five areas give some indication of how complicated things can get. While the recording elements may just want to get the music heard, the distribution elements seemingly work in cahoots with the publishing elements to protect their existing models. Enough for now - the sad thing is that while certain parts of the biz may be demonstrating their intransigence, it is the industry as a whole that suffers. There is a huge market for music, and it may be even bigger than the existing market suggests; however, the proportion of the market that depends on traditional models will inevitably shrink.
The bottom line is this: give value to the people - make the benefits outweigh the costs, and they will pay. There are too many examples to state here, from Simon Cowell to Steve Jobs, from Marillion to George Michael. There is every indication that people want to spend a good proportion of their money on music in all its forms; ill-conceived attempts to implement rights management technologies are only doing damage to what is already an industry in trouble.
11-07 – Qualcomm and SCO
Qualcomm and SCO
There’s been a recent spate of releases from Qualcomm about anticompetitive behaviour - not least this one, which is a responds-to-allegations kind of release. Today we saw the release “QUALCOMM Files GSM Patent Infringement Suit against Nokia” (not yet available on the Web). While patent litigation among mobile device companies is not news, the release has to take credit for having one of the best lines seen in a recent release - “Patents that are essential to a standard are those that must necessarily be infringed to comply with the requirements of the standard.” Took me three goes before I could even understand what it meant…
But anyway, none of the device companies are having a particularly good time of it at the moment, but do they really have to descend to quite so much bickering? While there may be valid short-term commercial reasons to do so, it’s surely not in anyone’s long-term interests, least of all those of the end users of technology.
It reminded me to go check out the current SCO situation. While SCO made quite a hullabaloo about patents a year or so ago, it was rather quieter about its near de-listing from the NASDAQ due to accounting errors. While any litigation may be rumbling on beneath the surface, SCO is now getting on with its core business - promoting its products - which has to be a healthier way to spend its time. I’m not saying any of these things were connected, just that the majority of people don’t really want to know; to them it’s all so much dirty laundry.
Meanwhile, in the operating system world, we are starting to hear of companies using Linux more as a lever than as a strategy. More than once I’ve heard companies saying Linux was the best thing to happen to Solaris, and no doubt it has impacted SCO in the same way - meanwhile of course, Microsoft is having to modify its own strategies to ensure it is competitive with Linux. End result: everyone is better off, no vendor is complacent, companies are forced to innovate ever harder to keep their customers sweet.
Which is just as it should be.
11-08 – Self-important, exhibitionist geeks
Self-important, exhibitionist geeks
In which the writer castigates the anti-bloggers, then the bloggers, and then everybody else, just to be evenhanded
My wife said to me the other day, “aren’t blogs just the online diaries of self-important, exhibitionist geeks?” Interesting perspective, I thought as I was tucking into my breakfast. That day I chose scrambled eggs, and I was wearing… not really. What I really thought was, “better write a blog about that.” Besides, I don’t eat breakfast.
There’s plenty of debate going on about the strengths and weaknesses of blogs, and I frankly don’t get it. On one side there is a camp that says “blogs are going to change the world” - a view which to me is both flawed and dangerous. On the other side there is a camp which says “blogs are an irrelevance”, or worse, speaking with palpable indignation that blogging should attract any attention whatsoever. As one who clearly has a blog, I may be biased, but equally I find myself in neither camp, which is confusing.
So, what is all the fuss about, and why is it causing such a polarisation of views? At its heart, a blog is no more nor less than a very simple Web publishing mechanism. Were I writing this in FrontPage and then running Web Publishing Wizard, would I write anything different? I don’t believe so - having uploaded plenty of text-based content to the Web over the years, the only difference to yours truly is that I don’t have to muck around with other editors. I type, I hit “publish”, and I’m done.
Ultimately however, the end result is online information, in the same way as magazines, books and newspapers hold hard copy information. There are plenty of publications that are shoddily written, poorly edited or in bad taste, but nobody would ever say “all magazines are bad”, for example. So, why should we say the same about blogs? Similarly, there are Web sites a-plenty that give us chapter and verse of the goings on of some obscure family in Maine; there are plenty of shabbily produced, poorly formatted and otherwise dubious Web sites. These are publicly accessible, and often unavoidable as they somehow get presented by search engines as “most relevant” despite all attempts to the contrary.
Somehow, however, we ignore such Web sites quite easily, but it is abhorrent that similarly low levels of quality might exist in blogs. To the anti-bloggers, this is proof if any is necessary that blogging is a crime against humanity and should be stopped. Now.
Meanwhile, the lowering of the bar to more simple Web publishing has also opened a number of opportunities, which leads to the other side of the coin. The IT industry has a tendency to hype the latest trend: this is possibly due to the fact that much technology is disappointingly dull and deserves a bit of a lift, but more likely, for commercially minded types, it is a way of maximising the chances of commercial success. Finally, and whether we like it or not, this is a trait among tech types: we really do get genuinely excited about things technological and their potential to change the world. And so we have the overhyping of blogs - how many thousands of bloggers are really going to turn their waxings into hard currency, for example? On this, the Greek chorus may have a point - there is a lot less to blogging than meets the eyes of some zealots. Meanwhile, perfectly good columnists are calling themselves bloggers for fear of becoming irrelevant, a bit like parents trying to join in the conversations of teenagers.
Perhaps both sides as presented above are missing the point - I can already hear little voices questioning my definition of “blogging”. So far in this piece, the interactive nature of blogs has been missed, a clear discrepancy. It could be (and no doubt has been) argued that a blog is no more than a single-user newsgroup, a bulletin board channel in which the moderator creates the content and other contributors are reduced to mere commentators on the main story. This would be true were blogging in some way hierarchical, but instead blogs form a meritocratic network of networks. The blog itself is of limited value; a network of blogs, where participants exchange views and develop ideas, is really where the action is. Blogs obey Metcalfe’s law - the value grows in proportion to the square of the number of participants, or roughly with the number of possible links between them.
Let me repeat that, then: a single blog is of limited value, and this is where I fully concur with the anti-bloggers. In the signal-to-noise ratio of the blogosphere, this is the noise - feel free to ignore it. While there may be plenty of such blogs, they make no sound if you just close the window. I don’t want to downgrade the value of using blogging tools for more conventional web sites - I’ve seen charity sites and other news feeds operate with a blog mechanism, and why not, it’s just a tool. Even if blogging were useful just to the people writing their own blogs and commenting on the blogs of others, it would already be of enormous benefit to them; as it happens, many blogs are generating quite a substantial readership, which suggests others can benefit from the debate whether or not they choose to contribute.
Finally then, there is the strength of syndication. Blogs feed information and content, and it would be very difficult to separate the wheat from the chaff without some filtering mechanism. So, we have tools like Bloglines and Pluck, which aggregate blogs into a single window, suggest blogs based on current settings, and enable the easy addition and deletion of blogs. To look at single blogs in isolation would be a bit sad; it is the power of the many that matters - and the better blogs will undoubtedly filter to the top.
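Under the covers, an aggregator of this sort is doing nothing mystical. The sketch below is my own toy illustration - the feed URLs are invented - but it shows the basic move: pull the item titles out of a few RSS feeds and merge them into one reading list.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical feed URLs - substitute the blogs you actually follow.
FEEDS = [
    "http://example.com/blog-one/rss.xml",
    "http://example.com/blog-two/rss.xml",
]

def fetch_titles(url):
    """Return the item titles from one RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    return [item.findtext("title") for item in tree.iter("item")]

def aggregate(feeds):
    """Merge every feed's item titles into a single reading list."""
    reading_list = []
    for url in feeds:
        reading_list.extend(fetch_titles(url))
    return reading_list

if __name__ == "__main__":
    for title in aggregate(FEEDS):
        print(title)
```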
So, there we have it. Should we expect today’s bloggers to be the media moguls, business leaders and presidential candidates of tomorrow? Don’t be silly. There may be one or two characters who ride the wave better than others, and come out on top - and good luck to them. Blogs are already used in a multitude of ways - from major companies market testing ideas, to hobbyists sharing information, and indeed no doubt to self-indulgent diarists. For myself, I really do not see the relevance of what someone had for breakfast, nor am I interested in the golfing progress of an executive who has clearly been prompted to add more of himself into a corporate communication. In the meantime, as a mechanism to share information, test ideas and build relationships, blogs deserve at least a place at the table. What we are witnessing is the democratisation of technology, a lowering of the bar, and that is always to be applauded.
For me, a blog serves as a channel for ideas, and a placeholder for news: the blogging mechanism has enabled me to create a three-column, dynamically changing Web site with minimal trouble. Also, as I am supposed to be a commentator on all things technological, I believe I should at least try these things out, just as I play (and sometimes struggle) with the latest gadgets. No more, no less, and if just one person has read this far, then it’s been worthwhile.
Next time I’ll try to answer my 12-year old son’s question, asked out of the blue the other day: “What exactly is the point of Linux?”
11-21 – Duping Google
Duping Google
There was a very interesting chap on this morning’s “Start the Week” on Radio 4. He pointed out that, according to the US Patriot Act, government agencies can access information on third-party servers (e.g. Google, in his pitch) without either requiring the consent of the information source, or even having to disclose that such access is taking place. This is fascinating and curious (to say the least) given the amount of previously private traffic that therefore becomes agency-accessible. For example - standard voice calls need a warrant; by this definition, voice over IP (Skype, etc) does not.
OK, Skype is peer to peer, and I don’t know whether the definition of “third-party server” covers caches and other hardware mechanisms for getting information around the Web, so there should (and no doubt will) be a debate on this; meanwhile, I was wondering when people would start to consider duping the system. For example, if some agency were curious about my particular search requests (unlikely, admittedly) and I was in the slightest bit worried about this (even less likely), I would start searching for random items just to confuse the system. Given the availability of APIs to Google and the rest, I might even write a script to search for random sequences of dictionary entries, and run it as a screensaver - my own search entries would be lost in the noise.
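For what it’s worth, such a script needn’t amount to much more than the sketch below - a toy of my own, in which the search call is a deliberate placeholder rather than any real search API:

```python
import random
import time

# A small word list stands in for a full dictionary
# (on most Unix systems you could read /usr/share/dict/words instead).
WORDS = ["marmalade", "gasket", "isthmus", "bassoon", "pergola",
         "quango", "turbine", "lichen", "spinnaker", "ocelot"]

def submit_search(query):
    """Placeholder for a real search call - here we just print the query."""
    print("searching for:", query)

def noise_generator(queries=5, words_per_query=3, pause_seconds=1):
    """Fire off a handful of random dictionary-word searches as cover traffic."""
    for _ in range(queries):
        query = " ".join(random.sample(WORDS, words_per_query))
        submit_search(query)
        time.sleep(pause_seconds)

if __name__ == "__main__":
    noise_generator()
```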
Of course, if everyone started doing this, it would be a disaster for Google and other search providers, who are far more interested in the commercial potential of their search data. There are a billion searches a day apparently, offering huge opportunities for the companies collecting it - and no doubt, for the agencies that plug into them. In protecting against the latter, great damage would be done to the former.
At the end of the day however, would the end users care? Would they care enough to protect themselves, and would they care about any damage caused by such protection? Probably not - to either. This example of what might happen is just one of many potential future end user behaviours, the majority of which won’t happen. The point instead is that all the highest aspirations of either Google or the US government are ultimately hostage to the vagaries of the technology consumer.
2006
Posts from 2006.
January 2006
01-11 – A new dawn, a new day...
A new dawn, a new day…
… and a very happy New Year!
I hate to scare anybody but, believe it or not, it’s already the 12th of January. That’s more than a third of the way through the first of twelve months - is it any wonder that Christmas comes around so fast? A bit like builders who have unexpected complications when they’re half way through a job (i.e. all of them), we’re all a bit in denial when it comes to the passage of time.
All the better reason to pack it all in - that’s the plan anyway. And so it is that I have left that kindly bunch of fellows (male and female) at Quocirca, and I am taking a month or so off to write another book. I expect to be back on the analyst circuit before anyone notices I was gone - I’m already missing the company! For more information on what an industry analyst is, watch this space.
Meanwhile, the Rush book is doing well. It appears to have been generally quite well received - 8/10 in the latest (February) edition of Classic Rock for example, and there are a couple of good reviews on Amazon UK. We managed to catch the next print run to fix any typos and clangers, so thanks to anyone who submitted them. Having been held up by customs, the book still doesn’t seem to be available in the US, but the distributor has assured the publisher (who has assured me) that it should be released any day now. I was also told that most copies have been snapped up by Amazon, so it’s unlikely to appear in any retail outlet any time soon! Meanwhile, as you know, you can always order from me - please specify if you want it signed. I decided not to put the postage prices up in the end; I’d rather take the hit than make it prohibitive.
There’s plenty of other things going on, all of which will unfold in good time. For the moment, I can say “… and I’m feeling good” - no doubt this year will have its ups and downs, but that’s only to be expected! Good luck in all your endeavours, and may our paths cross frequently.
Jon
01-18 – The VaioPad - it works!
The VaioPad - it works!
Warning - this may get a little nerdy. Eighteen months ago I wrote an article for The Register about voice recognition, in which I questioned why there wasn’t such a thing as a handheld voice recognition computer. It wouldn’t need new technology, I argued, just the clever use of existing components. I even speculated that my trusty old Vaio PictureBook, despite losing several keys and being virtually unusable as a portable computer, might suffice as a voicepad.
Ever since then I’ve wanted to find out if it were possible. Over the New Year I finally decided to give it a go, taking apart said PictureBook and re-assembling it. And guess what - it works! There’s no reason why it shouldn’t, of course, but it’s still good to test these things out. That’s what I’m sticking to, anyway.
There are some pictures here. I remain slightly bemused why the oh-so-innovative computer industry can’t create something like this - I’m sure the market is there, and let’s face it, there’s little to do other than package things up. I had high hopes for computer companies like Oqo selling a voicepad, but they don’t seem to be interested either - now the Oqo ships with Tablet PC, all they have to do is ship a mike! Oh well, if and when it comes, you heard it here first.
01-18 – The VaioPad - Pictures
The VaioPad - Pictures
Here’s some pictures of the Vaiopad. Note I didn’t have the recognition software running at the time - the priority was to get the page done. First, a front view in its case - note it is upside down, to facilitate being viewed when carried. Or something.

Here’s a picture of the edge, showing the duck-taped hole to reveal the ports.
The computer out of its case, again upside down for no apparent reason.
Here’s the computer, demonstrating how the screen is indeed detached from the base. I think it might have voided the guarantee, but I’m not sure ;-)
Finally a shot to show how the screen re-attaches to the base of the laptop.
01-26 – The Bells
The Bells
Here’s a snippet from Mike Oldfield’s Web site:
“Hello you ! It has surely been a time of big changes and realisations these last few months, not least of which has been the writing of my Autobiography. This is nearing completion with the help of my co writer Jon Collins and hopefully will be published this year if not early next. During the process of bringing this book to life I have realised a great many things about myself and my position in the scheme of things.( Oh for the gift of hindsight ! ) This will surely influence my future personally and creatively. I have contractually three and a half more years to work on a new piece of music so this will give it a chance to evolve and mature nicely. Meanwhile look out for some promotion for the Platinum Collection , the book , and who knows what else the fates may send . With all my best wishes : )”
It’s a cracking story, and I’m delighted to be involved.
01-28 – Alphadoku Solution
Alphadoku Solution
I’ve had a few requests for the solution to the 25x25 Alphadoku puzzle I posted last year, so here it is.
February 2006
02-01 – Hooking up with MWD
Hooking up with MWD
As from today, I’ve thrown my IT analyst hat in with Macehiter Ward-Dutton Advisors (MWD), a UK-based analyst house whose focus is IT/Business alignment. In the end it was quite an easy decision - my IT focus has always been the same, though I have couched it in various ways according to the terminology of the time. Initially I shall be concentrating on service management - my first job is to define it so watch this space! I must stop saying that.
You can check the press release on the MWD site, or download it from here.
Neil Macehiter and Neil Ward-Dutton are great guys, and I’m really looking forward to the coming months. Can I also take this opportunity to wish them happy birthday, as MWD was officially launched exactly one year ago today.
02-06 – Nobody Likes Feedback
Nobody Likes Feedback
It’s Monday morning and the start of a new week. First thing to do - check email, have a browse, read some blogs and check Amazon for the chart position of the Rush book (OK, it’s in the four-thousands now, but why not, it’s all good procrastination). The eye inevitably strays onto the reviews… and ouch! There’s another three-star baby in there. “Not well written, badly formatted, poor pictures.” Hum.
Following an initial, “What the heck” denial, the reaction is to put oneself into an entirely self-serving justification mode, “If only he knew what effort went in, etc, etc.” Of course this is a huge mistake. Next stage comes the, “Maybe he’s right,” (which of course, he is); then the recognition that, of course, he is right, both to have his opinions and in his justification of them.
All of this within the space of about a half hour, a microcosmic version of the four stages of trauma (denial, anger, bargaining, acceptance) or the four stages of team dynamics (forming, storming, norming, performing). Yet again I discover that my feelings are not my own, but purely some sequence of chemical reactions to cause various synaptic connections to form and reform in my grey matter.
Hum.
Speaking of connections, I notice that Garr Reynolds put a handy summary of the Cluetrain Manifesto up on his blog over the weekend. If I may summarise his summary, the manifesto boils down to three words - “Markets are conversations.” About five years ago it appeared to many as absolutely the right thing to be said, just when the internet was fouling up corporations’ efforts to put some kind of sheen on their doings. It was also at totally the wrong time, a victim of the bubble bursting, all of its sage ideas left to rot as bricks overtook clicks in the fashion industry we call IT. So - it’s good to see the Manifesto re-emerging, or at least being given a bit of credit.
Meanwhile, here’s the link. I fully applaud Amazon’s willingness to publish comments; indeed, I use them all the time when deciding what to buy. However, the interaction is all one way - there can be no Cluetrain-style conversation initiated here, either between reviewers, or between reviewer and author. This opens up a whole set of questions, not least: should the author actually be able to comment on the feedback of others? In the case above I would have loved to do so, but I’m not sure it is always such a good idea; not least because it might stymie further critique, but also because the author’s own comments might not necessarily be that valid. I’ve seen the former when I’ve participated on mailing lists - there’s no better way to halt a perfectly good tirade than for the author or artist to wade in with their own opinions. It’s one of the reasons why I’ve stopped posting things to the Rush lists, for example, though I confess I do have a quick peep every now and then. As for validity, one could probably determine what stage I was at (denial, anger etc) based on whatever comments I made.
Second (and this goes to the heart of Cluetrain), are all markets really conversations? For a conversation to exist, there has to be both an open channel of communication and a common language available to all sides. I believe that Cluetrain is saying companies should open their ears and start listening to those customers that are already in conversation. Perfectly valid, but many customers are considerably quieter, and it would be a big mistake to prioritise conversations with the more vocal customers over serving the needs of the less vocal. Conversations are valid up to a point, but then, whether it’s a book being written, or the latest brand of soap powder, or a new gizmo being released, there has to be scope for leaving people to get on with it. Equally then, on the other side of the fence, producers have to accept there are consumer-oriented conversations that should take place without the producers being present - kind of, “would you mind leaving the room now please, we want to talk about you in private.”
As always, the keyword is going to be ‘balance’. There can be no absolutes here - if anything, with blogs, discussion boards, conversations, feedback and the like, it all serves to illustrate how far we have to go with these interactive technologies. It’s one, big, indeed global experiment, one in which I am very happy to participate.
Feedback welcome ;-)
P.S. Hands up anyone who thought this post was to be a comment on the Rush EP…
02-08 – Avecho - what did I miss?
Avecho - what did I miss?
Go away for a month or so, and all sorts of things get missed. Like Avecho, the antivirus company that refused to tell anybody how its technology worked. I did a brief bit of consulting for them a few years ago, which was all a bit tricky as they wouldn’t tell me either. “It’s so easy, if we tell people they’ll all do it,” they said - so I was left none the wiser.
Of course, it left my mind racing. I ended up with my own theories as to how it worked, which now have largely been superseded by the availability of “sandbox” protection mechanisms.
Anyway, the company went into administration in December, and was almost immediately bought by another company, Stylish Ltd, which I had never heard of. Even more secrecy, all very interesting. I had a quick browse round the Web and found the following post by a disgruntled employee, but once again, I find myself none the wiser.
Still, at the time it was nice to see Essex.
02-08 – On Googleseep and Netabuse
On Googleseep and Netabuse
So, Google’s blocked BMW from appearing on its search engine, because it stuck in a whole bunch of spurious pages to get its ranking higher. Hurrah for free speech and democracy!
Do I really mean that? No. For a number of reasons.
First, when did any Web site worth its salt not do things to increase its ranking on Google and other search engines? Plenty of Web sites incorporate keywords to make them more attractive to indexers. There is a whole spectrum of quite acceptable keyword use possible before this becomes abuse of the mechanism, and there are plenty of other techniques, including crosslinking between sites and so on. While Google may have picked on an obvious example here, there are no generally agreed criteria that I know of to determine what is use and what is abuse.
Second, as Google is only a company (controversial view, I know) at the end of the day, it still has its own corporate responsibility to think about. Unilateral action to block certain Web sites that contravene its rules is fine in principle, but while there is no clear process for doing so, Google could put itself in a difficult position. Should there be a warning for example, a 30-day resolution period, room for appeal? Google isn’t obliged to do any of these things, but the sudden finality of the current approach doesn’t seem particularly evenhanded. Either Google applies such restrictions even-handedly across all of the Web sites it indexes, or it may find itself open to the criticism that it is abusing its power.
Third and finally, this smacks of easy targeting. That is, “BMW is big and obvious, therefore bag it.” For a while now I have noticed it becoming almost impossible to search for certain types of information on Google. The majority of links are to sites to purchase products, and not to provide information - the result is wading through reams of results to get to useful information. Somehow purchasing sites are prioritised over informational sites - this will not be by accident. Some companies have multiple sites, each of which is selling the same catalogue of products so that it can appear in multiple places in the search results. Bloggers employ their own techniques - “please link to me”, they tell their friends, “that way my ranking will increase.” Such actions are totally un-police-able, and yet they’ve reached the point that if you want to appear at all, the only possible action is to join in. And so we have “Googleseep” - if Google is intent on slamming the big doors to make a good-sized noise, it will of course find its indexees seeking every possible “acceptable” way of getting around any regulations and mechanisms it imposes.
Clearly BMW overstepped the mark and needs to do something about it - last I heard, the company had fixed up its site and was waiting to be re-instated. All the same, even if Google’s actions were well intended and not just a thinly veiled attempt to generate some publicity (as some might suggest, and as if they’re not getting enough already), I think the company’s on a very sticky wicket indeed. Either it’s indexing the Web as it stands, or it’s not. If it wants to make the world better, it could adapt some of its own Chinese filtering technology (great article by Martin Brampton, by the way) to give all of us a way to see through the clag that litters most searches. If democracy is about freedom of choice, personal filtering of search results would be a good start. Perhaps the second point above could be somehow resolved with a Googletest utility to verify the acceptability of a site - would it be too much to suggest that untested sites are further down the rankings than tested ones?
As a final note, ‘BMW’, ‘motor car’, ‘red’, ‘convertible’, and please link to me, that way my ranking will increase…
02-08 – Rush Addendum updated
Rush Addendum updated
(edit Feb 9)
For Rush readers, I have put together a printable PDF of the addendum entries found to date. There are 22 of them, we shall know in time if there are any more to be found. Of these:
- Ten are dating or timing errors
- Seven are typos and editing errors
- Five are factual inaccuracies
Truth be told, I was feeling more than slightly uncomfortable about the number of errors found in this book. As I was flicking through to put together the above addendum, I started remembering exactly what a mountain of information (some of it inaccurate) I waded through to compile the book itself. It didn’t make me any less regretful of the errors, in particular the inexcusable typos (Getty! Argh!), but I do still believe that we did everything we could to get it as accurate as we possibly could at the time. In a word, sorry.
Oh well, onward and upward. Thanks to all those people who have pointed out the mistakes above. As indicated, the next reprint (due imminently) will fix the first ten issues in the text, leaving twelve which will probably now have to wait for the paperback.
02-10 – The IT brain drain?
The IT brain drain?
I had a very interesting conversation today with an IT consultant who was previously a network manager and who has been around the block a few times, so to speak. He saw one of the biggest problems in today’s IT environments as being that, with so many IT staff outsourced, many of the companies in question now lack the capability to make decisions about new technologies. He’d been talking about this problem to outsourcing firms, who were now suffering from the fact that companies had less and less of the wherewithal to involve them in new technology deployments.
It sort of stands to reason - a bit like in the game of Monopoly, where you need a threshold of cash. Without enough of it, you find you spend more than you make and very quickly you find yourself out of the game. If this is a trend, then IT managers will find it increasingly hard to keep their levels of technology understanding current, and keep up with what is once again an accelerating field of innovation.
Unlike Monopoly of course, large corporations are extremely complex and the reduction of IT skills in the management layer is harder to identify. For now, we can only mention that this could be an issue, and hope we are wrong.
02-10 – The people's front of WOMM
The people’s front of WOMM
I’ve been doing a lot of thinking recently about relationships. That’s business relationships, not personal ones - though admittedly the line does blur, as it probably would in any community. Community - we’ll come back to that, but there are a number of strands of thinking I want to bring together. Bear with me, and (in the words of Morpheus), let’s see just how far the rabbit hole goes.
First, back at relationships, it is a well known fact among sales people that good relationships drive good sales. This is not just about quantity, but also about quality - there is the oft-told anecdote of the feed salesman who spent years building a good rapport with a farmer, and eventually he won the business. Insert your own, favourite anecdote here.
Secondly, as well as quality there is also the question of quantity. I know of several ‘networking’ organisations that are essentially about lead sharing and pre-qualification between small businesses. BNI is one for example - the people that use such organisations swear by them, as they are a way of spreading the word, and the load.
The Internet (as a third point) has enabled relationship-based organisations to go into a kind of overdrive. A while back I wrote about Plaxo and Linkedin, two very different mechanisms for sharing contact information, keeping in touch and generating leads. There are plenty of others, Ecademy for example, whose goal (I paraphrase) is to turn its subscribers into power networkers - people who can maximise the potential of their global network. Make lots of money, that means.
Fourth we have blogs. This may seem kind of irrelevant at this point, but power bloggers do operate as a kind of network, as illustrated by James’s post here. The blog is both a communications mechanism and a marketing tool, and so it fits neatly into the toolkit for internet-based networkers. It’s also host to a wildly diverse set of conversations.
Finally, put it all together and what do you have but Word of Mouth Marketing. I’ve been hunting around and there appear to be several definitions for this, based largely on the starting point (choose from 1-4 above) of the person doing the WOMM. There’s plenty of other stuff that we can throw into the WOMM bucket - tipping point theory for example, and no doubt the wisdom of crowds (I can only guess because I haven’t read the book. I know, I know, I will…).
All make sense so far?
Good.
Then perhaps you can tell me why I have a problem with these things.
It’s not a very big problem, admittedly, but it’s still a problem. (As a digression, I still remember listening to Paul Hawken’s “Growing a Business” tapes all those years ago, when I first started my own company. “There are two kinds of problems, that’s good problems and bad problems,” he said. “Don’t get me wrong, they’re both problems!” Very good tapes by the way, highly recommended.)
The issue I have with all of these things is that they totally ignore the concept of community. “Community” is the raison d’etre of any network, and yet it is somehow assumed that the network itself is the engine of success, and not the community. Not true, I say. Bloggers also have a tendency to organise themselves into communities, and yet the assumption is still that the medium - in this case the blog - is more important somehow than the message. Not true again. A cursory inspection of any part of the blogosphere will reveal that bloggers are clustering around certain memes, and certain individuals will find themselves closer to the meme-core than others.
It’s the same thing in music. There are only a limited number of notes and so many ways of expressing them, but many music fans will express a preference for a certain subset of bands, or a single band. Go onto Audioscrobbler and follow some of the chart links around, and look at how people tend to listen to the same set of bands. These bands may sound very similar to other bands that are outside their own “listening community” - so they may get frustrated that they do not get the exposure outside their own ecosystem, their musical tide pool (obligatory Rush quote here: “each microcosmic planet a complete society”). So be it - that’s communities for you.
Certain individuals manage to transcend their own communities, so Robert Scoble and Seth Godin in the blogosphere, and Thomas Power in the networking sphere, are like Green Day in music. You’ll see them participating in multiple communities, or indeed, multiple communities participating in them.
Interpersonal communities are a phenomenon as natural as coral reefs and shoals. There are very big ones and very small ones; each has similar characteristics, with its own spiritual leaders and mavericks, administrators and regulators. It’s the same everywhere, in sport, politics and (dare I say it) religion. I’m not knocking them - I love communities, and I have my own that I love to participate in, both at home and at work. One unassailable fact of any community is that it is about active participation - passengers do not reap the same rewards as active members, and are not generally well considered (though the internet does make room for read-only participants). While priorities can change over time, I wouldn’t want to be an absolute passenger in any community, and therefore, to join one, I would have to say goodbye to others.
All I’m knocking is that many networking organisations and blog-fests try to give the impression that they are not communities at all. I think this is a perfectly natural thing to do as well - affiliations come with their own baggage, and people don’t always like to emphasise them. Equally, it is perfectly natural for communities to wish to grow their membership. Communities have many benefits, naturally appreciated by their members, and members wish to grow their own communities for a number of reasons - to enable others to share the benefits, to grow their own pool of influence (or indeed, revenue), or a multitude of others. The phrase “Join us”, while usually innocuously meant, can sound a little sinister, so recruitment often takes place in a more indirect way.
There are downsides of communities as well. What works in one community is totally irrelevant to another. Communities use different terminologies, which are often opaque to outsiders. Inside the communities, individuals find it difficult to understand why others don’t join them in debate, when the truth is that often there’s no doorway in, let alone a welcome mat. I know of a number of blogs that I find difficult to understand - “they don’t speak to me, but that’s fine,” as one musician once said about the music of another. Indeed, this post may all be gobbledegook to some people who came to this site, but make perfect sense to others (I hope).
That is indeed fine - that’s communities, and everybody is free to choose their own.
Still with me?
Blimey.
So - it’s not the fact that these communities exist that is the problem, but the fact that this is not acknowledged.
Which brings me to word of mouth marketing - WOMM.
I already made the comment that there are different kinds of WOMM (horrible acronym - but I can’t be bothered to write it out every time, even though it’s taken me longer to say I can’t be bothered!). The different WOMMs are based on the communities that the WOMMers come from, so the business network WOMM makes little mention of blogs, whereas the blogger’s WOMM sees them as an essential pre-requisite. Bloggers see the conversation as an end in itself, whereas power networkers see WOMM as a way of growing the network, with blogging one tool to help that growth.
As we can see, therefore, it’s about communities, but surprisingly the community factor is given little mention.
One thing that WOMM suggests is that to be a successful WOMMer, you need to “infiltrate” or at least participate in specific communities that will give you the biggest WOMM bang per buck. I’ve experienced it myself: with the Marillion book I was already in discussion with the online Marillion community when I came up with the idea of writing a book, and when Marillion decided to authorise it, they were able to advertise it to their own subscriber community. With the Rush book, it was perfectly natural to me to sign up to Rush lists so I could glean tidbits of information and inform the community of what I was doing. It seemed perfectly natural, indeed it would have been folly to do otherwise.
That’s all fine (at least to me) so far. What is less fine to me is the suggestion that there is a single discipline that works for every group. If this is more about the community than about the network, then what works for one community will not necessarily work for another. For example a lead generation network may be very comfortable exchanging leads and growing markets that way, but that approach won’t cut much ice in other communities, that are used to following different rules. Indeed, it is highly likely that each community already conducts some kind of WOMM activity, so successful WOMM would need to tap into that, rather than trying to impose some external structure.
Unsurprisingly, given the principle of community, WOMMers are forming communities of their own. There is already a WOMM association, the WOMMA, there are WOMM companies (with their own blogs), and no doubt there will follow other organisations, each with their own way of WOMMing. No doubt each will come up with its own way of doing things that works for its members and their immediate associations. No doubt also, there will be arguments between different organisations - this is community rivalry in action, and neither side will be wrong, as such. “People’s front of WOMM? Nah mate, we’re the WOMM popular front!”
It’s all about community. It always was and it always will be. For myself, should I be invited to a networking organisation, as I have been (to many over the years), or to join a club or association, I may well say no. Not because there is anything wrong with the community itself, but because it is a community. Perhaps I don’t believe I would fit, or perhaps I wouldn’t make it a high enough priority to make it worth my while. In the words of Groucho Marx, “I’d never join a club that would have me as a member.” Not strictly true in my case, but I do recognise it is about community, or joining a club, which requires more than a tick in the box to benefit. You get out what you put in.
Equally, while (I think) I get the principles of word of mouth marketing, I think we all need to recognise that there will be different kinds for different communities, and that many communities were doing it for themselves, long before the term was invented. Blogs or no blogs.
02-10 – View from the chair
View from the chair
Is this the first blog entry written from the dentist’s chair? I started during the drilling; I’m now having all sorts of moulds squeezed into my mouth. To distract myself I am trying to think about interpersonal networking and relationships, but the only relationship I can think about right now is between my teeth and the array of equipment beside me.
(this post was later edited for clarity)
02-13 – What does Hormel think about spam?
What does Hormel think about spam?
In one of those Monday morning moments, I got to wondering what the company that originally registered the term “SPAM” (the meat-like substance) thought about the term “spam” (the email scourge). A few minutes on Google and I found the following page on one of Hormel’s web sites. It boils down to the following:
“Let’s face it. Today’s teens and young adults are more computer savvy than ever, and the next generations will be even more so. Children will be exposed to the slang term “spam” to describe UCE well before being exposed to our famous product SPAM. Ultimately, we are trying to avoid the day when the consuming public asks, “Why would Hormel Foods name its product after junk e-mail?” “
That, with its reference to “mickey mouse” unsophistication and so on, I thought was a reasonably sanguine assessment.
Then I wondered, was the company ever able to send out any product-related email at all? After all, what would you expect your filters to do with email sent from spam.com?
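To illustrate the point - and this is purely my own sketch, not anything Hormel or any mail vendor actually does - even the crudest of content filters, scoring messages on sender domain and trigger words, would struggle to let anything from that address through. The rules, weights and threshold below are invented for the example:

```python
# A toy illustration, not a real filter: score a message on a few naive rules.
# The trigger words, weights and threshold are all made up for this example.

def naive_spam_score(sender: str, subject: str, body: str) -> int:
    score = 0
    if sender.lower().endswith("@spam.com"):      # the sender domain alone looks suspect
        score += 5
    for word in ("spam", "free", "offer"):        # crude trigger words
        score += subject.lower().count(word) + body.lower().count(word)
    return score

msg = ("newsletter@spam.com", "New SPAM recipes", "Try our SPAM and eggs offer!")
score = naive_spam_score(*msg)
print(score, "- junked" if score >= 5 else "- delivered")
```

Real filters are of course far more sophisticated (Bayesian scoring, sender reputation and so on), but the poor newsletter above doesn’t stand much of a chance either way.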
02-19 – Any Eastenders fans out there?
Any Eastenders fans out there?
I first started talking to the multi-talented Peter Polycarpou a goodly while back, when I was singing a few numbers from Miss Saigon (how’s that for declarative living), and I asked him through his Web site whether he had any tips. We’ve not had much reason to converse, so our email addresses have lurked, unnoticed, in each other’s address books. Until recently that is, when Peter got in touch to say he’d be appearing in this week’s episodes of Eastenders on the BBC, as (I believe) Yiannis Pappas.
Now I’m not much of an Eastenders follower but I do know that Peter’s a lovely chap and a great performer. I also think this is a great example of connectedness - it’s not just about the links in the chain, but what messages get passed and what actions they trigger. For me, it means I’m actually planning on watching Eastenders for the first time in years! His site has also linked me through to the blog of Caroline Lucas from the WTO summit in Hong Kong, another unexpected demonstration of connectedness as well as being a fascinating account of what went on in December. The truth is out there - if only there wasn’t so much of it!
02-19 – Got me foxed
Got me foxed
To open with a caveat: it might have been something I did. However, to my knowledge the only recent change to my Internet Explorer settings was when I downloaded the recent series of security patches. It still works - after a fashion - but tends to lock up when too many windows are open. It also now fails to get into the admin pages of my blog - sometimes.
There is a fascinating pseudo-philosophical subtext here, about what security is for in the first place. To my mind it’s about preventing things that could slow one’s productivity, while giving a certain level of protection to one’s data. These principles are true for individuals as much as for enterprises. Trouble is, what happens when the very effort to implement security results in a loss of productivity, or in data compromise? That just blows everything.
Sadly, it’s a fact that malicious attacks are far less likely to cause any anguish than issues caused by personal incompetence or flaky software. I know this is true from my own experience and from those around me, and while I was at Quocirca we conducted some research that proved it reasonably conclusively.
It’s all very interesting. Right now however, what’s most interesting is how I can actually do what I sat down to do. There are a number of options, at the top of the scale I could reinstall everything on my computer, but I’ve already opted for the more mundane - to run up Firefox. And it works.
This could be a cue for another riff, on the subjects of software construction, overcomplexity, incumbent responsibility and so on, but I don’t have time for all of that. The switch has been made, and now I’ll need reasons to switch back. End of story.
02-19 – If this post appears…
If this post appears…
… it means I can post from anywhere, by email. No more excuses, just queue ’em up and let ’em go.
Meanwhile, I’m building a list of open questions that I’d like answered. I’d like to suggest one possible future - Douglas Adams was right.
02-19 – Nothing to declare
Nothing to declare
“If you are fully in control, then you aren’t going fast enough” - Mario Andretti
It’s time for a change to this Web log. There are a number of reasons for this, not least that, while my days have indeed been packed, often they’ve not contained things I’ve been able to talk about - “just trying to close a deal with XXX,” for example, or “talking to the management of YYY” - and I suspect that neither XXX nor YYY particularly want me to blather on about our ongoing dialogues. Second, it’s been nigh on impossible to marry my two worlds (music and IT) in blog format, and finally, writing about my day isn’t really what rocks my boat. The real issue that I think would be worth documenting is to do with coping with everything that’s going on, navigating the complexities of the global village we find ourselves in. Technology is more and more about people, and people are more and more about technology, to the extent that the boundary between the two is shifting day by day. Grannies are texting their grandkids, and Sri Lankan tea co-operatives are communicating directly with Surbiton mums. The Internet is both a symptom and a cause, while the cost of an international call has dropped from pounds to pennies. There are syndicated newsfeeds, blogs and podcasts, wikis and word of mouth marketing, mashups and Web 2.0, all thrown into the mix with gay abandon.
There’s so much going on, so what does it all mean? To be honest, I have absolutely no idea. I do know that I would never have set out to be a writer if I hadn’t been in direct email contact with other authors, and discovered they were individuals just like me. I understand the blessing and the curse of communication, when bands like Marillion can regain their mainstream status, and others can appear from nowhere, without the backing of a monopolistic corporation. I do know that major companies are between a rock and a hard place, trying to retain their positions while implementing new strategies, all the while watching new companies steal their markets from under their noses. I do know that there are huge geopolitical effects of all this change, some of which are real, and some of which are imagined to push some agenda or another. There are psychological and spiritual aspects, as people discover like minds and communities, but suffer the consequences of having too much of a good thing. There are upsides and downsides, hidden agendas and ill-conceived plans, power struggles and opportunities for genuine, honest progress. Technology might be some, if not all, of the cause, but equally, so much of technology is primitive, incomplete, non-inclusive. We stumble from one oasis of technological goodness to another, be it with mobile coverage or getting to our email. Ubiquity remains a distant dream, while integration and accessibility issues mar all but the simplest of interactions.
These are fascinating, turbulent times. We are right in the middle of something quite unique, a show with no sign of ending. We’re emerging from a period of technological recession; conferences are buzzing again and the teddy bears are back on the stands. It’s exciting again, largely because nobody has a clue where we’re going to end up - a roller coaster ride for sure, rickety at times, but one which doesn’t just come round to the starting point and heave a sigh before raising the barriers. On this trip, there’s no getting off, whether we want to or not.
All that to say, that’s what I’ll be writing about. I’m glad that’s clear.
02-20 – Doing no evil
Doing no evil
Following my post about duping Google’s search records, I noticed that Google currently has no intention of giving up such data to the US government apart from when required to by law. This sounds like a dubious distinction given that the government has asked a judge to rule on the matter, but at least Google is showing some signs of resolve, following its capitulation to Chinese interests. Enough has been written about the latter, so I won’t go on.
02-20 – Horses for Courses - Myspace and Music
Horses for Courses - Myspace and Music
All networking sites are created equal, but some are more equal than others. Myspace for example, seems to be rapidly becoming the port of call for musicians and bands. I know of a few bands that are up already (I’ll dig them out and list them), Simon Apple (I think) and John Wesley for example, and a music producer recently said to me “you can find them on Myspace” in the same way that you might talk about the best parties. Fish has just announced a Myspace presence as well.
Sort of repeats the theme that it’s all about community, methinks.
02-20 – SC101 Network Storage Device
SC101 Network Storage Device
I gave this device a mention on Silicon in December, and suffice to say I wasn’t very happy with it. There’s been a firmware upgrade since then which has improved things somewhat, but still no cigar - it’s losing the connection with one of my computers, while being OK with the others.
02-21 – It’s all going to go horribly wrong
It’s all going to go horribly wrong
Wordpress has moved from the version I’m on (1.something) to 2.something - it was 2.0 but there were so many bugs in the initial release that a new release was issued some days later. Eager to experience the benefits of the upgraded version and in the name of research and growing my understanding of what’s out there, I’m going to take the plunge over the next few weeks and upgrade. I expect one of several outcomes:
- it all goes horribly wrong, and I shall be left with a gaping hole where my Web site used to be
- it all happens as smoothly as manure sliding off a well-oiled shovel
- something in between
I will of course be backing everything up, but even that offers no guarantees in my experience. Oh well, all part of the fun.
As I was looking at the Wordpress site the other day, I noticed that Wordpress was now hosting accounts - for free I think. Strange - on the basis that I would have paid, I would have charged. There’s a business model (if anyone asks how to make money out of free online software, hosting has to be the first answer - but not if everyone gives it away). As a side note, it’s “powered by Automattic”, a spin-off company from Wordpress that claims on its front page, “Blogging is too hard.” Insightful… but anyway. There were a couple of Wordpress hosts at the time I set up this site, but I didn’t know of either, and doing it myself seemed to be the best option. Now I’m not so sure - after all, if it’s good enough for Scoble it’s good enough for me (how much influence does this guy have?)
There are still some things in blogging that currently pass me by - tagging for example, trackbacks and so on. I hope that Wordpress 2.something will offer a suitable base for my continued education. I also notice - this is as much a bookmark as anything - that Yahoo seems to be rapidly hoovering up some of the better blog-related companies, Flickr and the like, and has forged partnerships with both Wordpress and Movable Type. To inner-circle bloggers this will be nothing new, but to mere initiates like me it’s very interesting, particularly given that Google still hasn’t really got its own blogging act together.
In the meantime, anyone know of any better RSS readers than Pluck for IE (which seems to have wheedled its way back onto one of my machines) and Sage for Firefox? I’ve tried a few and they’re all either too primitive or too buggy for my liking.
02-21 – More on briefing analysts
More on briefing analysts
Following James’s tips on briefing IT industry analysts, here’s some of my own from the archives - a bit rusty, but applicable.
02-22 – Getting engaged - to a blog
Getting engaged - to a blog
Everyone, as Tom Cruise once said about his now ex-wife Nicole Kidman, “is absolutely right.” Apparently, saying these words is the best way to prevent an argument, or at least resolve one. And while it doesn’t seem to have worked for our Tom (though perhaps it worked better for Nicole, whose own thought was, “finally I can start wearing high heels again,”) I have to say that when it comes to what’s going on in the collaborative technology space right now, he has a point.
What exactly is going on? Nobody’s sure, but one thing is certain - there are a lot of people talking about it. “Markets are conversations,” announced the Cluetrain manifesto and several other pundits before then (not least my old boss, Robin Bloor, when he wrote about the electronic silk route in 1998 and before). With the advent of the blog of course, there appear to be more people talking about these things than ever. In May last year, Jonathan Schwartz wrote about what he termed the Participation Age on his own blog, and there can be no greater demonstration of this concept than the fact that it’s all being talked about on the blogs of others. The blog is a cultural phenomenon, for sure. But is it the end, or just the beginning?
There are still plenty of people who have never even heard of blogs. “You should have a blog,” I said to my ex-colleague and business dynamics guru Roger Davies. “A blog?” he said, “What’s that?”
Now, I’m not dragging my good friend Roger through the slurry of the blogosphere in order to show him as some kind of luddite; exactly the opposite. The fact is, there’s a massive number of supposedly highly connected people out there who have never heard of blogs. Another old colleague, Clive Longbottom, makes the following point in a recent column for Silicon.com:
“Blogging is on the increase - at least the number of people who write blogs is growing. Our research shows that blog readership is still miniscule, and is moving more towards community-of-interest style usage.”
If we consider Tom’s first law - that everyone is right - we arrive at a conundrum. The bloggers are right, that there is something very exciting going on, and yet the non-blog-reading community is also right. Jonathan Schwartz and Clive Longbottom are equally correct to say we are in a new age of participation, and yet the majority are not participating.
To square this circle, we need remember only that the blog is a symptom, not a cause. The fundamental principle behind participation is the act of engagement, of joining in. Blogging for like-minded types is no different to SMS text messaging for teenagers - each went, or is going, through a similar growth curve, and blogging will no doubt find its level. People can argue about signal to noise ratios and claim to be the first to notice the clutter, but what they fail to do is remember that much conversation serves no purpose whatsoever - us Brits will talk about the weather for example, quite happily, for hours sometimes. Markets are indeed about conversations, but conversations are primarily about relationships and how these can be nurtured, sometimes over a period of years.
This principle of joining in is of primary importance. Blogging, like SMS, provided a new mechanism that was appropriate for a certain type of conversation. The barriers to joining were lowered to the point where a large enough number of people could engage, and - lo and behold - they did, and are. Call it an application of Metcalfe’s Law. In other words, blogging isn’t an answer, it’s a mechanism. So are all these other “declarative living” tools that are springing up, for sharing preferences, photos, books and so on. They’re mechanisms, each appropriate to its audience.
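As a rough, back-of-the-envelope illustration of that last point - my own sketch, not anything from the posts above - the number of possible two-way links between participants grows with roughly the square of the number of people who join in, which is why lowering the barriers to joining matters so much:

```python
# A back-of-the-envelope illustration of Metcalfe's Law: the number of possible
# two-way links between n participants is n*(n-1)/2, so the perceived value of
# a network grows roughly with the square of the number of people who join in.

def potential_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} participants -> {potential_links(n):>12,} possible links")
```

Whether value really scales quite that way is debatable, but the shape of the curve explains why a mechanism only becomes interesting once enough people are using it.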
Joining in is nothing by itself, however - unless all we want to do is talk about the weather. People join conversations for a purpose, in business as in leisure. Sometimes that purpose (one suspects, the vast majority of teenage texts) may be to support the growth of the relationships concerned, be they one to one or within the group. Ultimately there has to be a higher purpose than participation itself. I suspect we shall see a continued evolution of blogs, from the individual and observational type blogs to community-oriented, news-based entities. However blogs will never cut it in their current form for anything other than providing an online mouthpiece; for multi-user, project oriented interaction to take place on the same scale as blogging, other mechanisms will be required, which are still to be developed.
How do I know this? Simple - because it’s happened before, and because of the same premise that people use when talking about technologies that pre-date blogs by decades. “Blogs are nothing new,” they say, citing Newsgroups for example, or even uucp-based forums, both of which provided an appropriate transport for the capability we have with blogging today. In the same way that Hypercard pre-dated the Web but didn’t quite reach global phenomenon status, cyberspace is littered with failed projects for collaborative working. “Failure” is a harsh term - all are successful in their own way, but none has achieved that elusive “de facto” status.
I believe that both the blog and SMS are like prophets of old, forecasting greater things to come. We are still waiting for the real hero of the piece - the globally agreed standard for collaborative working. When we have this, then can the age of participation truly begin.
Are there any contenders? Undoubtedly - but therein lies another post.
02-24 – Ego Sum
Ego Sum
Great little post on the MWD blog, about egosystem vs ecosystem :-)
I’m still getting the hang of what should go here and what should go there - basically, if it has a personal spin it goes here, and corporate stuff goes there. I would imagine James, Stephen and Cote would argue that blogs should be individual, while Dale and Helen would say there’s more of a role for the multi-user blog - I think that there should be a place for both. Tom’s first law, of course.
I really need to sort out my categories.
02-24 – Thought for the day...
Thought for the day…
Never trust anything to humans.
02-25 – Thought for the day
Thought for the day
It’s not the critic who counts…
02-26 – Mobility = Ubiquity
Mobility = Ubiquity
Here’s a concept that came up in conversation with Cisco a year or so ago, and which popped back into my consciousness due to the closing remarks of this week’s always-a-winner Round-Up:
“And finally - a challenge for you. Dashing about in central London, trying to find somewhere from which to file this newsletter, the Round-Up managed to walk for a full 15 minutes without finding a Starbucks. Can you beat that?”
Well, in all honesty, yes I can - finding WiFi in central London has been a nightmare, that is until I found that I could sit next to the window in the Oxford Street Borders cafe and hook into a (legal) free signal. Even when you can find a kosher link, there’s no guarantee you’ll be able to connect; if you get disconnected it’ll probably think you’re still logged in. Etc, etc.
But to the point. At the moment let’s face it, we feel hooked into the cyberstream only when we’re in front of a computer. When we’re out and about, we’re reduced to looking at the Web and our emails through a cloudy porthole, with voice access to the few people we’ve actually remembered to load onto our SIMs. A level of mobility we have, perhaps - in that we can be found wherever we are, and some form of communication can be made. True mobility however, the kind that service providers and app vendors (think: mobile video, or mobile CRM) are trying to push, will not really be possible until there is some critical level of access, for a critical number of people, in a critical number of ways.
Until there is ubiquity of connections, connectedness can only ever be sporadic. Connectedness - the feeling of joining in, the ability to link to thousands of applications and services in the way that we want - is still a distant dream for the mobile user (though realistically, they’re probably dreaming of other things right now). Laptop users can experience a level of mobility as they bounce from coffee shop to coffee shop, but even this is a long way from what we’d need - you have to stop, sit down, switch on and connect before anything can happen; it’s hardly seamless.
Now, this may come across as a rant, but it isn’t meant to be. The point is that, when ubiquity is cracked, then we really can get on with mobility. The City of London experiment is one to watch, as is Google’s San Francisco plan. I particularly like the Google plan, with its two-tier service plan (free at lower speeds) - it fits with the “Wireless Pavement” idea I’ve been advocating for a while. The wireless pavement (sidewalk, guys) is the idea that it costs to lay a pavement, but it is delivered at a municipal level, for free, because we all recognise the spin-off benefits. Neither do you pay to go into a mall, but these are enormously expensive to run. Of course you do pay, but not directly - you pay for a shirt, or a coffee, or a lampstand, and a cut from that goes towards the cost of the building, pavement, whatever! I must blog all that - perhaps I just did!
We digress. The lack of ubiquitous access is currently a bottleneck on progress, caused I suspect largely by incumbent service providers not wanting to release their traditional grip on the cost of access. They’re like the toll keepers of old, forcing everyone to travel down their own, pot-holed routes. The network is being forced wide open as we speak, and I fully expect this to lead to the nirvana of ubiquitous, seamless access, which in turn will lead to a whole raft of new innovation, new ways of connecting, new ways of doing things. I believe this is where connectedness will get really interesting, as the online and offline worlds, the voice and data networks merge.
Why should I get all excited about this? Because for a start, it is unexplored territory. I don’t know what will be the apps and services that rock people’s boats, or in what combination - all that chatter about location-sensitive services was largely driven as a revenue opportunity for the SPs - there may be something in it but I’m not convinced it’s the “killer app”, or indeed whether there will be a killer app at all (though it would have been useful to know where a petrol station was, on more than one occasion!). I see “microwave oven” innovation - unique combinations of functionality that give whole new ways of doing things. The mashup applied to the PDA, perhaps. The opportunities are global - James has ranted on more than one occasion about the lack of innovation in Europe, but let’s remember that both the mobile and the open source revolutions started in Scandinavia.
Here’s one thought of how things might go - applied to the lowly address book. Add Plaxo-like updating, RSS-like feeds, Google maps, GPS and a reasonably powered PDA and the address book becomes a dynamic hub, changing in real time. The possibilities are endless - suddenly you can find out all the parties you haven’t been invited to, for example… perhaps we should leave that one there!
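To make that slightly more concrete, here is a minimal, hypothetical sketch of the “dynamic hub” idea. The class names, fields and update calls are all invented for illustration - none of the services mentioned above exposes exactly this interface:

```python
# A minimal, hypothetical sketch of the "dynamic address book" idea: one contact
# record, with live updates (Plaxo-style details, RSS-like status, GPS position)
# converging on it. All names and fields here are invented for illustration.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Contact:
    name: str
    email: str
    latest_status: Optional[str] = None                   # e.g. pulled from an RSS-like feed
    last_position: Optional[Tuple[float, float]] = None   # (lat, lon) from GPS

class DynamicAddressBook:
    def __init__(self) -> None:
        self.contacts: Dict[str, Contact] = {}

    def add(self, contact: Contact) -> None:
        self.contacts[contact.email] = contact

    def update_status(self, email: str, status: str) -> None:
        if email in self.contacts:
            self.contacts[email].latest_status = status

    def update_position(self, email: str, lat: float, lon: float) -> None:
        if email in self.contacts:
            self.contacts[email].last_position = (lat, lon)

book = DynamicAddressBook()
book.add(Contact("Alice", "alice@example.com"))
book.update_status("alice@example.com", "At the conference until Friday")
book.update_position("alice@example.com", 51.5074, -0.1278)
print(book.contacts["alice@example.com"])
```

The point is less the code than the shape of it: one contact record, with several live feeds converging on it in real time.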
02-27 – What’s an IT analyst, anyway?
What’s an IT analyst, anyway?
I’m conscious of a couple of things as I set out to write what I hope will be a short entry. First - there is an ongoing debate about the nature of the analyst business, and the credibility of the analysts themselves. Second - many of the people who come to this site have no clue what I do as a day job. I hope to hit both birds with this single stone.
It all starts with buying and selling of Information Technology (IT). There are three kinds of product in IT, which are:
- commodity items, where a product has been simplified (I use the term guardedly, I don’t mean it is now simple) and packaged to such an extent that it can be sold off the shelf. Examples include PDAs, office software packages, hard disks and so on.
- solution components, where a product can still be packaged, but it doesn’t serve much purpose on its own. Examples are databases, blade servers, storage arrays and so on.
- IT solutions, where often-complex combinations of products are packaged and configured to meet often-complex demands. Examples include some of the enterprise applications, storage area networks, enterprise management software and so on.
End-user organisations of all sizes frequently need help, both in deciding what to buy, and in understanding what impact their procurement choices may have on their business. That’s where IT analysts fit in. Their primary role is to keep tabs on what technologies are available, what they can be used for and what constraints they impose. Based on this understanding, IT analysts can also offer advice to vendors - the IBMs and Microsofts of this world - to help determine which technologies are more appropriate for which audiences, and indeed where they should be investing their research, development and marketing dollars.
Within this blanket description, there can be a number of sub-types of analysts. End user companies do not require analyst reports on commodity products, for example. For commodity products, analysis is more vendor focused and tends to be oriented more around market research - which countries or regions are buying which products, from whom. Such information can be used to drive advertising and sales campaigns, but the “understanding” work is already done.
For solution components, analysis is done proportionately for both end users and for vendors. Very few organisations need to know what a database is, for example, but a guide to who sells what kinds of databases, and what features are present, is useful to both sides. End users can use such information to produce a shortlist of vendors, and vendors can use it to keep an eye on the competition - or indeed to promote their product as “best in class”.
For IT solutions however, analysis (whether paid for by vendor or end user) tends to be more focused on end user businesses. White papers can explain the ramifications of technology types, or help organisations understand what the business problems to be solved are - this kind of report is often supported by focused market research, for example to help understand which product features give the most benefit, or which issues are the most important to solve. Longer reports can be produced, explaining the solution area and providing information about which vendors provide products to support the solution concerned.
There are a number of other categories. Some analyst firms choose to focus on specific areas, such as security or data management; others consider only their local regions. Some companies focus on service providers, or consultancy firms, or particular industries. Some are actually end-user consultancies that perform some analysis, and advise vendors almost as a spin-off benefit of their end user knowledge. Still others are market research organisations that specialise in IT, and so on.
When done right, IT analysis can be enormously beneficial to both sides. However, there are some potentially major hurdles to be overcome. Not least, the analyst can play a highly influential role in some major buying decisions - either in terms of quantity (at the commodity end of the scale) or deal size (at the IT solution end of the scale). Influence can be direct, indirect, or obscure and untraceable - if I were a strong advocate of hosted applications, for example, I might well attract the attention of hosted application vendors keen to promote the advantages of their products over in-house applications.
Quite rightly, then, analysts should be subjected to considerable scrutiny. Not least, there is a requirement for transparency - any formal analyst recommendation should only be served with equal helpings of context (“who does this apply to, under what circumstances”) and justification. This goes for the simplest of quotes, up to the most detailed of reports - even if the context and justification are not included for space reasons, they must be in some way available.
A second issue comes from analysts’ very nature as “market makers”. The analyst’s job is to articulate the strengths and weaknesses of what are sometimes very new technology areas. One way to do this is to define the area concerned, generally by naming it in some way: in doing so, advertently or otherwise, the analyst firm creates a market for that product. This was a reasonable approach in the past, when the majority of software applications were custom coded and most IT purchases were of solution components. These days, however, we are on a very different playing field - IT refuses to be pinned down in the same way. SOA, for example, offers a terribly important set of principles that any IT shop should apply, but it does not map onto any particular set of products.
Trouble is, IT marketing in some ways depends on this need to classify products, and some analyst firms fall into the same trap - areas such as Identity Management, for example, are impossible to characterise in this way. Some analyst firms still insist on trying to apply the old models to these new technology areas, but this just plays into the hands of vendor marketing departments, who can then jump on board the bandwagon. However, given the fact that most products can only partially fit the definitions, the main result is confusion on all sides. Disparagingly, one could refer to this as bandwagoneering - only in most cases, the wheels fall off the wagon almost the moment it sets off down the hill.
Finally, the vendor’s requirement for analysts is very different from that of the end users. Ultimately vendors are answerable to their shareholders, and they exist primarily to make money out of IT sales. End users require analysts to help them get the best value out of IT, to make their businesses run better. In the fast-moving game of IT, sometimes the end-user requirement for business value has been subordinated to the vendor requirement to make money. I’m not saying that analysts have been directly complicit in this, though of course this is what market research is for. What I would say is that sometimes, the analyst industry as a whole hasn’t always been as vocal as it could have been to protect against it.
I believe it is the combination of these issues that has led to the credibility, relevance and independence of analysts being called into question. I agree that change is necessary - but I wouldn’t point the finger at any particular firm. Instead I would argue that the IT industry as a whole needs to change its marketing tactics, that the analyst industry needs some new models to help itself help its end user customers, and finally that these end user customers need to be more demanding of their advisors. All of these things are happening, perhaps a little too slowly for my liking. I don’t believe the final answer lies wholly in blogs, or in open source research, though these are highly valid options, and indeed may be the necessary catalysts for change. Blogging is indeed rocking the foundations of analysis - in some cases there can be a fine line between the two.
Meanwhile, there is some very good work being done by analysts across the globe. Perhaps now is the time to get back to basics, to assess what services IT analysts should provide, to whom, and how. The jury’s out and the only thing I know for certain is that it won’t be me that makes the final decision. I watch, and analyse, with interest.
March 2006
03-01 – The laws of connectedness
The laws of connectedness
OK, here goes: to try to define what’s going on in this brave, new, collaborative world, I’ve drawn up a series of laws. These will evolve - feedback welcome. That’s the point - see law 16!
Laws of connection
1. Connectedness is about joining in
2. Joining in happens automatically when the barriers to joining are low enough
3. Connections form between individuals, not organisations
4. Connections link devices, services and people
5. Connections are two way
6. The value of connections increases based on the number of touch points
7. Connection is a means to an end: the end is participation
Laws of participation
8. Communities form as a natural consequence of connectedness
9. Communities define their own mechanisms, language and etiquette
10. Individuals occupy roles within communities
11. Participation can be active or passive, hub or spoke
12. Declaration is a pre-requisite to active participation
13. Participation is a means to an end: the end is collaboration
Laws of collaboration
14. Collaboration is the achievement of goals by a connected community
15. Goals benefit individual participants, not the community
16. Active feedback is essential to achieving goals
17. Success is proportionate to the number of participants
18. Open collaboration is self regulating
These laws are not specific to any technology or group. Example themes that have driven these laws are: text messaging, Make Poverty History, Blogging, Marillion, BNI, LinkedIn, Cluetrain, peer to peer, mashups, Warcraft, ecademy, eBay, street teams, open source, Skype, Flickr, wisdom of crowds, SOA, agile development, Usenet, Sharepoint.
03-01 – Thought for the day
Thought for the day
Are bloggers the new masons?
03-02 – Thought for the day
Thought for the day
Snow brings out the child in everyone.
03-03 – My-name-is-J-o-n
My-name-is-J-o-n
So, I understand that the MSN search engine is now “powering” Microsoft.com. This is not good news, for one simple reason. When I put in my own name, it comes up with “Were you looking for joan collins?” Well, no, I wasn’t. I’ve had enough of that kind of victimisation and abuse in my lifetime. Of course I immediately put in a complaint.
After all, what would Uncle Phil say? :-)
On the plus side - at least I was the first “Jon Collins” - and the second, third, fourth and fifth. I’m not going to give in to their thinly veiled flattery, however.
03-03 – Re: Innovation in the UK and Europe
Re: Innovation in the UK and Europe
For some reason I can’t post this comment on James’s blog, so I’ve put it here. If anyone spots any offensive words let me know!
So, re: this post -
I had to read this more than once - I started disagreeing, then I realised I did agree: you’re spot on that we seem unable to turn theory into cash.
Despite the absence of cash-earning, I think we sort of expect companies to form in the US, so we leave them to it - there’s plenty of innovation in the European software industry. I pointed you to Exoftware, an Irish company - there’s a great page of links on their Web site, and they are leading the Agile Alliance in Europe, worth a browse. Similarly I used to participate in an OO conference (OT) that was totally freeform, great stuff. Much was theorising, which was a downside, but on the positive side, some of software’s best thinkers are European (Jacobsen, Fowler). You should also take a closer look at the patterns community - very active, very communicative, very SOA and very European!
Fascinatingly to me, and I’m trying to work out why, this very active community is not too heavily into blogging. I think it might be that the communities formed before blogging, and therefore they’re sticking to older mechanisms for a variety of reasons. Blogging favours individuals; perhaps nobody’s in favour of sticking their heads above the parapets (apart from Grady Booch of course, but that’s an IBM thing!). There are country differences as well - I did a presentation in Germany yesterday, and when I asked the audience, they all said they read blogs. In the UK financial industry version today in London, nobody did. Very interesting. Perhaps, also, you’re not reading the blogs in French, German and Spanish, and therefore not hooked into what’s going on at a local level?
Anyway, all good food for thought.
03-03 – To Ecademy or not to Ecademy?
To Ecademy or not to Ecademy?
That is the question - and I just cannot decide. What it boils down to is that, to join this illustrious band of networkers, I have to pay money. Now I’m very happy to cough up when necessary for a useful service, and I’m sure Ecademy is exactly that, but that’s not the point. Right at the moment I can barely keep up with the number of “free” social networking facilities I’m using, and I believe these are only going to increase in both reach and functionality over the coming months and years. With this in mind, are there really any additional benefits I could get from a paid service, and - most importantly - would I be able to devote sufficient attention to get my money’s worth? In all honesty I just don’t know. When I was talking to Plaxo last year I did raise this question with them - how exactly are you going to make money? I wasn’t convinced by what they said then, and neither am I absolutely sure about what Ecademy can offer me, now.
I’ll keep on thinking about it - but for the time being I’m afraid I’m going to have to go for the “when in doubt, do nowt” school of decision making.
03-06 – Register for UK transplant
Register for UK transplant
I just got an email from my sister in law, which read:
“I always meant to, but never got round to properly registering on the UK register to donate my organs for transplantation. But I happened to see a poster with the website address on, and now I have done it - hurrah! It’s so easy on-line and I would like to give you the address, in case you would like to do the same. The website makes for interesting reading. Now we’re in Lent it seems to me to be a good moment to do something that in future may help the lives of 10-15 people.”
She was right - it took no time at all.
03-07 – Is this what radio's all about?
Is this what radio’s all about?
Just a question, not a statement. The suggestion from this - the position of the UK’s Radio One in the scheme of things musical - is that concentrating on playlists and the lowest common denominator gets results. While people might question the tactics, it is difficult to argue with the outcome.
03-07 – Today I shall mostly be...
Today I shall mostly be…
… sitting on an aeroplane in the rain, waiting to get a slot on the runway, so I can go to Nice. Humpf. At least they’ve let us use our gadgets - I’ve run out of sudokus.
03-07 – Video is its own killer app
Video is its own killer app
I’ve just installed the latest edition of Skype - which incorporates video. Despite spending (aka wasting) a few hours trying to work out why my video camera didn’t work - before remembering I had to reset the laptop motherboard (thanks Sony) - the whole thing was remarkably seamless. I then set it up on one of the home desktops, and my daughter Sophie and I spent a happy, if random, half hour testing the thing. Think - Dad wandering around house with laptop, Sophie zooming in on her nostrils, you get the picture.
But it works, it’s simple and straightforward, and it will be an invaluable tool for those home calls when I’m out of the country. Already I felt more connected, in a nice way. Of course it’s been possible for years - but with Skype’s critical mass, USB and broadband, the barriers to entry have lowered. Expect it soon on a console near you.
03-09 – Playing with words
Playing with words
I was waiting for a call this afternoon, and thinking about all this analyst credibility stuff that’s been flying around the Web. I thought I’d set out my stall and say what I believed were the key facets of IT analysis as I would like to see it done. I started with transparency and clear explanation of results, and it sort of grew from there - before I knew it I’d made a word or two. Here they are - you can consider it my analyst manifesto 1.0, and I shall do my best to stick to it.
FREEDOM of research
RELEVANCE of results
EFFECTIVENESS of advice
EDUCATION, not evangelism
TRANSPARENCY of methodology
RESPONSIVENESS to requests
ATTRIBUTION of sources
DECLARATION of interests
EXPLANATION of conclusions
03-09 – Thought for the day...
Thought for the day…
Denial is the root of all evil
03-09 – When did blogs tip?
When did blogs tip?
I’m currently re-reading The Tipping Point by Malcolm Gladwell, largely because I could no longer remember what “Maven” meant and I thought I’d better check. Also, Mr Gladwell’s started his own blog - I’ll dig out the link when I have a moment.
Right now, I have a question - when did blogs tip exactly, and what caused it? I’d put it about 2 years ago, but it’s a guess. One thing I do notice is that the law of the few - the tipping point theory that explains how trends are transmitted - relates both to blogs and to bloggers, which are both Connectors and Mavens. It’s an important point, and one which merits more thought. For now consider this a bookmark!
Final note - Connectors link people to people; Mavens know lots about things, and want to tell others. Sound familiar?
P.S. Also just bought - the Wisdom of Crowds - that’s next!
03-10 – Hors l’anglophonie
Hors l’anglophonie
Following up on James’s project to look for euro-innovation, I thought I’d start hooking into blogs that weren’t written in English. Here’s a first - of Tristan Nitot, founder of Mozilla Europe. I’ll start from here and see where it goes.
I’ve also got in touch with some old colleagues at Frmug… seems like aeons ago!
03-11 – Update to Marillion book
Update to Marillion book
Update: I’ve now submitted the fixes - thanks for all your help!
We’re reprinting some copies of Separated Out - and taking the opportunity to fix a few errors in the text. If anyone knows of anything they’d like to see fixed that’s not already been listed here, please could they add a comment to this post. Thanks!
P.S. If anyone was having problems accessing the Separated Out microsite, these should now be fixed - just hit refresh on your browser. Thanks again!
03-27 – Thought for the day...
Thought for the day…
The blind can see things others can’t.
03-30 – Guys in the bar - New Yorkers and analysts
Guys in the bar - New Yorkers and analysts
It is always dangerous to do too much of the same thing at once. Such as, for example, reading the current ‘must-read’ book list of The Tipping Point, The Wisdom of Crowds, Freakonomics and now Blink, back to back. Doing so has led me to a number of potentially totally invalid conclusions, which are nonetheless interesting enough to write down anyway. Not least, all three authors are based in New York. There’s a ‘so what’ there, until you take into account that all three are written (at least in part) by journalists or columnists, two of whom work for the New Yorker and one (the second Stephen) who has written for the New York Times. Now, good as these books are each in their own way, I can’t help wondering. Malcolm Gladwell (who was first with his Tipping) offers a glowing endorsement for his stablemate’s Wisdom, and another for the Freaks across the street. This latter is perhaps the most surprising, given that both the Tipping Point and Freakonomics write about exactly the same incident - the falling crime rate in, ahem, New York - but give totally different reasons for it. In Gladwell’s case it is down to the broken window theory: fix the window and people will know that such things aren’t tolerated, and therefore will be put off bigger crimes. However Levitt and his partner set this theory as part of the process, and not the cause of the turnaround, which is put down to changes in abortion laws 15 years before. Despite these contradictory findings, the book is seen as thoroughly endorsable.
Don’t get me wrong, I think all of these books are valid, but I can’t help thinking of the foursome (or at least the three journalists) as guys in a bar, waxing lyrical and theorising about the events of the day. They sometimes agree, and sometimes disagree. One writes a book, and gets it published; so does his friend, and then so do the couple of guys that come in from time to time. Of course these hale chaps are not alone in writing books of this type - a cursory glance around the shelves in Borders reveals that there are plenty more where these came from - nor would I like to suggest that the authors acted in any way untoward. It does seem a bit weird, however, that the authors of the books currently at the top of the pile should all come from the same place. Cue a shrug. Perhaps it’s down to a shared writing style (the books are similar in this) that is piquing the interest of the reading public, or an example of the tipping point phenomenon itself. Who knows - but it is the collective endorsement (as measured by sales) that validates each set of theories and their authors.
Something else has also struck me quite hard about these books - that they seem to be written all about me, or at least me as an analyst. Now then, now then, this isn’t a cue for some over-indulgent self-absorption by a narcissistic blogger. Or perhaps it is - but let me explain what I mean. The books actually all seem to be written about the IT analyst business, and how it actually serves the needs of the IT industry as a whole. For example: the Tipping Point describes the law of the few, and talks about connectors (people who know lots of people), mavens (saddo geek types that write articles such as this one, delving into the detail) and salespeople (those who present information so as to inspire others to act). These are all analyst traits. Freakonomics is about looking for the real drivers behind the trends, delving deep into the data to help the bigger understanding of what’s going on. That’s an analyst job if ever I heard it. Blink concerns pattern-matching skills, and the ability to reach conclusions with only a handful of facts: an essential trait for any public-facing analyst. Finally - and here’s the humdinger - we have the Wisdom of Crowds. This is less about the analysts as individuals, as it recognises that everyone can be, and often is, wrong. The clincher is when an IT vendor takes a room full of analysts, pays for a hotel room and a nice meal, pitches to them and then asks for feedback. Ah, there’s the rub - the collective feedback is probably as good as a vendor is ever going to get in terms of advice, which is food for thought if anyone’s worried about the analysts exploiting their position by getting the free meal. It’s also a cautionary tale for vendors - particularly those who are less structured about how they gauge feedback! TANSTAAFL of course, but while the cost may be low for each individual analyst, the combined value is maximised for the vendor.
Indeed, when asking what’s an analyst anyway, we can take some quite deep insights away from this. Are bloggers analysts? Perhaps. What about consultants, internal or external? Possibly. Marketeers? Maybe. From the analyst’s perspective, the real trick comes from deciding to treat one’s capabilities as a maven, connector and salesperson as a full-time job - I would argue that bloggers, marketeers and consultants can become analysts, if they choose to step up to the plate, to make themselves available for advice and to publish what they find. From then on, it’s up to their customers to endorse their role, by accepting them as part of the crowd that delivers the combined wisdom. Without this external validation, a bit like Malcolm, James, Stephen and Steven, we are all just guys in the bar.
… P.S. It’s probably worth adding that this post was based on a discussion, over a beer, with one Larry Velez of Forrester. Hi Larry, QED!
April 2006
04-04 – Towards free wireless?
Towards free wireless?
Are we heading towards the wireless pavement, or are examples of free wireless just exceptions rather than the rule? Peter Cochrane would like to think the former, and I hope so too - but it probably comes down to how wireless is regulated. I shall be keeping an eye on services such as Fon.com and perhaps Myzones.com (which seems to have evolved since it was written about above).
04-08 – Microsoft Vista and new horizons
Microsoft Vista and new horizons
There are some funny goings-on inside computers and other devices at the moment, many of which have some kind of impact on the consumer space. Here’s a selection:
- Intel is releasing advanced management technologies. I don’t know the full story, but these enable a computer to be interrogated via the network without even booting up, for example to aid diagnostics or firmware modifications.
- Motherboards are coming out that can power down unused elements, saving electricity and reducing heat output.
- USB sticks can store and run applications, and even boot their own environments.
- Laptops are being released that can boot either as a computer or as a DVD player; I understand the latter boots with a stripped-down Linux kernel.
- All kinds of mobile devices are coming to market, some more useful than others (MP3 camera, anyone? Games console phone?) but all becoming cheaper as fast as they are growing in complexity.
- We have set-top boxes and games consoles that are, in effect, computers, but they hide their true form behind a vastly simplified, appropriately customised interface.
- We have mobile phones and MP3 players pre-loaded with MySpace and XM/Napster services, each offering what is, to all intents and purposes, a browser interface.
Not so very long ago a computer was a computer: it had a processor in the middle of the motherboard and some memory on the side, it ran an operating system and supported a variety of applications. From this general-purpose model, things are becoming awfully specialised - these days, the device and the application are often bundled. The iPod, for example, is a computer with a hard drive, a processor and memory. I assume it runs its own chip-level OS and application, straight from, and on top of, the hardware.
In parallel with this, there is an evolution in how consumer computers are being used, particularly by kids. When my children log onto the computer they tend to use it for email, chat and web access. As we have discussed before, the kids of today see the Internet as a place - Myspace is a community, a joint (though they wouldn’t use that word) to hang out in. Anyone with a number of kids will have experienced the fights to get on the computer - not to do anything ‘productive’ but to see who of their friends is on. We can see Mark Thompson’s latest pronouncements about the future of the Beeb (think: teenagers, content, communities) as both corroboration and catalyst of this trend. What’s interesting in all of this is that technology is becoming a lottery. Nobody’s too bothered about who’s responsible for a given device or application: what matters most is, is it cool, and do my friends have one? The Myspace device (coming soon to Europe) and the iPod are competitor products, but nobody really cares who was responsible for what happens inside the box.
What’s Windows Vista got to do with all that? Perhaps nothing, and that’s just the point. Microsoft’s hold on its incumbent position has always been based on two premises: first, that the operating system is a necessary basis for general-purpose computing, and second, that people want to standardise on the same platform as everybody else. Now, however, the ‘thing’ is migrating to the application layer: for Myspace and XM it is the portal, for the iPod it is the interface, the device. If the goal is to give the masses what they want, what’s to stop running a Myspace environment directly on top of the silicon? Could we envisage a device that offers integrated videoblogging and email, with nary a Windows logo in sight? It doesn’t take a rocket scientist to work out that a Google-branded device (probably “powered by” Sun, you heard it here first) might do rather well, not yet but at some point in the future. In a sociological twist on Metcalfe’s law, it’s less and less about the technology, and more and more about the shared experience: the success of any such device will be down to whether a critical mass of like-minded types jump on board.
I don’t know the final answer to this, but I do know that there are going to be even more choices in the future than there are now. As consumers form into communities, each community will choose the most appropriate mechanism for the time, and after a while it will move on. In such a lottery there will be many participants and few winners. Of course Microsoft has its own games console (the 360), which is doing rather well; however, it is unlikely that the company can retain its present level of penetration in the consumer space on games console sales alone. Equally, Microsoft has all kinds of digital home initiatives, but for device manufacturers there is little incentive to pay a stipend to Seattle. Windows Embedded means locking oneself in and reducing future flexibility, and, indeed, companies like News International (owner of Myspace) are increasingly in competition with Microsoft/MSN. What possible incentive could there be, to shackle themselves to the enemy?
Don’t get me wrong, I don’t believe Microsoft is resting on its laurels. Neither is it losing on all fronts - the messenger preference of local kids appears to be MSN, and for email, Hotmail. However, new fronts are opening all the time. And while Microsoft may have me-too offerings in the shape of Microsoft Search and Windows Live, it doesn’t take a rocket scientist to work out that Microsoft’s core faith lies in its operating system. Microsoft is Windows, and without it the company becomes no different from any other software and media company.
Over in the business world, there’s plenty of mileage left in the OS (though this is fragmenting as well - think Hypervisor). In the consumer computer market, however, the days of Microsoft’s dominance may well be coming to an end. If, indeed, the market continues to exist at all.
Hat tip to Cote, without whose comment I might have got away without writing this!
04-11 – The last yard
The last yard
There have been various writings about the Internet as a place - for better or for worse. Nothing new, yada yada yada. It may be true that the kids of today are growing up in a world where chat is preferable to phone, and MySpace discussion boards are a valid hang-out. As my son would say, “fine, whatever!”
To me however, there’s a fatal flaw in this “place” thing. First, at the moment the dream of virtuality is encased in a couple of kilos of tin with a 17-inch screen. Of course, I could carry this around the house with me but that’s not the point - I generally don’t, and nor does anyone else. This is the last yard - between the screen and my eyeballs. If I want to “jack in”, the last yard requires me to go and perch at a desk somewhere, much as I’m doing right now. Indeed, let’s work through my last ten minutes:
- in the kitchen, I am thinking over something discussed last Friday. I decide it would be worth writing about.
- I head upstairs to my computer and forget what it was I was going to do
- back downstairs, I make myself a coffee and it comes back to me
- I decide it’s not worth the effort, so I sit and read the paper
- I return upstairs to check my email
- when at my computer, I think, “oh sod it, might as well post”
It’s hardly communications at the speed of light, is it? My own foibles aside, I don’t believe we are even one degree out of 360 towards really integrating the Internet with daily life. The whole process above is neither natural nor practical, nor particularly inspiring towards seeing computers, and even our darling Internet, as anything more than a distant haven that we can visit from time to time. This is the Internet as a place, but it’s a place we have to go, rather than the place we are in - adequate for now perhaps, but not even scratching the surface of what should be possible.
04-12 – Voice on PDA: close, but no cigar
Voice on PDA: close, but no cigar
In the Wim Wenders film “Bis ans Ende der Welt”, a man talks into his handheld computer and it notes what he says, even as he goes off topic as he reacts to the events around him. It’s something I’ve wanted to be able to do ever since the film came out in 1991 (that’s do the voice thing, not go off topic - I’m quite capable of that already).

With this in mind, I’ve just done my once-yearly trawl around the world wide morass to see whether anybody had yet cracked the speech-to-text idea on a “standard” PDA. A few years ago they claimed that processors were too slow, but as I now have a Dell Axim that has a faster processor than my very own Sony Voicepad, I remain forever hopeful. Perhaps it’s still a little premature, however. There seem to be a number of vendors selling command-and-control packages, notably Microsoft, Speereo and HandHeldSpeech, but that’s really not what I’m looking for. VoiceSignal comes closest, with a voice-to-SMS program called VoiceMode; I’m not sure it’s out yet, or if it is, it’s only on specific devices. One possibility I was toying with was to install Linux on the Axim, then ViaVoice for Linux, and see how that ran. It has been reported that IBM no longer supports ViaVoice, but it appears that it still does within its developer kit, so maybe that’s a possibility. I downloaded the SDK but I didn’t meet the pre-requisite requirements (to buy WebSphere!).

Perhaps I should wait for Oqo pricing to fall, or for Origami to hit the mainstream and become pocketable (have you seen the size of those things?). For now, however, mainstream use of handheld voice remains out of reach. Pity.
Update: should have mentioned Research Lab, who sell a speech recognition SDK for Windows CE and its derivatives.
04-13 – Latest tool from last.fm
Latest tool from last.fm
Of course, all you media junkies will have known about it for ages, but I didn’t, so there. There’s a feature of last.fm that allows you to have your personal charts displayed on your web site. How cool is that? I do wonder whether there’s a Heisenberg factor in last.fm, i.e. that listening preferences are influenced by their visibility. It’s possible, but perhaps temporary.
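If you fancied rolling your own rather than using the official widget, here’s a minimal sketch of the idea. It assumes last.fm still exposes the old audioscrobbler 1.0 per-user recent-tracks RSS feed (the URL, and the “example_user” name, are my own placeholders and may well have changed); it simply turns the feed into an HTML list you could drop into a page template:

```python
# A minimal sketch, not a polished tool: fetch a last.fm user's recent tracks
# from the (assumed) audioscrobbler 1.0 RSS feed and render a small HTML list.
import feedparser  # pip install feedparser
from html import escape

def lastfm_chart_html(username: str, limit: int = 10) -> str:
    # Assumed feed location - the real endpoint or format may differ.
    feed_url = f"http://ws.audioscrobbler.com/1.0/user/{username}/recenttracks.rss"
    feed = feedparser.parse(feed_url)
    items = [
        f'<li><a href="{escape(entry.link)}">{escape(entry.title)}</a></li>'
        for entry in feed.entries[:limit]
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

if __name__ == "__main__":
    print(lastfm_chart_html("example_user"))
```

Run from a scheduled job and written into a sidebar include, something like this would approximate what the last.fm badge does, with the bonus that you control the markup.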
Meanwhile, I’ve finally set myself up on Myspace. Can’t help thinking we need some of Neil Macehiter’s federated identity to tie all these virtual worlds together!
P.S. Speaking of virtual worlds, here’s a virtual community that talks about them…
04-13 – PR 2.0
PR 2.0
Once every few months I meet with James Cooper of AR/PR/MR firm Ascendant, just to chew the fat and see what’s happening on the other side of the fence. Inevitably this time, the conversation turned to blogging, in particular “the press release is dead” writings of Tom Forenski. I disagree, for what it’s worth - but I do understand people like Tom need to present one side of the story. After all, who’s going to listen to someone that just says, “here’s a new tool, I suppose it’ll be useful for some things…”? Anyway, we covered word of mouth marketing, the influencer community in general, what makes a good European analyst in this context and so on. It was an interesting chat, thanks James, and thanks for your additional remarks by email - I copy them here, and no doubt we’ll be looking back on them in a few months:
Hi Jon, I think I found the article you were talking about yesterday. A few initial thoughts:
- It is still a heavily US debate (doubtless it will morph here as well, but the market is smaller and more subjective)
- I think commercial interests will ensure blogging doesn’t disrupt PR to a negative level but becomes just another medium as we discussed
- The ‘trust’ factor has to be a big issue here
- I think in Europe for mainstream communications/business it will just be an extension of the chatroom, and useful; a few key blogs will emerge that simply become another CW or Silicon.com; the authors will be the equivalent of today’s journalists, or editorial input will be achieved by responding to reactive or proactive messages directly
Having said all that I am still new to it :-) James
04-13 – What goes around... Freecycle
What goes around… Freecycle
My mum just sent me a link to Freecycle, which (once again) looks like a thriving global network I’d never heard of. You can sign up with your local chapter and offer things you don’t want anymore, to like minded individuals. Cracking idea - already my eyes are scanning the office for junk!
Here’s the blurb from my local group:
This Freecycle group matches people who have things they need to get rid of with people who can use them. Our goal is to keep usable items out of the landfill. By using what we already have on this earth, we reduce consumerism, manufacture fewer goods, and lessen the impact on the earth. Another benefit of using Freecycle is that it encourages us to get rid of junk that we no longer need and promote community involvement in the process. Free your inner pack rat!
04-18 – A printing service, not a book
A printing service, not a book
I stumbled across Kevin Kelly’s Web site this morning, and found two full texts of books he had written. This may be old news to many, but it wasn’t to me. Here’s what he wrote about what he’d done:
“Out of Control was one of the first books to be available in its full text online. All 230,000 words are still available for free on the web here. (This fortuitous opportunity came about because my literary agent, John Brockman, was among the first to realize the value of online rights before the publishers did, so when he negotiated my book deal in 1990, we kept the online rights.) I mention this elsewhere, but its worth repeating: You are free to print out the whole book; if that will help you read it, please do. But I can save you the hassle, time, and paper spent printing it out. Click here to get a printed version, nicely bound between color covers and mailed to your desk, all for about $16. Think of this as a printing service, not a book.”
I think this is fascinating, and Kevin’s absolutely right - the same idea can perhaps be applied to the music industry, which can be (let’s face it) very good at packaging things in a way we want to handle and listen to. All good food for thought.
P.S. OK I changed the associate link :-)
04-18 – Freaky
Freaky
I love the Internet - having just read Freakonomics, I can now make comments on the authors’ blog. Specifically on a post about something I was trying to dig out anyway - the link between nutrition and behaviour. You can find their post about it here.
04-18 – Online gaming as a social experiment
Online gaming as a social experiment
It is all too easy to abuse one’s position as an industry analyst and technology commentator. I’m not talking about backhanders from IT vendors with the unspoken promise of a mention. Rather, it is possible to steep oneself in the overflowing and delightfully warm spring waters of geekdom, claiming it somehow fits in the category of “research”. I make no such excuses, therefore, for my current “Level 60” status in World of Warcraft (that’s as high as you can go, folks), nor for the fact I’ve recently picked up copies of the galactically large Eve Online and the beautifully rendered Guild Wars. All of these fit into the category of MMORPG, or massively multiplayer online role playing games, the title itself sealing the nerdy nature of the whole affair.
All the same, it is difficult not to be impressed by the phenomenal rise in interest in such “games” - I understand that Blizzard Entertainment (incidentally, currently looking for a PR manager) was pretty much saved from the very real financial wolves by the success of its World of Warcraft title. I use the term “game” guardedly as, while these things are undoubtedly designed for leisure time, there are as many social aspects as there is slashing and burning. Eve, for example, is a great deal about trade - players are presented with a quite complex set of tools for buying and selling virtual commodities, from ore up to space ships, and can learn to hedge and profit from the imaginary markets. All of the titles encourage some form of collaboration - tasks get easier, puzzles are easier to solve and monsters are quicker to kill, and indeed the whole thing does become a great deal more sociable.
Perhaps most fascinating is where the line between “virtual” and “real” starts to become less well drawn. There was the famous case last year of a very real murder taking place, due to the “theft” of a not-so-real sword and its subsequent sale on an auction site - the authorities were powerless to do anything about the “theft” as, after all, the sword did not actually exist. eBay is serving as an international currency market for virtual gold: when I told a trading expert about the practice of “farming” virtual gold and selling it in this way, his first response was, “ah, money laundering.” Very recently, an online funeral was held for someone who had died in real life; unfortunately the virtual event was trashed by an opposing faction, who “slaughtered” the attendees at the funeral and videoed the result. There’s a fascinating exposé of this at the most excellent Virtual Worlds web site, which also discusses such issues as men taking on female forms and then being propositioned for cyber sex.
There are many potential links that remain to be exploited. For example, online guilds often have their own forums and discussion boards, and there are in-game chat channels - it would make sense to bring both of these in line, so online and offline discussion and chat could be integrated in some way. The social networking aspects of the games are thus far under-exploited, but all it would take would be an API between (say) Eve and Myspace, and the rest would take care of itself. Perhaps both would drive standardisation across games, with the potential that trading could take place, and even that characters could move, between virtual worlds. There’s a very good, if tongue-in-cheek, article about the possibilities, here.
The fruition of such ideas would not be without issues of its own. For a start, vendors of online worlds are currently very keen to ensure they keep their subscribers, and are unlikely to open the door to churn. Neither do most people want to reveal their true identities outside the gaming world - the current debate on identity management is already extending to “digital social environments” that include MMORPGs. The number of different kinds of abuse, from the aforementioned laundering to good old-fashioned bullying, will be limited only by the imagination. Frankly I have no idea what is to come, but I do know that it stands to be very interesting indeed.
04-19 – Testing the Qumana blog editor
Testing the Qumana blog editor
David Terrar’s blog put me on to Qumana, a blog post editor that works with Wordpress. It looks very nice and it didn’t cost anything - the business model is (I believe) advertising based. This is a test post - if it works out I think I’ll probably stick with it as it can work offline, has a spell checker and adds a number of blog features I’d never use otherwise! The message below can be removed, I believe.
Update: One click and Qumana posted straight to the site! Rocking :-)
Update: Just deleted a post - don’t know what that does to the feed…
Update: Not obvious how to create a new post (right click in the system tray, dummy ;-) )
Powered By Qumana
04-20 – I now have a Russian email address
I now have a Russian email address
I certainly didn’t mean to do that. It all started when I was following up on my MMORPG post, to see whether there was such a thing for the Pocket PC. Yes there was, I found, a game called Sphere from a Russian developer Nikita. Following the FAQ in English, I followed a “where to buy” link and found myself on several web pages in Russian. It was reasonably obvious (through a translation web site, anyway) that it was asking me to register, which I duly did - goodness knows what I’ve set up to be my test question, but anyway, I got there.
To my surprise, I now find myself with a “@ya.ru” email, accessible via the Web. Unfortunately I still have no idea where to get the game. Oh well, it really is a shrinking world.
Update: Again through online-translator, a wonderful tool, I have been browsing the Russian version of the Web site and found the download page. It transpires that the game itself is only available in Russian. I think I’ll leave it there!!
04-20 – Spam in Russian
Spam in Russian
Following up from my previous post, if I used my Russian email address, would I get Russian Spam? Do English-speaking spammers send English spam to Russian addresses? If not, wouldn’t it just be the perfect Spam cure, to get a non-English speaking email address? If I got Russian Spam, I wouldn’t understand it anyway!
What did I miss?
04-22 – del.icio.us on Wordpress, anyone?
del.icio.us on Wordpress, anyone?
Quick question - could anyone let me know the easiest way of integrating a daily/weekly feed of del.icio.us bookmarks into Wordpress? I can’t find an obvious answer in any of the obvious places.
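In case it helps anyone pondering the same thing, here’s a rough sketch of one way it might be done by hand rather than with a plugin. It assumes del.icio.us still serves a per-user RSS feed, and that the blog’s standard xmlrpc.php endpoint and the metaWeblog.newPost call behave as I understand them; the feed URL, endpoint and credentials below are all placeholders:

```python
# A hedged sketch: post the day's del.icio.us bookmarks to Wordpress as a
# single digest entry, via the blog's XML-RPC interface. All URLs and
# credentials below are placeholders, not working values.
import feedparser              # pip install feedparser
import xmlrpc.client

DELICIOUS_FEED = "http://del.icio.us/rss/your_username"  # assumed feed location
WP_ENDPOINT = "https://example.com/xmlrpc.php"           # your blog's XML-RPC endpoint
WP_USER, WP_PASS = "your_login", "your_password"         # placeholders

def post_daily_digest() -> None:
    feed = feedparser.parse(DELICIOUS_FEED)
    if not feed.entries:
        return  # nothing bookmarked, nothing to post
    links = "\n".join(
        f'<li><a href="{entry.link}">{entry.title}</a></li>'
        for entry in feed.entries
    )
    post = {
        "title": "Today's del.icio.us links",
        "description": f"<ul>\n{links}\n</ul>",
    }
    server = xmlrpc.client.ServerProxy(WP_ENDPOINT)
    # metaWeblog.newPost(blog_id, username, password, content, publish)
    server.metaWeblog.newPost("1", WP_USER, WP_PASS, post, True)

if __name__ == "__main__":
    post_daily_digest()
```

Scheduled once a day, that would give a daily links post; a weekly digest would just mean pointing the schedule and the title at a week instead. No doubt there’s a ready-made plugin that does the same with rather less fuss.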
Incidentally, as I get more into this blogging thing, I’m more and more convinced that the future of blogging is in tagging. You heard it here first - or maybe you didn’t.
04-26 – The latest in anti-navel gazing technology
The latest in anti-navel gazing technology
I was starting to be very impressed with Qumana. Then, on my way over to San Diego, I indulged myself in a “why do I blog” post. You know that sort of thing - I’m participating, how cool is that, it’s a social revolution, etc etc. It was a great post - lyrical, even poetic, with just a soupçon of hubris.
I saved it in Qumana to upload when I got to a connected space… or at least, I thought I did. When I got there, all I found was a file with a link to the one blog post I mentioned, from David Terrar. All my musings, rantings, cleverly counterpointed justifications and criticisms were gone.
I can only assume that Qumana, being the latest in social technology, has some kind of leading-edge, anti-navel-gazing filter (I can’t remember if this was my idea or Cote’s, but by that time we were just guys in a San Diego bar). In which case it really is an incredible piece of technology. Alternatively, perhaps it was just a bug. In more ways than one, I should probably get around to reporting it.
If it was that clever, perhaps I just did.
04-27 – Port forwarding
Port forwarding
Doesn’t look like I’m going to get onto the San Diego airport network right now…
https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=
Get the picture?
04-27 – This flight's been too long
This flight’s been too long
Is it just me that thinks of “Rock the Casbah” when he sees “Lock the Taskbar”?
04-27 – Today I am mostly playing with…
Today I am mostly playing with…
- RSS Reader - neat, but requires .NET framework 1.1
- RSS Publisher - no clue how this works
- Toshiba Bluetooth stack - “fun” getting bluetooth headset working with Sony laptop
- iEx - who needs a Mac…

- Synergy - multiple computers on a single desktop. Neat.
- Second Life - could I/should I be the first virtual analyst?
04-28 – Analysts and alcopops
Analysts and alcopops
It’s nearly the weekend. Last week, Duncan Chapple wrote a piece on analysts and wine. I said in the comments that I think he missed a few beverages, and I’ve added a few more here:
- real ale - a little nutty but better than the ordinary
- lager - bland, but reliable
- cheap whisky - best watered down
- port - you’d think it would improve with age but it doesn’t
- good brandy - aging, but well formed and consistent
- vimto - sickly sweet but strangely endearing
- guinness - impenetrable
- caffeine free diet cola - so much taken out you wonder why it’s there at all
- alcopops - seems good at first but leads to confusion later
- “new” coke - invention beyond the call of necessity
- vodka - don’t notice it’s there but makes a big difference
Any more?
04-28 – Marillion on the box
Marillion on the box
Not that box, but Pandora - apparently, it’s part of the “Music Genome Project.” I’d not heard of it, maybe I need to stay in more. Believe it or not, I don’t live, breathe and sleep Marillion, even if I did write a book about them. So, when I was prompted by Pandora to put in a band that I liked, Marillion wasn’t the only potential option. Honest.
What came next intrigued me enough to make the post, and yes, it is about Marillion as much as it’s about Pandora. It came up with the following text: “We’re playing the following track because it features electronica influences, a subtle use of vocal harmony, mixed acoustic and electric instrumentation, a vocal-centric aesthetic and extensive vamping.” Now, then. Apart from the fact that I’m not absolutely sure what “vamping” is, I bet my navel that this is pretty much why the majority of Marillion fans I know listen to the band. It’s also pretty much the polar opposite of what non-fans think Marillion is about these days.
Having played me “You’re Gone” (pretty much the polar opposite, etc, etc), it’s then played me “Can’t Explain” by ISM, “Forever and a Day” by the Dissociatives and “Where’s the Man” by Scott Weiland. What have they got in common? Something with Marillion, apparently. Also, the fact I’d never heard any of them. Three out of the four share an apostrophe, but I don’t think we can count that. They all have similar descriptions, akin to the above. Maybe that’s it. There are various things I can do from here - the first is “subscribe”, and I think, why not. I can pick one of the songs so far and go down a different route, or I can just let it play - there’s the occasional advert apparently; we’ll see if it starts to bug.
Finally, and the reason I launched it in the first place, I can link it to my Last.fm account through this mashup by Gabe Kangas. Next up: “Buffalo Swan” by Black Mountain. Cool. Now, either Pandora is staffed by people that are trying to subliminally point the world towards certain bands (“AC/DC? Hmm, yes, try this - lots of vamping”) or there really is something in it. We’ll see, but for now I’ll side with the latter.
04-28 – Something for the weekend, phperhaps
Something for the weekend, phperhaps
Just downloaded a couple of PHP editors - from here and here. Now all I have to do is remember how to program. Expect an ultra-powerful Alphadoku generator/solver by Monday.
That was a joke.
May 2006
05-03 – QTek 9100 - the perfect handheld?
QTek 9100 - the perfect handheld?
A few years ago I was doing some work for Sun Microsystems, something to do with wireless, mobility and all that. One day I was in London, near Westminster and I had one of those “a-ha!” moments. I distinctly remember popping into Macdonalds in the old city hall, opening my pad and drawing what I considered to be “my perfect handheld”.
Earlier today I found the piece of paper lurking in some old files. Here it is:
For the past few months I’ve been road testing the QTek 9100 (linking to where I nicked the picture, thanks). Here it is - looks awfully similar, hein?
So the question is - now I have it, how does it stack up? I’d love to say it has achieved handheld perfection, but it doesn’t, quite. There are a number of reasons:
- It is sloooow - only a 200MHz processor for all that voice and data gubbins, and it just doesn’t cut it. It is usable; however, it isn’t what I would call slick. There is a Skype client, but that runs like an absolute dog, which is a shame, as the device could almost pay for itself based on a few months’ Skypeout.
- the screen is too small. I need a screen I can comfortably cut and paste sections of documents on, and while the resolution is great, the screen itself is not quite there. Not really an e-books client at this point.
- it is buggy - sometimes the email client crashes for no apparent reason (sorry Microsoft, but I would have thought you’d cracked email by now).
- it’s too thick. Nuff said.
Apart from that, it does a pretty good job. It’s got some features I didn’t specify - note the absence of WiFi on my picture, for example, which dates it somewhat! It achieves my personal goal of a single device which could (potentially) be used for everything I do; I’m a bit of a laptop junkie these days so I’m not sure that’ll ever be necessary, but anyway.
No, it’s not perfect, but it is getting pretty close.
05-04 – How to make a John Collins
How to make a John Collins
Taken directly from Sam Malone’s Black Book, 6th Edition (bought in Cheers Bar in Boston):
Fill Glass with ice
2 oz Whiskey
Fill with sour mix
Shake
Dash of Soda Water
Garnish with Cherry and Orange
For the sour mix:
In blender:
1 Egg White
1 cup of Water
1 cup of Lemon Juice
3 tbsp Sugar
Blend until sugar is liquefied.
05-08 – Detheming and reframing
Detheming and reframing
It’s time for my next step on the journey that we call blogging. I blog to experiment with the medium (I’ve never been into experimenting with the small and large), and I’m going to try out a couple of new things around the principle of themes.
The first is to de-theme this blog even further - it will be about everything and nothing, a stream of consciousness as pure as my impure consciousness can make it. If it comes across as a self-indulgent mess, so be it - there are things I can learn from that (and you don’t have to read it :-) ).
The second is to extend my sensible blogging into more thematically coherent places. These are (or will be):
- Enterprise Information Technology - I’ve already started to blog (though I could do more) over at the Macehiter Ward-Dutton web site.
- Consumer Technology - plans are afoot - watch this space
- Music and media - plans are, well, planned!
Realistically, this is all an experiment - I have a number of theories that I’m testing out, notably comparing multi-user and single-user blogs, and curiosities around themes. There’s also a navel-gazing perspective around linking one’s blogging closer to one’s personality. In my case I’ve never been consistent about anything for long, and blogs are by their nature fixed points of reference, so I’m interested to see how that can work. Anyway, more news soon!
Powered by Qumana
05-08 – My take on blogging
My take on blogging
I don’t believe blogs are all they’re cracked up to be. Neither do I believe that they are not.
Powered by Qumana
05-10 – Symptoms of Oblivion
Symptoms of Oblivion
A blonde-haired girl sits on the London tube. In her left hand, she holds a latex-covered iPod. In her right, a grande Starbucks. Next to her, a middle-aged Japanese lady is consulting an opened, silver device. It looks more advanced than the norm; perhaps a simpler version may one day become available in Western shops.
In the toilets in the Hilton hotel at Paddington station, a man is urinating. He is holding his Blackberry with two hands, his thumbs tapping out a message.
They are all oblivious.
05-12 – The first life is currently preferable
The first life is currently preferable
So, I’ve had a go at Second Life. Two goes, in fact - so far I have been picked up and thrown into the sea, told I have a big bulge and offered sex. Now, while I of course felt flattered (I believe I looked rather dashing in my tie-dyed shirt and shorts, set off by the blue hair), I happen to believe it was a bit forward for someone who has literally spent no more than half an hour in the “game”. I’m still not quite sure what Second Life is, but family entertainment it is not!
Update: Here’s a section from the beginner’s guide: “Please take some time reading the Terms of Service (“ToS” for short). Unlike some sites or programs where you can safely press Enter and forget about it, here the residents live by the ToS and it is actively enforced by them - you can report abuse by someone or something which violates ToS, and this can lead to suspension or even expulsion from the game - or even a suit against you. We live in anarchy, where everybody can do what he/she wants, EXCEPT violating ToS. The first thing to notice is if you are in PG or Mature land. PG is much more restrictive - no violence, no sex, no offensive language, no running around naked or with “revealing” clothes (or even changing clothes!). If you think this is too restrictive, stick to mature regions and events.”
… which explains things some more. Hey, I’m a stranger in a strange land here - Second Life is being constructed from first principles, to the extent that the entire world is streamed as it is built; nothing is fixed. All the same, if I do get around to constructing a residence I think I’ll do it in the PG world for now.
05-25 – Hols
Hols
Birmingham airport, Terminal 1. The gate for Heraklion has yet to be announced, I have bought the spare batteries and stocked up on Sudoku. I reach for “Cooking with Fernet Branca,” which starts at an airport and whisks the reader quickly away from the masses and up into the mountains.
All is well. See you in a week.
June 2006
06-05 – Sympathy is what we need?
Sympathy is what we need?
As I come back from a most wonderful week in Crete I find the world is much as I left it - Rush fans are still avidly awaiting the next output while complaining that it’s a rip-off, Software as a Service is still going to raze all previous models to the ground, and someone’s taken the pile of gravel on Freecycle. There are no doubt still dubious goings-on on Second Life, and the earth is still warming up. There was a very interesting Panorama programme on the latter last night, on the Beeb, and one remark caught my ears - that US government officials had been “actively seeking influential people with a sympathetic view” (or words to that effect) to the government’s perspective.
This idea - or at least, its statement - left me a bit cold, to say the least. Influencers have opinions of course, and are individuals that will have sympathy to one perspective or another. A cursory glance over the output of some of my IT analyst colleagues, competitors and other influencers shows that some write about one company more than the rest, some favour small innovators over the big guys, and some think one model of software delivery (SaaS, open source, etc) will “win” or at least is more interesting than the traditional models. The oft-asked question, “Do you follow market X?” could be interpreted as meaning “Are you sympathetic to market X?”, as if not, why bother - so we end up with the potential for bias, just by standing still.
Overall, perhaps, any bias irons itself out if one is broad enough in seeking opinions, but let’s face it, it’s not in the vendors’ interests to “go broad”. With the influx of blogging as well, opinions are often seen through the filter of bloggers, who will tend to favour - err - blogging and the Web 2.0 gamut over past and parallel technologies. Not necessarily wrong - just incomplete. No doubt I’m as guilty as the rest about this, but it’s definitely one to watch for, and to keep in check. Meanwhile, the great thing about MWD and its IT-business alignment mantra is that it sets a framework for allocating and measuring sympathy - according to business value. As I come back from hols I set the following goal for myself: I can keep my favourites, but when I talk about them I shall endeavour to do so from the perspective of business value.
Even if voice recognition is the killer app for handheld devices, whatever the naysayers may think (:-) you heard it here first), it’ll have to be useful and usable to end-users first.
July 2006
07-07 – Thought for the day
Thought for the day
Bombs don’t discriminate.
07-29 – Isn't it scary...
Isn’t it scary…
… when your own kids can do things you can’t do? Here’s one of Ben’s animations. I taught him all he knows, honest…
August 2006
08-04 – A Tale of Two Horses
A Tale of Two Horses
I now have the latest beta of Internet Explorer installed, and it really is quite good. I won’t be de-installing it, as the performance issues in the previous beta seem to be resolved. I have, however, had to install Firefox in parallel, for a number of reasons: not least that certain Web sites (Blogger?) still don’t work properly, but also that some sites don’t list IE7 as a supported browser.
It’s interesting - following the browser wars of the past, if I squinted a bit I could have the impression that Microsoft is going out of its way to open a hole for Firefox. I know, that would be impossible… but in this world of Betas, perhaps having a stable alternative is the key to the future.
Alternatively, perhaps Microsoft could just fix the bugs before release, beta or otherwise.
September 2006
09-21 – Determinedly blogging
Determinedly blogging
It’s been a bit quiet, hasn’t it? I’ve been testing a few blogging tools, notably Zoundry and (this post powered by) SharpMT. Let’s see if it works, in more ways than one.
09-21 – Does anyone recognise themselves in this?
Does anyone recognise themselves in this?
Today’s Dilbert. Deeply unsettling…
09-21 – Eucon say that again
Eucon say that again
How very remiss of me. Two weeks ago I attended Eucon, the Rush convention at the Limelight (of course) club in Crewe, UK. It was a convention in the traditional sense - seasoned fans taking the opportunity to share their experiences and share the experience. As well as shifting a few books, it was a distinct pleasure to meet so many friendly people from the UK and elsewhere around the world. It was also a delight to finally meet Donna Halper, after so many phone calls and emails! Thanks very much everyone for making me feel so welcome, to Paul for keeping me company, and of course to Ashley for inviting me!
09-21 – Looks like I missed a good debate
Looks like I missed a good debate
Well, the gang - Neil, Dale, James and others - were at Hursley earlier this week, and at lunchtime the conversation turned to an interesting discussion about kids, computers, antisocial futures and all that. I’ve not much to add apart from the fact I wish I’d been there (not just for this discussion!), but two things spring to mind:
- the best advice I’ve been given is to keep the kids’ computer access somewhere they won’t be shut away (i.e. not in the bedroom). Not only does this enable you to keep a parentally watchful eye, but it also means they will still be partially involved in what is going on.
- seems to me that there are three behaviours - passive/passive (TV), passive/active (traditional videogames) and active/active (social software). While these categories are broad generalisations - there is nothing more active than watching kids fight over whose turn it is next - the general tendency seems to be towards more activity and interaction, rather than less, which (again, in general terms) I see as a good thing.
Meanwhile I had a very interesting conversation with a CIO earlier this week, where he was talking about kids coming from college with all their computer-literate-social-networking skills, and the issues these people faced in the sometimes uncommunicative environments of the workplace. It will be interesting to see what happens in a few years time, when the trickle becomes a flood.
Powered by Zoundry
09-23 – This morning I will mostly be running…
This morning I will mostly be running…
… the rather neat mashup between Pandora and Last.fm, at Real-ity’s PandoraFM. Recommended music I haven’t necessarily heard before, and it logs it all so I don’t have to.
Shmokin’ ;)
09-24 – Claude Nouveau for all your wine needs
Claude Nouveau for all your wine needs
A few years ago we were travelling through France and doing a bit of wine buying in the Burgundy region (as a Frenchman once said to me, it’s harder to get a bad Bourgogne than a bad Bordeaux…). We stumbled across Claude Nouveau, propriétaire-récoltant in Marchezeuil who, from his small establishment, helped us to choose some lovely wines to take home. We are still drinking them, and once a year I get a price list from M. Nouveau, informing me of the latest from his vines.
Now others might see this as junk mail, but every year it raises a smile, as he personally sends out a letter to me and no doubt many others. I’d love to buy from him again, and if ever I am passing through the region I will no doubt do so. Highly recommended: if ever you are in need of a few Santenay/Maranges at reasonable prices, you know where to go.
Update: you can also go here, from the comfort of your own armchair.
Update 2: I’m clearly not the only person to think highly of M. Nouveau’s outputs!
09-25 – On switching content with networks, and great sleep cures
On switching content with networks, and great sleep cures
“What are you thinking?” Liz said to me last night, just as we were settling down for bed.
So I talked to her about how, when I started playing World of Warcraft, I thought it was a game with a social element, and how, now that I’d completed the levelling phase, I saw it more as a social network with a gaming element. I mentioned watching the Tottenham-Arsenal match on TV last Saturday and said how it occurred to me that in the old days, it was mainly about the football with a celebrity element, and now it was more about watching the celebrities, with football one part of that. I commented that maybe there were some parallels, as the content - gaming or football - was being superseded by the contextual network that ran around both, and perhaps it was only a matter of time before sport and online gaming, watching and networking, also merged. I started to say something else…
… then I realised Liz was sound asleep.
09-26 – FrieNDA
FrieNDA
A term has popped up in the valley that I quite liked - the first reference I can find is here: Web X.0: FrieNDA.
Thanks for that - of course, people have been practising it for years, but it’s nice to have a word for it.
09-30 – Porcupine Tree at the Astoria 29/09/06
Porcupine Tree at the Astoria 29/09/06
Not a remarkable gig from the point of view of performances - as fine as ever, but half the set was new material and I’m not the best person to ‘get’ live music from a cold start… most importantly, the best audience reaction I’ve seen at a PT gig, and only a small proportion of old faces in the crowd. This wasn’t preaching to the converted - which bodes well for the new album next year.
BTW new material verdict: not bad - some strong songs, a couple of growers.
09-30 – Today I shall mostly be...
Today I shall mostly be…
Juicing pears. Mmm.
October 2006
10-01 – Cotswolds to Capetown
Cotswolds to Capetown
Just back from my local, to see Nick and Nick as they set off to Cape Town on motorbikes. Emotions were kept in check until the helmets went on, at which point it was best they got on their way. Nick Graham is raising money for an anorexia charity, and Nick Clarke for Leukaemia research. More soon but for now here’s a link to their site, you can follow where they get to on their blog (they have a Blackberry).
10-03 – H at Riffs Bar 01/10/2006
H at Riffs Bar 01/10/2006
Having missed all of the h-natural gigs for reasons of just being too darn busy, I was delighted to be able to catch Mr Hogarth at Riffs on Sunday night. It was one man and his Yamaha keyboard, playing a mix of Elvis and The Beatles, Tim Buckley and a few tracks from Marillion. Actually, it wasn’t just him - Swindon’s own guitar hero and “chum” Dave Gregory joined h on stage for a couple of songs, a sublime Song to the Siren and XTC’s The Loving.
Which was nice. Actually there was more to it - the Buckley-esque Gabby Young put on an admirable support performance. It was all in a good cause as well, being part of the Oxjam music event. Shmokin’.
10-03 – IT people - have you subscribed yet?
IT people - have you subscribed yet?
When I first joined Neil Macehiter and Neil Ward-Dutton at MWD, I had an immense feeling that it was a good move, but I didn’t for the life of me know why - sometimes, you just have to go with your gut. Eight months later and I know it was the right thing to do, because we share a common goal - to help people do Information Technology better, breaking with the habits and mindsets of the past. At MWD we are building a corpus of information about how this can be done, and we’re making it available via Creative Commons to anyone who feels it could be useful to them. Currently we have reports on areas such as Identity Management, Service Oriented Architecture and IT Service Management, as well as vendor capability assessments and general briefs on vendor offerings, and we’re adding to the pool all the time. If you feel these reports would be useful to you, or if you would like to sign up to our monthly newsletter “Signposts”, please do register your details. There is no obligation whatsoever for this; please do pass this information on to anyone you feel might benefit.
10-03 – Just back from
Just back from
Actually it was Barcelona - but all I saw of it were some street signs in Spanish (and probably Catalan), on my way from airport to hotel, then from hotel to airport the next morning. That was my choice as things are a bit busy at the moment, but a shame nonetheless as it’s a great city. Bizarrely, I had to throw away my disposable razor on the way out, only to buy another exactly the same once I was through security. Explain that one.
10-04 – Maybe this is why I'm so messed up
Maybe this is why I’m so messed up
Just looking at the site stats for last month. They say search strings are revealing, so here are the strings that had people land on my site:
| # | Hits | Share | Search string |
|---|---|---|---|
| 1 | 31 | 48.44% | qtek 9100 |
| 2 | 10 | 15.62% | qtek |
| 3 | 2 | 3.12% | alphadoku solution |
| 4 | 2 | 3.12% | dentist equipment |
| 5 | 2 | 3.12% | four colours problem |
| 6 | 2 | 3.12% | jon collins |
| 7 | 2 | 3.12% | q-tek |
| 8 | 1 | 1.56% | agile web services |
| 9 | 1 | 1.56% | bluetooth interferes with wireless |
| 10 | 1 | 1.56% | charlene zivojinovich |
| 11 | 1 | 1.56% | colin woore |
| 12 | 1 | 1.56% | courses for myspace |
| 13 | 1 | 1.56% | courses myspace |
| 14 | 1 | 1.56% | funny lessons from mishaps |
| 15 | 1 | 1.56% | goat sheep on the farm |
| 16 | 1 | 1.56% | invisible earbuds |
| 17 | 1 | 1.56% | marillion |
| 18 | 1 | 1.56% | porcupine tree |
| 19 | 1 | 1.56% | punk eek |
| 20 | 1 | 1.56% | sudoku creating puzzle |
OK Qtek I get, Marillion is fine and I’m pleased to see punk eek and alphadoku. But funny lessons from mishaps? Goat sheep on the farm? No idea, sorry - perhaps a deep insight into my psyche.
No sheep jokes, please.
Edit: I take it all back - those phrases do indeed link to the site. Well I never.
10-04 – Stylus styloss
Stylus styloss
A new stylus arrived in the post this morning, after I dropped the old one in the dark of the Astoria last Friday night. I felt weirdly bereft without a stylus - a bit like going out without a wallet or a mobile - I would reach for it from time to time and feel distinctly uncomfortable when I found it (still) wasn’t there. So, I am now complete once again. Scary.
Incidentally, I traded in my iMate Jam for a JasJar, or whatever this one is called - the Orange SPV M5000. It wouldn’t suit everyone - it’s very big, for a start - but it has a hi-res screen, full keyboard and, delightfully and unexpectedly, it supports 3G/UMTS. So I’ve also been able to cancel my WiFi contract with BT Openzone. It’s got a faster processor than the iMate; I’d say this was the minimum spec necessary if you want to use it as a laptop replacement. Oh, and it can also play DivX video in full screen! Now I can record from the TV on my Archos AV340 and copy it onto here (it can also do blog posts) for viewing on the move. Nice when a plan comes together!
10-06 – Desert Island Discs
Desert Island Discs
It’s a Friday, so here’s 10 songs…
Everybody Hurts – REM
Mr Blue Sky – Electric Light Orchestra
Lilac Wine – Jeff Buckley (cover)
Clarinet Concerto – Mozart
The Great Gig in the Sky – Pink Floyd
Ocean Cloud – Marillion
Sunday Bloody Sunday – U2
Arriving Somewhere But Not Here – Porcupine Tree
The Psychic – Crash Test Dummies
Song to the Siren – This Mortal Coil (cover)
10-10 – Dirty little analyst secrets
Dirty little analyst secrets
#86: Analysts read their competitors’ reports
Actually I make no bones about this - I’m of the opinion that there is nothing new under the sun, and that we can all stand on each other’s shoulders and get that little bit closer to it (or the Oracle, whichever is our preference). Of course blogging leads to a kind of collaborative analysis, but some firms do still find it difficult to share. Equally, I’ll quite happily promote the work of other firms - today I received the latest, typically excellent newsletter from John Katsaros and Peter Christy, and I was prompted to comment on it. I heartily recommend it to anyone that’s interested in the goings-on in the Valley (that’s Silicon, not Thames). You can see the kinds of things it contains, and sign up here.
On the subject of newsletters, I embarrass myself every Friday morning as my delight at receiving Silicon.com’s Roundup is once again eclipsed by the fact that I will have totally forgotten it would be coming. Like a goldfish swimming round a bowl, I am… but anyway, it has often been the perfect antidote to a hard week.
10-11 – Too many cool things
Too many cool things
I was about to comment on Guy Kawasaki’s Six Cool Things post, when I noticed the title of the post was “five cool things”… I have no idea which was the sixth but I’d bet on the shirt!
Indeed, I was going to propose a seventh - Salling Clicker. It turns your mobile phone (or in my case, Pocket PC) into a Bluetooth presentation device. The Pocket PC version even lets you see what the next slide is. And it works with iTunes/Media Player. And it can work over wi-fi. One of the coolest things I’ve installed, then bought, this year.
I haven’t tested the Bluetooth range, but it hasn’t let me down yet :)
10-12 – Blogger goes multilingual
Blogger goes multilingual
Great verification word from Blogger this morning - ?rdpam. Hang on, I’ll just go get my Germanic keyboard…
10-13 – History of Guns in WGC tonight
History of Guns in WGC tonight
Goodman Max Rael is a many-faceted character, and a good mate - or at least he would be I’m sure if I saw him more than once a year. Anyway his band The History of Guns is playing Welwyn Garden City tonight.
Think goth-punk with lots of shouting. Nische.
10-13 – Intrepid bikers on Fanta
Intrepid bikers on Fanta
Nick and Nick have reached Africa at last - that’s 12 days to cross Europe and drop down through the Middle East. Hurrah! Spare a thought for the poor lads - they’ve been without beer for three days…
Cotswolds2Capetown: Day 12 - Thursday 12/10/2006
10-13 – Performancing fixes
Performancing fixes
Been testing some changes to Performancing, a rather handy blog editor that runs in Firefox (I’m also still running Zoundry; it’s better for backups etc). According to this post on the Performancing forum it looks like a number of ISPs are blocking access to the xml-rpc file. There is a workaround, but both Performancing and Zoundry require you to recreate the blog accounts. So be it…
P.S. Zoundry also seems to upload the graphics better…
10-16 – Any SD Card computer Geeks Out There?
Any SD Card computer Geeks Out There?
As I was spending a bit of time tuning my Windows XP settings, I noticed I could set the paging file to be on any drive, even ones that were only temporary - i.e. hanging off a USB port. Suddenly I had what might be a brainwave (but might not): it occurred to me that I might be able to make use of the SD card port as a solid state paging volume. It wouldn’t be as fast as RAM, but surely it would be faster than paging onto the hard drive? Laptop memory is a darn sight more expensive than expansion cards.
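For what it’s worth, here’s a minimal, read-only Python sketch of where Windows keeps this setting - assuming the standard registry location, and with the SD-card drive letter and sizes entirely made up. Actually moving the pagefile needs admin rights and a reboot, so treat this as illustration only.

```python
# Read-only sketch: inspect where Windows keeps its paging-file settings.
# Assumes the standard registry location; the SD-card entry below is hypothetical.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    # REG_MULTI_SZ value: a list of strings like "C:\pagefile.sys 1024 2048"
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

print("Current paging files:", paging_files)
print("Candidate SD-card entry:", r"E:\pagefile.sys 512 1024")  # path, min MB, max MB
```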
10-16 – The demise of blogging
The demise of blogging
It seems ironic that one should write a blog about blogging being on the wane, but it is. History will date the start of the blog-downturn to when Robert Scoble left Microsoft, as (perhaps) he found it impossible to square his corporate and blogging existences. He’s now a PodTech blogger, and spends as much time promoting his multimedia show as he does talking tech. Which is fine. Similar examples can be found here and there - a journalist leaves a publication because he says something out of line, implying that those who remain will be expected not to; links to blogs indicate that the blog is no longer kept up to date, as the blogger now works for XYZ-insights.com, the Web-based publication. The number of blogs may still be on the increase, but this is more a factor of the simplicity of the mechanism than the betterment of the blogosphere as a whole. Up at the top, the club of elite bloggers in each sphere is now saturated, or if not it shortly will be. Bloggers of all persuasions are talking about cutting down the numbers on their blogrolls, implying that the number of relationships they can maintain is arriving at a sustainable level. From that point on the growth is organic, not exponential.
Which is fine. I have never had any problem with blogging, as a mechanism - in the same way that I don’t have a problem with knives and forks, lathes and printing presses. The problem was never the pamphlet, nor the pamphleteer, it was the idea that pamphleteering was in some way going to replace what had gone before and lead us to some greater thing, previously undiscovered. The greatest of greater things was, inevitably, the collective consciousness - this, too, is on the wane as neither Wikipedia nor any loose-knit community of like-minded individuals has proved that it will ever be able to agree on anything. The most hardened bloggers take offence at each other’s interpretations of their own remarks, pointing more to the downsides of collective anarchy than its upsides. If blogging were an island tale, William Golding would be writing the script. Or perhaps George Orwell - for as we all know, some bloggers are more equal than others - by humbly suggesting they are not as bright as their readers, they are nonetheless illustrating the divide.
Which is fine. The point is not to say it is bad, or it is good: it is neither, and it is no different from what went before or what will come after. Humans interact following ancient models and customs that have been documented to the nth degree by anthropologists, sociologists and various other types; no doubt they are at it right now, drawing up the table of personality types and comparing them to (or should we say, “mashing them up with”) the blogiarchies. No doubt they did it too with open source, and I am sure they will do the same with whatever comes next - for in this very human existence we have, there will always be a “next”. Put it this way - teenagers are not blogging, or at least they don’t see it as such, more as an extension of Myspace, Bebo and MSN. Nor will they ever - something else will build upon the foundations they are already creating, and they will have their own version of the road to Nirvana. It’s as old as the Tower of Babel, though admittedly a lot more sexy.
So, blogging is doomed then? Of course not, the mechanism will achieve its rightful place on the workshop wall, next to all the other tools that have failed to be rendered obsolete. Cries that the media was in some way doomed became laughable as soon as the more savvy media chose to add blogging tools to their own sites; similar debate still rages over other outspoken types such as columnists and IT analysts, but it won’t last much longer. (Of course there were gaps opened up, new success stories, and there were casualties made of older hacks that were unprepared to move with the times.) The best way to knock the wheels off a new bandwagon is to integrate it with everything else, and that’s as true for masses-hype as it is for corporate-hype. Blogging in its current form cannot survive - not because it is in any way inadequate in itself but rather because, as a single mechanism, it is constraining. One blog cannot cope with multiple personalities with multiple things to say - and much as I enjoy reading about the day-to-day activities of my friends and family, I confess to not giving a monkey’s grunt about Irving Wladawsky-Berger’s baseball habit or Bob Sutor’s porch. The mechanism as defined is not capable of filtering out what I would consider noise, just as I cannot share information discriminately with my family and friends, my musical friends, my other gaming friends, my work colleagues and my customers and prospects: instead, I use multiple blogs, channels, columns and I am delighted to be a part of all of them. No doubt I will one day be able to manage everything from a single point, perhaps the resulting mechanism will be called something to do with blogging (though hopefully with a nicer name), but there’s just as good a chance it will be called something new.
At which point, of course, we can start all over again, convincing the cynics (as I was, about blogging) that there was more to it than they thought, then riding the wave for a while before realising (as I am, now) it is starting to peter out and scanning the horizon for the next one. Enjoy the ride, but let’s not get too hung up about the shape of the board.
10-17 – FUDCO - for all your corn needs
FUDCO - for all your corn needs
New brand available from Tesco’s supermarket in the UK. How soon before they start selling software using this brand? We can only hope!
10-17 – Master of None at the Sundial Theatre Cirencester, 14 October 2006
Master of None at the Sundial Theatre Cirencester, 14 October 2006
Three acts - a singer/guitarist called Richard Cox, a guitar and fiddle combo called Take Two and the main act, Stroud-based Master of None. An evening of gentle music, punctuated by amusing chat and the retuning of various instruments. Multi-instrumentalists Howard Sinclair and Alex West deserve to be shot (not really) due to their ability to switch from flute to sax, guitar to piano at the drop of a felt cap; Colin Sillence brings the experience, wonderfully virtuoso fingerpicking and dodgy lyrics. An enjoyable evening out - my only criticism might be that there could have been a bit more oomph in the main set; folk doesn’t always have to be quite so laid back, though perhaps that was as much down to the regimented seating as anything.
10-18 – Anyone going to Storage Expo?
Anyone going to Storage Expo?
Storage Expo - The UK’s only dedicated data storage event
For the next couple of days I shall be at Olympia, London for Storage Expo. Anyone who thinks storage is boring need only to… no, forget that. In any case, do stop by. Tomorrow I shall be chairing a session on future-proofing storage and I’ll be on a ‘Dragons Den’ panel where we get to grill the vendors. Now, that should be a lot of fun.
See you there perhaps.
10-18 – CIO P2P
CIO P2P
I picked this up from Vinnie Mirchandani’s blog:
Charles Zedlewski (of SAP) sent me this quote after reading my “elephant hunter” post
“In the 1970’s when CIO’s wanted to know what to buy, they asked IBM.
In the 1980’s when CIO’s wanted to know what to buy, they asked Andersen Consulting.
In the 1990’s when CIO’s wanted to know what to buy, they asked Gartner.
In the 2000’s when CIO’s want to know what to buy, they ask each other.”
I thought this was both simple and very profound - it implies - and this is something that I am seeing - that CIOs are starting to take control of their own destinies. I’m unsure what to add without sounding patronising, but I’m all for it.
10-18 – Mick Wall has a blog
Mick Wall has a blog
The music journalists’ own classic rock god Mick Wall has his own blog, and in these pseudo-declarative, self-obsessed diarising days of “what can I disclose about my goings-on” Mick offers a refreshingly uncompromised version of what life’s like at the front line of rock. Tax. Meetings. Francis Rossi. Crapping. A highly readable account - if mine were a “proper” blog it would be modelled on Mick’s. Though it wouldn’t be quite as interesting.
Thanks to Simon for the tip.
10-18 – The Matrix Revisited
The Matrix Revisited
The Matrix sequels have to go down as one of the greatest wasted opportunities in cinematographic history, but time is the great healer. Having got over the fact (it took me a while) that there is no philosophy worth discussing, I watched Matrix Reloaded over the weekend. It actually stacks up pretty well as an action film - more Terminator than Blade Runner perhaps, but the motorway sequence is incredible. Maybe it’s time to take another look at Revolutions.
10-20 – Second Sun
Second Sun
Helzerman’s Odd Bits - Sun holds press conference in Second Life - what to wear?
Only a week or so late, I just popped along to the Sun Pavilion in Second Life. There wasn’t much going on, strangely, so I went and had a look at the pictures. One may scoff at the idea of replacing good old 2D interfaces with an immersive environment, but it’s got its upsides - one can look at a number of things at once, there’s less pressure, it’s more intuitive, etc. It ain’t going to work for everything but I can see places where it would.
It’s also probably the only place in the world I’ll get my hands on a Sun Container (and it may be the first place it ships - I wonder if they’ve thought of that - directly hooked into their Grid facility?)
Didn’t get offered cybersex this time, either.
10-20 – The best conference giveaway ever
The best conference giveaway ever
Just back from Storage Expo, which was a surprising amount of fun - generally because it was great to catch up with a number of people I’d happily call friends if I ever saw them more than once a year. The sessions were good as well, particularly the Dragons Den thing. Those vendors really can be good sports!
But anyway, I just have to comment on the Gyrotwister I picked up at the PCICASE stand. It’s a gyroscopic spinner that you have to try to keep going by the power of your wrist. Forget the hats, t-shirts and pens, this is the business! Though it could get you some funny looks if you tried it out in public… it comes with a CD-ROM, which can measure the RPM by listening through the computer microphone. Just how cool is that? I thought the USB hub and cup warmer I picked up at a Dell gig was the ultimate, but this thing wins hands down. You can even get one with lights…
10-21 – Comment apologies
Comment apologies
I’ve been asked why comments aren’t working on this site; I’ve tried to fix things but the answer still eludes me. Something to do with .htaccess/xmlrpc/other php files and the fact the blog is in a “wordpress” subdirectory. Normal service will be resumed… but in the meantime, do please email me and I’ll post comments that way. Anything(at)joncollins.net should reach me.
10-23 – Ben Folds in Second Life
Ben Folds in Second Life
Interesting article here on Ben Folds and his Second Life gig. More interesting perhaps for the comment trail - I don’t know much about the music but it’s a good indication of what works and what doesn’t.
10-23 – Keane at the Wolverhampton Civic, 22/10/06
Keane at the Wolverhampton Civic, 22/10/06
I was full of trepidation about this event, not just because Tom might have rushed back to rehab at any moment, but also because we didn’t have any tickets - I was assured they would be available for pick-up at the box office, but the only proof I would have would be after the hour-and-a-half drive. As it turned out both Tom and tickets were there, and what seats - balcony right, just above the stage (Tip: never believe them when they say lines open at 9am, call 15 minutes earlier).
Given that Keane are considered about as middle of the road as a squashed hedgehog, to put it bluntly, they kicked ass. Tim played keyboards like Peter Crouch plays football - a gangling mass of energy, playing without letting up but always in control - I wouldn’t want to be his keyboard tech. Trim-looking front-man Tom seemed genuinely surprised at the level of support he received from the well-lit West Midlands crowd, and responded accordingly. Drummer Richard Hughes was understated but solid and capable, as I think he probably is in real life. Not much to say about the set list as the band aren’t yet over-endowed with material, yet each song was bashed out with energy and abandon.
Say what you like about Keane, but there’s a band that are doing it for love and they are getting it paid back in spades. Long may they continue.
10-26 – Torchwood and Cardiff
Torchwood and Cardiff
Just watched the first episode of the Doctor Who spin-off, which is not only set in Cardiff, it proudly advertises its Welshness. Very good, very enjoyable. Mention of “the rift” takes me back to when I saw BT announcing that it would be launching its 21st Century Network trial - in Cardiff. Was I the only person to think - “no, they can’t do that - it’ll be right on top of the rift in the space-time continuum!”
And then I remembered that one of them was just an imaginary story about a fantastic collection of technologies set some time in the future, and the other … was a TV programme.
Sorry, couldn’t resist it :)
10-26 – Why I like Zoundry
Why I like Zoundry
… for my blog editing:
- because it doesn’t need me to install any frameworks
- because it doesn’t crash
- because it auto-creates thumbnails
- because it is usable and straightforward
- because it doesn’t clog up the editor with click-through advertising capabilities
I’ll add more if I think of anything else! I also still use Performancing a little.
Powered by Zoundry
10-30 – Comments are go...
Comments are go…
I’ve done a clean install of Wordpress 2.0.5, and it looks like I have comments back. So, if you were one of the three people who tried to comment on a post in the past 6 months, you should be OK now (I know, I know, it helps to post). I won’t install anything clever unless I’m sure it doesn’t break anything. Honest.
10-30 – Reviewed your spam settings recently?
Reviewed your spam settings recently?
This morning I finally got round to visiting my email hosting providers and checking what my email forwarding settings were. I’ve been getting a lot of spam through certain addresses, and also one of my sites seems to be being used as a spam sender address, which is a pain - not least because I’m getting all the bouncebacks.
So, I’ve just been online. Lo and behold, one of my providers now has an enhanced spam filtering service, and another now has an “auto-delete” of the bouncebacks, so while I can’t do much about them for everyone else, I can at least reduce the pain for me. I had no idea about these services; perhaps they’d been mentioned in a bulletin at some point, but I quite possibly deleted it - as spam, no doubt.
So - it might be worth visiting your hosting provider and seeing if they have enhanced their services while you weren’t looking! Of course if you’re the kind of person to do this on a regular basis, then I take my hat off to you.
Powered by Zoundry
10-31 – Online backup solutions
Online backup solutions
The demise of a friend’s hard drive prompted me to take a quick look at the online backup market, hence the delicious links. While the shortlist is down to 3 companies, I’m testing AOL’s XDrive service first, as it offers 5Gb storage for free. The software still has to prove itself but it’s difficult to beat at that price.
November 2006
11-02 – First word about the Microsoft-Novell partnership
First word about the Microsoft-Novell partnership
And the word is -
More to follow, over at MWD.
11-02 – Layering the blogosphere
Layering the blogosphere
I’m understandably pretty interested in how IT analysts, bloggers and so on interact and overlap - the nascent “influencer community”. A conversation with James earlier today resulted in 4 levels of documentary process, from the bottom:
- blogging - writing about the facts of the matter, often in specific interest areas
- aggregation - reading lots of blogs and other sources and summarising what’s being written about
- analysis - consolidation and interpretation to deliver some understanding about what it means and what could/should happen as a result
- influence - the ability to project the information or interpretation to an appropriate audience.
The point is it’s about roles, not people. I’m an analyst, always have been in many ways (even when I was an IT manager) and always will be looking to understand how things fit together. To be frank I don’t read that many blogs (tens rather than hundreds), and I am thankful to the aggregators who have the inclination and skill to read multiple sources and summarise them. Often aggregators will analyse, and I know analysts that aggregate - the more the merrier - and all of the above rely on some kind of influence - if a tree falls in a forest and I write about it but nobody reads my blog, did it make a sound? So - am I a blogger? Could do better. An aggregator? Sadly not. An analyst? I do hope so!
11-03 – The music industry is f*cked - Peter Jenner
The music industry is f*cked - Peter Jenner
Music industry veteran and one-time Mike Oldfield link man Pete Jenner waxes lyrical on the state of the music industry. Everything I thought was true, but wasn’t sure enough to say. Or maybe I did.
11-06 – Jo McCafferty, Cheltenham, 04/11/2006
Jo McCafferty, Cheltenham, 04/11/2006
Bit of a fix this one, as Jo was playing at Em and Jase’s wedding (and what a wedding!) - but it was lovely to see her play again, one of those people I wish could reach a wider audience. Restrained and soulful, yet energetic and funny.
Jo is a singer-songwriter who has played regular support slots for Midge Ure, among others. For more information, click here.
11-07 – Nice graph
Nice graph
I’m not absolutely sure how web site hits relate to blog stats, but what I do know is that when I stop posting they go down, and when I post they go up. I’m just capturing a copy of this month before the this-time-last-year stats vanish.
11-09 – Life 2.0 and Pixie Boots
Life 2.0 and Pixie Boots
This time last year, I remember talking to some colleagues about blogging. Pretentious twaddle, was the feedback. Now, they’re all doing it.
Last week, I was talking to some other analysts about Second Life. “Have you got avatars?” I asked; “Don’t be silly,” was the response. I’ve had a similar experience with Myspace.
So, the question is, will they have Second Life avatars next year? My guess is yes. And we’ll have gone through the “Of course, it was all crazy then, but it’s useful now” debate.
Reminds me of when I went to University, and got my first pair of pixie boots - everyone in Leeds was wearing them. When I went home, Paddy (who had stayed down a year) laughed his head off, as did many other old mates. A year later, Paddy came back from his University - in pixie boots.
“But…” I spluttered.
“Yes, but they’re fashionable now,” he said.
What a difference a year can make.
Caveat: Robin’s already there.
11-09 – Something to Declare
Something to Declare
Aw, shame. The fuss on declarative living has died down somewhat… and I had this draft on Wordpress I was saving until I could get down to writing a really good article about it. Might as well post these thoughts now, perhaps something will come of it some time.
Consider it a declaration of intent ;)
Thoughts for a future post…
- Declarative living is about much more than uploading preferences
- Depends on roles - what persona is being declared - cf duping Google, duping Audioscrobbler
- Non-declaration is also a tool - publication of reader figures, punching more than one’s weight
- Pre-requisite to engagement is to declare interface
- Declaration, like service, goes all the way down
- Passive declaration by blogging, active declaration needs to be more than it is
- In future, declaration of capabilities to enable service interface
- Agreed that (anonymous) declarative mechanism could replace surveys, charts (Audioscrobbler)
- Dig out blogging is narcissism post
11-13 – Hotel wireless: nice when it works
Hotel wireless: nice when it works
There’s been quite a lot written recently about the failures of hotel wireless in London, but things seem to be going my way this evening. After a little mix-up in travel at a Radisson hotel (no fault of the hotel), I booked a last-minute “top secret” hotel round the corner, which to my surprise (it’s a lot cheaper) was another Radisson. This hotel chain has free wireless in both hotels; it may extend to others, but it certainly works in the Covent Garden area.
So, having made my online booking, I was then able to email my booking code to the front desk, as the back-office systems weren’t integrated enough to deal with such immediacy. To me, that was fine - as I had free wireless, I’d happily send them any email they needed.
Nice when a plan comes together.
11-15 – What a palaver! On rubber balls, customer service and spam
What a palaver! On rubber balls, customer service and spam
I can see it all so clearly. Over the past decade, hosting companies and other internet service providers have been building their businesses and implementing appropriate customer service mechanisms. In general this has followed a 3-tier approach:
- web based self-service - for the standard stuff
- email - for the non-standard stuff (or things they don’t want you to do so much, like leaving)
- phone - for the more complicated stuff
Phone support can be slow and laborious, in some ways deliberately causing the punter to opt for one of the other two mechanisms. Bottom line: it’s not perfect, but it works.
Or worked. Over the past few days I’ve been trying to communicate with Verio to transfer a domain. Verio’s fine, I just wanted to consolidate down the number of hosting companies I used, and they got the short straw. But phew - trying to work with them on email was like trying to throw rubber balls through a very small hole, ten feet away! First, it didn’t help that they don’t make their email address for this sort of thing particularly obvious (there’s a list at the bottom). Second, the amount of spam protection on these email addresses is just prohibitive. I must have gone through ten combinations of email sources, addresses and subjects before I finally managed to get a message through. Even once I’d done that, I was asked for more information and I had to do it all again…
I’ve got there in the end, but I took away a number of thoughts. The first was that what was initially a workable model - the three-stage approach above - has become unworkable due to the late addition of spam protection, and such companies need to rethink it. Second, with my analyst hat on, it is a clear example of how security needs to be about business risk management and not just “block that nasty email”, IT risk avoidance.
The business risk in this case of course, is that customers get peed off and go somewhere else.
Here are those emails - you wouldn’t guess them!
domreg@verio-hosting.net; domains@ntteuropeonline.co.uk; shared_support@ntteuropeonline.co.uk; support@ntteuropeonline.co.uk
The tip (which I got by phone, ironically) was to put the web site address in question as the subject, which overrode the spam filter - you have to do this every time you mail them, don’t just hit reply and expect Re:whatever to get through. Finally, try to mail from the registered email address for the account administrator, otherwise they’ll just ask you to do it all again.
11-17 – A brief history of Coates
A brief history of Coates
This history has now been updated at:
http://coatesvillage.wordpress.com/a-brief-history-of-coates/
Please update your bookmarks, and thanks for all the feedback!
11-17 – And this week's word is... Skullet
And this week’s word is… Skullet
Think: mullet (that ’80s haircut), now think of the same person, a bit older, losing it on top… and you have the skullet.
Expect to see plenty of them at revival 80s pop gigs. Photos here.
Thanks Shane, Tips, everyone!
11-17 – Life's (not) a long song
Life’s (not) a long song
Last.fm is great, isn’t it? Well, perhaps not, for some artists. Why? Because its unit of measurement is the track, not the album.
Take a track such as Jethro Tull’s ‘Thick as a Brick’, for example. The fact it comes 8th in the Tull chart is astounding, given that it is 12 minutes long. I would be prepared to wager that it would be higher, if it were shorter - for the simple reason that in any 12-minute period, it can only be played once, whereas a four-minute track could be played three times. Mike Oldfield’s got it even worse of course, with Amarok clocking in at over an hour for a single track! Is it any wonder it comes in at only 94th on his own chart?
This matters for artist charts as much as for tracks. If, say, one is listening nonstop to Pure Reason Revolution, each play of the debut album ‘Cautionary Tales for the Brave’ will result in 4 tracks, i.e. 4 “votes” for the band. A single spin of Moby’s ‘Play’ would result in 19 “votes”.
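To put rough numbers on the point, here’s a back-of-envelope Python sketch, using the figures above purely as an illustration:

```python
# Back-of-envelope version of the argument: Last.fm counts tracks played,
# so long tracks and short track listings earn proportionally fewer "votes".
track_lengths = {"12-minute epic": 12, "4-minute single": 4}  # minutes per track
for name, minutes in track_lengths.items():
    print(f"{name}: at most {60 // minutes} plays per hour of listening")

album_track_counts = {"4-track album": 4, "19-track album": 19}
for name, count in album_track_counts.items():
    print(f"{name}: {count} artist 'votes' per complete spin")
```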
Now, of course there are those 3-minute boys (not them, but theirs is the song) who would say that it serves anyone right if they have songs that are too long, but that’s neither here nor there. I wonder how long it will be before an artist actually constructs a track listing so as to dupe mechanisms such as Last.fm.
It’s only a matter of time, surely.
11-17 – Things I should have patented #335
Things I should have patented #335
Triggered by the faux-patent debate, I was reminded of something I thought might actually be worth registering at some point, namely a coding system that generates characters from combinations of 6 simultaneous key presses. It goes with idea #334 - the keyboard glove. The sixth key is to do with clever use of the thumb on a balled fist.
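For what it’s worth, six keys give 2^6 - 1 = 63 possible chords, comfortably more than the letters and digits. Here’s a purely illustrative Python sketch - the key names and the character mapping are made up, not part of the original idea:

```python
# Illustrative chord-keyboard sketch: each character is a combination of
# simultaneous key presses. Six keys give 2**6 - 1 = 63 non-empty chords.
from itertools import combinations
import string

keys = ["thumb", "index", "middle", "ring", "little", "fist-thumb"]  # hypothetical labels

# Enumerate chords from smallest to largest and assign characters in order.
chords = [c for size in range(1, len(keys) + 1) for c in combinations(keys, size)]
alphabet = string.ascii_lowercase + string.digits  # 36 characters to encode

chord_map = dict(zip(alphabet, chords))  # arbitrary assignment, single presses first
print(f"{len(chords)} chords available for {len(alphabet)} characters")
print("'a' ->", chord_map["a"])
```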
TBH it would probably lead to an RSI nightmare, but it passed the time :)

Next: Idea #241, the rubber pavement…
11-20 – Next year’s must-have gadget: Samsung SPH-P9000
Next year’s must-have gadget: Samsung SPH-P9000
Still waiting for the specs but just looking at the picture, this has to be top of the 2007 must-have gadget list: the Samsung SPH-P9000. It’s an ultra-mobile PC with 4G (or is that 5G) capabilities. Of course the big question has to be - how well does it handle voice recognition?
Thanks Jo at Silicon for the gen, and the pic.

11-22 – That was short - and to the point
That was short - and to the point
Seth Godin’s post about the demise of traditional TV. Short and sweet:
“That was quick: Helene points us to this press release from CBS in which they are touting how well they’re doing on YouTube, including a glowing quote from a YouTube VP. Think about that for a second.”
Can I use the “Waiting for Godin” pun yet? No, I suppose not ;)
11-28 – What a nice man - Mike Oldfield's Changeling
What a nice man - Mike Oldfield’s Changeling
From yesterday’s Publishing News:
Oldfield to donate autobiography money
ROCK GUITARIST MIKE Oldfield, who is writing his autobiography for Virgin, has announced that he will donate all proceeds from the book for the first two years to the mental health charity SANE. On publication he will also auction the guitar that was used on his classic Seventies album Tubular Bells, in aid of the charity. The phenomenal success of Tubular Bells led to mental health problems for Oldfield who commented: “For some time I have wanted to tell my story, particularly the dark and difficult times I went through when I was making my early albums. This book is my way of off-loading the past, and I hope it will help others as they face up to challenges in their lives.” Virgin will publish Changeling: The Autobiography of Mike Oldfield in May 2007.
What a thoroughly generous gesture - I’m all for the idea of course, though I’m sure this will raise more than I ever could. And yes, I can confirm that the book really is nearing completion!
December 2006
12-04 – It's not your computer
It’s not your computer
When things aren’t right in IT, it can be quite difficult to put your finger on exactly what is wrong. A week or so ago, a helpful blogger put a clip from The Office on Youtube, about how IT and the business relate. I’ve never been a great fan of The Office, partially because I find it a too-painful reminder of certain past experiences… but this particular scene absolutely summed up one of the things we’re up against. Here’s the transcript:
“What you doing with my computer?” “It’s not your computer is it? It’s Wernham Hogg’s.” “Right. What you doing with Wernham Hogg’s computer?” “You don’t need to know.” “No I don’t need to know but could you tell me anyway?” “I’m installing a firewall.” “OK what’s that?” “It protects your computer against script kiddies, data collectors, viruses, worms and trojan horses and it limits your outbound internet communications. Any more questions?” “Yes. How long will it take?” “Why? Do you want to do it yourself?” “No, I can’t do it myself. How long will it take you out of interest?” “It will take as long as it takes.” “Right, er, how long did it take last time when-” “It’s done.” “Right thank you.” “Now I’m gonna switch it off, when it comes back on it’ll ask you to hit yes, no or cancel. Hit cancel. Do not hit yes or no.” “Right.” “Did you hear what I said?” “Yep.” “What did I say?” “Hit cancel.” “Good.” “Thanks.”
Priceless. If anyone can now find the original clip, I’ll put that up as well.
12-08 – Maintenance, innovation and half-baked pies
Maintenance, innovation and half-baked pies
I’m getting just a little bit bored of a certain slide that seems to appear in every single IT vendor’s enterprise pitch at the moment. It’s the one with the pie chart - where about 70% of the pie is allocated to “Maintenance” and about 30% relates to “Innovation”. The theory is that CIOs are looking to reduce the amount they spend on the former, so they can free up resource for the latter.
On the surface, this appears all well and good. Scratch a little beneath the glaze however, and things become far less simple:
- Few companies have a clear idea of the size of their own pie. In the discussions we have had around the book, IT executives have been telling us how difficult it can be to get a clear picture of spend, for new technology projects or for maintenance of older systems. This is true particularly if IT responsibility is devolved to the lines of business.
- In similar discussions, we are told that projects are more and more being driven by quite stringent business cases. While the budget totals may add up to the 30%, this is because the line has to be drawn, rather than any “here’s a piece of pie” view.
- To expand on this, organisations that see themselves as technology-driven are looking to the business benefit of any technology as well as looking at its intrinsic cost - more of a whole-meal view.
- Perhaps for these reasons, the pie itself is shrinking. As discussed by Nicholas Carr a few weeks ago, IT budgets are reducing, and the maintenance side is coming down faster than the innovation side.
- Finally, what does an innovation become, the day after deployment? Why, maintenance of course. There are plenty of new projects going on that are in fact replacing older systems with updated versions - as illustrated by Dale Vile’s recent SAP post.
The pie analogy as it stands is not completely wrong, but it is over-used and simplistic. Where it may be true is that the CFO says to the CIO, “We’re not going to give you any new money for project X, you’re going to have to fund it yourself.” In which case of course, it is inevitable that money needs to come out of one part of the budget, to shore up the other.
However, the suggestion that one side of the pie is shrinking and the other is growing, is a leap too far. It is also a dangerous starting point, suggests my colleague Neil Macehiter: “The key point is that innovation without a stable foundation where you understand your key assets, their costs and the value they add to the business, will mean that the only thing you innovate is chaos. If you simply shift budget, without stabilising and consolidating the foundation, you’re heading for trouble.”
So, what’s the alternative? Rather than drawing the line pre-and post-deployment, a better place to start is to distinguish between IT investments that relate to non-differentiating parts of the business, and IT investments that help the organisation differentiate itself from the competition. Of course organisations will still have to work out what IT they have, and where it adds value; but if the goal is to rob Peter to pay Paul, it is a far better approach to drive costs out of the non-differentiating parts of IT so that the differentiating parts can be funded, extended and improved upon, whether they be in maintenance or otherwise.
12-08 – "You put them in the bin, Mr Collins"
“You put them in the bin, Mr Collins”
It is difficult to avoid pinning all of the blame for shoddy delivery on the IT department. Of course, everyone should take their share of responsibility but I am reminded of the problems I used to have as an IT manager, due to the complex and bureaucratic procurement systems in place. No order could be under a few hundred pounds, as it cost at least that to get a purchase through; even when successful, it would invariably take weeks unless it was rubber-stamped at the highest level.
Still, it could have been worse, had the bureaucrats had it all their own way. I remember when I started in the job, I stared in trepidation at the forms I had to fill in on receipt of even the simplest of orders. They were long, complex and - worst of all - they were in sextuplicate, each of the six copies having to go to different departments, most of which I had never even heard of.
Having struggled to work out what to do on my own for a few days, I eventually plucked up the courage and called up the accounts department.
“What do I do with all these forms?” I asked, in my pidgin French.
“Ah, Mr Collins,” came the all-very-understanding answer. “The first, yellow copy you keep; the green copy you send back to me, and the red copy, you give back to the people in goods-in to confirm receipt.”
“Thank you,” I said, “but what do I do about the other copies?”
The accounts manager became even more understanding, as if she could detect my suppressed panic over the phone.
“You put them in the bin, Mr Collins,” she said.
It was a small victory over bureaucracy, but distinctly pleasant nonetheless.
2007
Posts from 2007.
January 2007
01-21 – Happy New Year!
Happy New Year!
As Christmas vanishes inexorably into the past, like the dot on the TV screen (get the allusion in quick, before flat screens make the dot a thing of the past as well), 2007 begins and it’s all change, change, change. I’m moving companies from one set of buddies to another, books are finished and I’m planning all kinds of exciting things for the year - we’ll see which ones come off! More soon… no, really!
01-23 – Just back from the Architects Council
Just back from the Architects Council
The past couple of days, I’ve been very lucky to have attended the Microsoft-hosted UK Architects Council, which brings together a number of quite senior architects and CTOs from a broad range of companies, public bodies and software and service providers. It was my first opportunity to present the 6 principles in their entirety, so I was ever so slightly nervous as I took to the podium. However, I was reasonably comfortable that the audience were exactly the kind of “we get it” people with whom the core messages would resonate. Or at least should, unless I messed up!
As things turned out, the reception was warm, the feedback highly useful, and a number of interesting debates were sparked off. Following the run-through of the principles themselves, I hosted a workshop session in which the attendees divided into three groups to consider Enterprise Architecture (EA) in the context of the Technology Garden. I wanted to know the answers to three questions:
- What were the potential ways that EA could help deliver IT-Business alignment, and how should these be articulated?
- What are the challenges faced by organisations trying to deliver EA?
- What actions need to be undertaken to maximise the chances of EA success?
The results are shown in the mind map below. One of the key take-away messages for me was how important EA is seen to be, when trying to enable IT people to understand what the business actually does. Equally, however, this will inevitably involve treading on toes, on both the IT and business sides of the house. If organisations want to reap the benefits of an improved dialogue between the two sides, they will have to consider how they counter the inertia of past practices and behaviours.
By the way I meant what I said by ‘lucky’, as this really was a top-class forum. Day 2 was spent discussing how to raise the level of IT skills in the UK. More on this – and another mind map – soon!
February 2007
02-03 – Get a first life
Get a first life
I was tickled pink by this link, forwarded to me by Neil W-D. On the day after they talk about it at the Davos summit, it seems appropriate…
… meanwhile, here’s a few random predictions:
* second life is a precursor to something really good, that is both usable and compelling
* Microsoft brings out its own virtual world, which doesn’t do very well
* online communities are taken to court under the new US gambling legislation
You heard it here first :)
02-03 – Script for a Jester's Tear, Riffs Bar, 27/01/2007
Script for a Jester's Tear, Riffs Bar, 27/01/2007
Surely the best way to judge a gig is by looking at the crowd, and nobody could question the level of enjoyment felt by those leaving Riff’s bar last Saturday evening, having just seen Mick Pointer and his friends perform a rendition of Script for a Jester’s Tear. Nick Barrett was on guitar and John Jones took on the vocals, aided and abetted by a word-perfect audience (incorporating, of course, several Norwegians). It was the first time for many years that songs like The Web had been played live, and if it turns out to be the last, it will have been a fine note to finish on.
Support was by Holly Petrie, who can be found on Myspace.
02-05 – Start 'em young
Start ’em young
Another of the sessions at the Architects Council was a workshop held between members of academia and the council members (largely from industry), to report on last year’s “Developing the Future” initiative, and to discuss the quality of people coming into the UK IT industry and how it could be improved. This was a hugely valid and valuable discussion - particularly in the context of IT/business alignment. Undoubtedly there will be issues in other countries, but there does seem to be a dearth of high-quality post-graduate trainees in the UK who really “get” what IT is for - to the extent that UK companies are looking abroad for skilled staff not just because people are cheaper, but because, quite simply, they are seen as better.
The mind map below shows the outcomes of the discussions, in terms of both the challenges and the resulting requirements. I thought the school system came in for a lot of stick, which was unfortunate as (or perhaps it was because) there was nobody in the room to defend it!
02-10 – Marillion - Port Zelande, Netherlands, 02/02/2007
Marillion - Port Zelande, Netherlands, 02/02/2007
Well, this was a bit of a cock-up on my part - I had booked tickets for the Marillion Weekend without taking too much note of the dates… and to my horror, (daughter) Sophie’s birthday was slap in the middle of it! Given that I’d booked, I still felt it was worth going for a couple of nights, not least to see This Strange Engine played, and to meet up with some of my best, yet normally all-too-distant friends.
It’s funny, one would think one could get bored of seeing certain bands play, but all they had to do was start for me to be reminded that it was all about the music. The energy of the 2,500-strong crowd was electric, and when they started playing, first some new songs and then the album, the atmosphere was just amazing. I wouldn’t go as far as saying it was one of the best Marillion performances I have ever seen, but it is certainly the best one I can remember!
Leaving on Saturday (thanks so much for the lift Mark and Ken) was not without issue - I left my phone behind, but it did manage to have some fun without me… priceless :)
March 2007
03-28 – What characterises a service?
What characterises a service?
A couple of weeks ago I was party to another event, this time hosted by IBM. At short notice I was asked to facilitate a session on defining services, which was interesting in the extreme as it very quickly became clear in the earlier sessions that there was no clear definition of what a service was – particularly as there were two types of people in the room – business analysts and technical architects.
So, I decided to take a different tack. Rather than trying to fix a definition of service, I thought we would go round the room and ask what characterised a “good” service. Here’s what we came up with:
- Value – the benefits of accessing a service should outweigh its costs
- Reusability – it should be possible to access the service repeatedly, with the same level of interaction and service quality
- Meaningfulness – it should be possible to describe the service in clear, relevant terms
- Autonomy – the service should be cohesive, i.e. clearly bounded
- Independence – the service should also minimise dependencies with other services
- Contract – the service should offer its own guarantees in terms of what it delivers, and how: such terms should be subject to prior acceptance by the service user
- Uniqueness – the service should minimise overlaps with other services
With hindsight, there is possibly some honing that could be done with the above list – the difference between “Autonomy” and “Independence” for example, is not all that clear. What was interesting however, was that even as the debate raged around what a service should look like, there was relatively little controversy about what separated the wheat from the chaff. For organisations looking to develop their own service strategies, this would appear to be a good place to start.
April 2007
04-04 – Rush - Chemistry now available in German
Rush - Chemistry now available in German
Well, this came as something of a surprise to me - I didn’t even know it was being translated! But then, a copy dropped on the doormat this morning. While I prefer the original cover, the graphic is quite clever even if it does give Alex a black eye. Inside, better print quality and bigger pictures are a considerable improvement, but still no colour.
More information, as ever, at Amazon - which has some good quality pictures, better, dare I say, than in the book itself.
June 2007
06-30 – Phew! What an upgrade!
Phew! What an upgrade!
Well I’ve just upgraded to the latest version of Wordpress. I won’t bore you with (i.e. I can’t be bothered to list out) the entire blow-by-blow account, but here are the lessons learned:
- Check your ISP can support it - in particular MySQL 4.0 and above. There was a helpful message at the end of the installation process to this effect, at which point I found I didn’t have it. Which brings me to:
- Absolutely, definitely, completely do make sure you back things up first! Take an FTP dump of the Wordpress tree, plus a MySQL export - that should be enough to be sure you can revert to the previous version (a rough sketch of this step follows below).
- Possibly test the restore process before committing - I had the rather alarming message to the effect that MySQL could only re-import export files less than 2 Meg in size. Phew - mine was.
- When upgrading, follow the instructions on how to copy the directory tree. Don’t do what I did and overwrite the wrong bits (refer back to “make sure you back things up” :) )
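As promised in the list above, here’s a rough sketch of the backup step - it assumes shell access and the standard mysqldump and tar tools, and the database name, user and paths are placeholders rather than my real settings (on shared hosting without shell access, an FTP download plus a phpMyAdmin export does the same job):

```python
# Rough pre-upgrade backup sketch: dump the database and archive the file tree.
# Credentials and paths below are placeholders, not real settings.
import datetime
import subprocess

stamp = datetime.date.today().isoformat()
db_name, db_user = "wordpress", "wp_user"  # hypothetical

# Database export (with a bare -p, mysqldump prompts for the password).
with open(f"wordpress-db-{stamp}.sql", "w") as out:
    subprocess.run(["mysqldump", "-u", db_user, "-p", db_name], stdout=out, check=True)

# Archive the WordPress directory tree alongside it.
subprocess.run(["tar", "czf", f"wordpress-files-{stamp}.tar.gz", "wordpress/"], check=True)
```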
That’s probably it - apart from a few hiccups the whole process was remarkably smooth. I’d like to tip my hat to the folks at Easily who were both highly responsive in tech support (thanks Francois!) and who did the upgrade when they said they would.
There, you never know I might even start posting again!
P.S. Just noticed I’m getting an SQL error in my links listing - oh well, I’d better sort that as well.
July 2007
07-05 – Not quite a job
Not quite a job
Greetings from Mallorca. Or Majorca, depending on who you are talking to. I’ve been invited here to speak to some senior IT guys, and my chosen topic is (unsurprisingly) The Technology Garden. They’re just setting the room up now, but I was able to wake up and feel the sun on my face - which could be the only time this year judging by the UK weather.
07-06 – Test Post from Live Writer
Test Post from Live Writer
I should probably delete this post as soon as I type it, but equally, I probably won’t. Following a recommendation from goodman Governor, I thought I’d give Live Writer a try. Not bad so far - I hear Adobe has a competing blog posting tool, but there’s one major difference - this one is free!
Oh, and one of our chickens died yesterday.
07-08 – The Grossest thing I have ever done...
The Grossest thing I have ever done…
… was yesterday, when I ran over an already dead hedgehog. With a lawnmower.
07-12 – Hotel top tips #182 - working a mixer shower
Hotel top tips #182 - working a mixer shower
If faced with a mixer shower that needs to be operated using the bath taps, first turn on the hot. Then, add the cold until it arrives at a suitable temperature. Then pull the knob to send the water to the shower head - and hey presto! A perfectly regulated shower.
Brought to you from the Horse Guards Parade, London. Grand but a tiny bit shabby, with a nice view of the Eye.
07-12 – InstantRails in Vista Ultimate
InstantRails in Vista Ultimate
OK, so I couldn’t find this anywhere on the Web so I thought I’d write it here. If (when) you get the message “Either Apache or MySQL cannot run because another program is using it’s port”, this is probably because you already have a web server running. I don’t know if this is IIS, but if you run through the Getting Started section starting “Note that if you have IIS installed…”, that stuff works - or at least it did for me.
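If you want to check for yourself whether something is already sitting on the two ports before blaming InstantRails, here’s a quick, generic Python sketch (nothing InstantRails-specific about it):

```python
# Quick check of whether anything is already listening on the ports Apache (80)
# and MySQL (3306) want - a successful connection means another server got there first.
import socket

for port in (80, 3306):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
    print(f"port {port}: {'in use' if in_use else 'free'}")
```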
07-12 – What’s wrong with being pro-Gartner anyway?
What’s wrong with being pro-Gartner anyway?
“Interesting” question: is ARmageddon pro-Gartner, or anti-Gartner? I wonder if this whole line of thinking is missing the point. I mean, I know I’d rather have a bigger slice of all that subscription funding - they are the 800-pound gorilla, after all - but is it so wrong to think, or indeed say, that Gartner might do at least some good things? I’ve seen a few magic quadrants in my time, and some of them are pretty well thought out, solid pieces of analysis that raise a bunch of seriously important questions that end-user organisations should be asking.
Of course, that doesn’t mean everything they do is going to hit the target - and of course therefore, they should be subject to scrutiny - just like the rest of us. It’s also been written that some vendors feel they have to pay Gartner’s fees before they’ll ever see themselves represented in the quadrants - rightly or wrongly - I know Gartner hotly contests this! It may even be that Gartner’s product-oriented model is itself based on an industry as it was ten years ago, and not how it will be in the future - but that’s an industry-wide issue, and it doesn’t prevent Gartner analysts from being insightful in their own domains.
Meanwhile, we believe we have a whole bunch of differentiators that make us a pretty attractive alternative - always happy to share these! But perhaps it’s just too easy to bash Gartner because it’s Gartner, which equates to opinion, not analysis. The only people who can really decide whether or not Gartner is adding value are their enterprise customers, and that’s not a revenue stream I see drying up any time soon.
07-15 – Taste Test: Cotswold 3-8 vs Summer Breeze
Taste Test: Cotswold 3-8 vs Summer Breeze
At the Cotswold Show I was invited to compare a recent beer purchase of George Gale’s Summer Breeze, with one of our local brews - Three-Eight from the Cotswold Brewing Co. Both are 3.8% alcohol, and both include Saaz hops - but the major difference perhaps is that one - the Cotswold - is a lager, while the other is an ale.
And the verdict is - they’re both jolly good. While the Summer Breeze is a fine beer however, the Three-Eight has the advantage of combining the coolness associated with a lager, with a hint of the rounded charm of an ale. This may be down to the ingredients - to be honest I have absolutely no idea - but if I were to choose between the two on a summer’s day I would probably go for the Cotswold.
I might tell a different story later on: as the balm of the day takes on the slight chill of the evening, I might be glad of the warmer character of the Summer Breeze. For now, however, it is the Cotswold Three-Eight that has my vote.
07-16 – Artists as Businesspeople? Whatever Next
Artists as Businesspeople? Whatever Next
As I sit and listen to Prince’s new album (included as a cover disk on yesterday’s Sunday Mail), I’m forced to ask myself about this “industry first”. While the man previously known as the man with no name may have stolen a march with the act, he’s not the first to have achieved the outcome.
Prince reputedly received a million-dollar sum for allowing his latest release to be issued in this way. Now, given that producing an album costs hundreds of thousands, in essence he will have been able to cover his recording costs. I might be assuming too much here, but the single most important benefit is artistic freedom.
Marillion went down a similar track when they invited their fans to pre-order an album before it was written, but while they may have been first with the Internet marketing idea, again, they are unlikely to be the first band to release themselves from the shackles of a contract by finding money outside the recording biz. No doubt, as well, there will be other initiatives.
What both of these examples share is that the artists have minimised sales risk with a non-refundable advance. In neither case is artistic integrity compromised, and both rely on thinking about the bigger picture of sales and marketing to ensure that they’re doing more than covering the costs.
There will be other ways of doing this - no doubt in ten years’ time we will look back on “firsts” of albums being paid for through government funding, bank loans, lottery wins and Google ads. What with Myspace as the incubator, and with artists understanding they stand to be just as successful (and potentially better off) without major label backing, it becomes less and less clear exactly how the music industry is going to retain any position it has left.
P.S. The album’s pretty good as well!
August 2007
08-29 – The downside of open communities?
The downside of open communities?
What fun. Several current blog roads lead to this post, one man’s rant against being contacted by PR agencies just because his blog is seen as influential. Leaves me a bit flummoxed, not so much from the “what did you expect, when you’re famous” angle, but more from the whole nature of Internet-based social activities. Fact is, if you want to swim with the fishes, you’re also going to be in with the sharks, octopuses (octopi?) and plankton. To suggest things are otherwise is fundamentally missing the point of open communities.
September 2007
09-16 – Deeper meanings?
Deeper meanings?
Due to a sequence of circumstances I have ended up reading four books in parallel. At first glance they are unconnected:
Derren Brown - Tricks of the Mind
Richard Dawkins - The God Delusion
M. Scott Peck - The Road Less Travelled and Beyond
Christopher Booker - The Seven Basic Plots
At first glance these were totally unconnected, apart from being non-fiction and vaguely esoteric that is. Delve a little deeper and two are people wanting to debunk myths, one is explaining their roots and the other is exploring that part of the human psyche that needs them. Derren was a staunch Christian who is now keen to reveal all things “magical” as mere (though still clever) trickery, Richard is keen to state that magic is how we see things we don’t understand, and Scott thinks we could all do with a bit more of it. Christopher won’t be the last to point out how we love certain types of stories, but whether they fulfil a purely human or a spiritual need is a moot point. I’ll let you know in, ooh, about 3,000 pages!
How did I end up reading them all at once? Who knows, perhaps it was meant ;-)
09-18 – This morning I shall mostly be...
This morning I shall mostly be…
Installing Ubuntu Linux in a Microsoft VirtualPC virtual machine. And some other things besides.
For my own future reference as much as anything, I had to:
* install in safe graphics mode so that the display was viewable during install
* reduce the colour depth to 16 bit (booting in recovery mode and running “dpkg-reconfigure xserver-xorg”)
* add the parameter “i8042.noloop” to the kernel command in the file /boot/grub/menu.lst so the mouse can work
* log in as root and run “dhclient” to pick up an IP address from my wireless router
* (update) added “snd-sb16” to the file /etc/modules, to enable sound
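For completeness, here’s roughly what those tweaks boil down to at the command line - a sketch from memory rather than a copy-and-paste, so paths, module names and the exact kernel line will vary by release:
# from the recovery console, as root:
dpkg-reconfigure xserver-xorg         # drop the colour depth to 16 bit
echo "snd-sb16" >> /etc/modules       # load the SoundBlaster module at boot, for sound
dhclient                              # pick up an IP address from the wireless router
# and in /boot/grub/menu.lst, append i8042.noloop to the kernel line, e.g.:
#   kernel /boot/vmlinuz-2.6.x root=/dev/hda1 ro quiet splash i8042.noloop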
There - so easy my granny could do it :-) I then downloaded all the patches (180MB of them) to be up to date.
First impression - not too shabby! I’m running 2GB of RAM, so I’ve allocated 500MB to Ubuntu and it’s running fine. Nice, clean interface as well - and plenty of games!
09-20 – Circular arguments on Analysts from the Fool
Circular arguments on Analysts from the Fool
Took this from the Motley Fool:
“The current model of analyst-intermediated, opinion-based technology buying and selling produces poor financial returns. Across the world, corporate managers spend more than $1 trillion a year on technology. To help managers invest, and technology marketers persuade, $2 billion a year is spent on technology analysts. Yet 70% of projects fail to deliver a financial return, according to the Standish Group and other commentators.”
That’s an interesting, but not particularly insightful point. IT may produce poor financial returns but that doesn’t mean Gartner, Forrester etc are failing their shareholders. Also, what is the Standish Group other than an analyst house? And finally, what of the other 30%? Should we all go back to pen and paper, and blow the Internet? I don’t think so.
09-20 – Not so massive
Not so massive
Well, I finally got round to listening to Massive Attack. Proof (should we need it) that another man’s meat is this man’s poison. I’m sure it’s very good, if you’re into that sort of thing.
Update: Hold that thought… now playing: Teardrop
09-20 – "This is a business line..."
“This is a business line…”
…is the best brush-off I know for unsolicited callers. As it happens, it’s true - BT has messed up and put my business number in the home phone book. When the callers come however, all I have to do is say the above and they immediately hang up. Superb!
October 2007
10-03 – First impressions of Mumbai
First impressions of Mumbai
I’ve been invited to India to give a presentation at a conference tomorrow, so I flew in 36 hours early to see what I could learn from the place. Having arrived yesterday lunchtime (it’s now 7am the next morning), I thought I should make some notes because experiences may well end up overwritten by today’s events. So, here goes:
- Jet Airways - very comfortable flight from LHR, even managed to get some sleep (however, recommendation: never fly with half-finished dental work)
- Arrival at airport and exchanged 35 quid to 2700 rupees. It looked like a lot of money (and as things turned out, it is). Met by B.D., the conference organiser who took me to the taxi rank, past a row of ancient-looking black cars in various states of dilapidation. No, wait, that is the taxi rank. We get in and I look round for seat belt. Denied.
- The journey offers an immediate presentation of the complexities and contradictions of India. At least I think it does, not having been here long enough. In the road sit some women, a girl picking nits from her mother’s hair. We pass the conference centre, very posh, and almost immediately a set of slum dwellings by the side of the road. Most of the roadside is covered with stalls and shops, and people sit, squat or lie just about anywhere, many asleep in the noon sun.
- Hotel Highway Inn is unremarkable but comfortable, friendly staff. Set back from the road right in the middle of an area I would have baulked at staying in, if it hadn’t been pre-booked. Oh well, in for the experience I think, clutching my laptop just a little too tightly to my side. Room has shower, TV, and a whopping fan on the ceiling. And a noisy but serviceable aircon unit. There is no Internet access, when I enquire I’m sent to a small shop across the way, which does indeed have access for a single computer but no laptop connectivity. Even the phrase “laptop connectivity” is starting to sound a little alien. I buy a bottle of lemonade and return to the hotel, planning to head into town where I am sure there’ll be the usual Starbucks, Wifi, street painters, book shops… hang on, where exactly did I think I was again???
- I enquire at reception for a map, and I ask how to get to the Gateway to India - one of the first places that popped up on the tourist sites on the Blackberry (not totally unconnected then), and incidentally, where the last ships set sail for Britain as India was decolonialised, or whatever the word is. Incidentally, today is Gandhi day, for added poignancy and (so I am told) less traffic. I assume - wrongly of course - that the tourist areas will have such facilities as the average incomer (me) might expect. i.e. Wifi. The receptionist, in broken English, firmly steers me away from attempting to use the train system, and so instead I get in another taxi.
I should at this point stress that I know I was singularly unprepared for this trip, for a whole stack of reasons. The unpreparedness signalled itself in a number of forms, not least my usual catch-up-on-email-on-plane-ready-to-upload-when-I-arrive habit, which was why I was rather hooked on the idea of finding a Wifi signal. I’d already established I could get GPRS but there were some rather large file transfers waiting to take place and given wireless roaming, I didn’t want to get back to find a bill for more than the cost of the flight. Besides, it gave me something to focus on, so into town I headed.
The round trip journey was about 5 hours, spent in taxis, motorised rickshaws (Italian scooters on steroids) and walking. I was offered drugs twice, women once, and I had a man grab my ear as if that would make me more likely to want him to drill it with a piece of not-so-sterilised metal. I visited two tailors who made me feel very uncomfortable about the fact I didn’t - no, I really didn’t - need a suit right now. Down a dark back alley full of cats and stinking of piss I found, or rather was shown, an air-conditioned oasis of computers, believe it or not with Wifi access, 10 rupees for 20 minutes, which equates to not very much at all. I gave money to a sunken-cheeked man who really looked like he needed it, then denied it to another man who whipped up tears, not knowing who was telling the truth and who wasn’t. New development everywhere - to the extent that it all looks ramshackle, like a Frenchman’s house when he is halfway through his renovations.
I agreed to meet with the same taxi driver 2 hours after I left him, ended up arriving back early and waiting an hour, only for him not to turn up and me wondering whether I’d been the idiot for agreeing, or was seen as the idiot for not communicating my request properly. There’s more, so much more, tidbits of experience that would bore anyone stupid if I tried to list them, fragments of understanding that leave me none the wiser but nonetheless feeling I had learned something.
Back at the hotel. I buy a local SIM so at least I can get some GPRS access, with an international calling card. Out for a meal, chat to diamond seller sitting opposite. We talk about contradictions, call centres, success and failure, the new riches of India making it harder to be poor as the cost of living rises. The value of property in this area has doubled, he tells me, due to a new microtrain that is to pass straight down the main road. And then, I go back to the hotel and sleep, unencumbered by the noise of the aircon or fan, which just about drown out the street outside.
This morning, a shower, and I write this. Today I meet with B.D. and then I visit some of Mitul Mehta’s colleagues at TekPlus. First I need to find some food. Let the day begin.
10-05 – An Email Day
An Email Day
Thanks to the long-haul flight back from Mumbai, I have now spent exactly 8 hours doing nothing but email. That’s responding to emails I’ve been sent, completing actions (e.g. report reviews) and emailing the responses, or replying to people I have recently met and doing whatever action was scrawled on their business cards.
Quite stunning, how it should all take that long. If anyone’s still waiting for anything from me, it’s probably not going to happen so don’t hold your breath.
10-05 – Clubbing - the LinkedIn way
Clubbing - the LinkedIn way
As a test of the LinkedIn “Ask a question” mechanism and in response to a genuine need, I recently asked my network of connections what they thought about private clubs in London. I really had no idea - but having been stung for £250 one night I had this thought that there had to be cheap yet comfortable places to stay, if only I knew about them.
To my delight, I received no fewer than 10 responses, each from a different perspective. They ranged from the cheap and comfortable (just what I was looking for in fact), to the kinds of places that needed references before they’d give you a look-in, which is incidentally exactly where something like LinkedIn comes in extremely handy. Among the pointers were also links to useful web sites, such as www.travelstay.com for budget hotels. Several people also suggested block booking for a number of nights per year, which enabled rates to be negotiated. And finally, an old friend suggested we meet up at his favoured joint for a beer next time I’m up.
So all in all, a rather good use of my time!
10-06 – Second thoughts about Mumbai
Second thoughts about Mumbai
I was only in Mumbai for 3 days as I had to get back for a funeral (yesterday), which was a shame, but it was long enough to make a lasting impression. My first thoughts were very much from the perspective of a westernised rabbit finding himself in the headlights of another culture, so rather than attempting to rewrite them, here’s a snippet of an email I wrote to an Indian friend.
- The general feeling of industriousness was telling. In England there have been examples of people feeling threatened by incomers to the UK who work far too hard compared with their western peers. Catching a glimpse of the other side of the fence, where such effort appears absolutely the norm and not the exception, really put things into perspective for me. Meeting the staff at TekPlus was great, 10 MBA university graduates all with such drive and enthusiasm! India has so much to offer, and very clearly, it will be a major economic power in the future (in some areas of course, it already is).
- The amount of construction work going on was stunning. One day I took a rickshaw to Andheri (west) from where I was staying, in Andheri (East), and saw plenty of new buildings going up; on another day I headed south in a taxi to the Gateway of India, and saw a great deal of development as well on that journey. I understand in some areas of Mumbai, property can cost as much as in Manhattan.
- I had some good conversations, for example with a diamond seller with whom I shared a table at the local restaurant. He was saying how it was difficult for the poor, as successful businesses and people were getting richer, pushing prices up beyond what poorer people could pay. Square footage is doubling in price where I was staying, for example, due to a new micro-train being planned.
- The seeming contradictions between rich and poor, as both rub shoulders, were quite a surprise to my untrained eye. This was entirely down to my perceptions of course, but to see people from all walks of life going about their own business right next to each other was very different to how things are generally in the West (where we like to partition things up, and there is much fear and resentment). I was staying in Andheri (East), near the railway station - so it was certainly not a “sanitised” tourist resort but equally, I had aircon in my hotel and a hot shower, which I took to be a luxury. Many people were sleeping on the street outside.
- The media - I was an avid reader of newspapers while I was there - gave me a great deal of insight as well, both into cultural differences and local issues. In general I would read with interest something differently presented to here (“Guidelines for hugging in the workplace,” for example), and then almost immediately think of several examples of similar contradictions in my own culture. Interestingly though, I did find (in the papers and during the days) more examples of cultural alignment than I found differences, which helped make me feel quite comfortable wherever I was.
To my surprise I was not daunted by the squalor in various places but neither was I unaffected by it, nor the bustle and the noise. Overall, I found people very welcoming, accepting and helpful, and I didn’t feel particularly threatened. The obvious question of course is, “why should I be?” but then, it was my first time in a very new place, by myself, where I really did feel I stuck out like a sore thumb. It won’t be my last - I have already been invited to come and speak again at another conference, and also to be a visiting lecturer at a University in South India for a couple of weeks. We’ll have to see what develops but equally, I’m very much looking forward to going back and I have no doubt I shall be spending a lot of time in India in the future.
Looks like I wasn’t the only person writing about travelling in India last week, I defer to the greater experience and I’ll have to read the book!
10-08 – From Alphadoku to Alignment
From Alphadoku to Alignment
I’ve been meaning to write this for a while, but as usual I couldn’t find the time to write the full story. So I won’t, but at least I’ll give the highlights! It just struck me as an interesting example of how connections yield, well, connections, and potentially results.
A long time ago, I’d been thinking about writing a book about how to do IT right. Not that I had all the answers, but on my travels I’ve met plenty of people who have suitably inspired me, and I wanted to distil their collective knowledge in some way. I even started writing bits of it, but it never really reached critical mass.
Meanwhile… one day I was feeling a bit sneaky. Sudoku was growing in popularity, and I realised it was only a matter of time before people brought out bigger versions. How about an alphabetic version, I thought, a 25x25 grid based on letters? It could be called - alphadoku! A quick check revealed that the web site www.alphadoku.com was free, so I, ahem, purchased it.
Almost immediately I felt guilty - how could I sit on a web site without doing something useful? So, I set about producing one of said puzzles. A little while later, it was done - and I posted it up. One day, I thought, I would get round to writing code that could autogenerate alphadoku puzzles (note: started, but never finished - yet!)
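(For what it’s worth, a valid filled 25x25 grid is actually the easy bit - the standard shifted-rows construction does it in a few lines of awk. The hard, unfinished part is shuffling rows, columns and letters and then removing clues so the puzzle stays uniquely solvable. A sketch only, nothing to do with whatever I half-wrote at the time:)
awk 'BEGIN {
  for (r = 0; r < 25; r++) {
    row = ""
    for (c = 0; c < 25; c++) {
      # each row, column and 5x5 box ends up with A..Y exactly once
      v = (5 * (r % 5) + int(r / 5) + c) % 25
      row = row sprintf("%c", 65 + v)
    }
    print row
  }
}'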
So, I left things as they stood.
A goodly handful of months later, I got a phone call from Wiley, the “for dummies” publishers. Was I interested in writing an “Alphadoku for Dummies”? Yes of course, I said. Unfortunately, Wiley went away to think about it some more, but when they came back they had decided the market was probably no longer in the ascendant - which may have been a good job considering my coding skills.
However, I did ask - “while you’re there, I’ve been thinking about this technology book - interested?” Perhaps out of their guilt this time, I was put in touch with the right people. Almost immediately I realised how crap the book would be if it was just from me - by no coincidence, I was at the time working with the guys at MWD, whose opinions I valued (and continue to value) enormously, as well as those of my past and present colleague Dale Vile. So, I proposed we jointly wrote the thing, and for better or worse, everyone agreed.
So, from registering a Web site, we have a book - “The Technology Garden”. I thoroughly recommend it of course, and it wouldn’t be a quarter of what it is without being a team effort between the four of us. Still, ain’t it interesting what unexpected acorns can grow into?
10-08 – Mobile experiences from Mumbai
Mobile experiences from Mumbai
Quite beside all my other experiences and insights derived from spending 3 days in Mumbai last week, I did have a good try with mobile technologies. Here’s a quick summary:
- mobile access via Blackberry - check (though I am feeling a bit nervous about what the bill is going to look like). Roaming service was EDGE provided on the most part by Hutch, now part of Vodafone
- mobile access on laptop via HSDPA modem - check, but I rarely dared use it for fear of roaming charges.
- there was no Internet access at my hotel despite it having been advertised. Indeed, the only Internet within walking distance was a computer in a nearby shop. I had to travel to get full-speed Internet on my laptop.
- many businesses had a “Walky” telephone, which was essentially a mobile phone for office use. Looked like a desk phone, no wires, big aerial.
I did purchase a local pre-pay SIM, so I now have my own Indian phone number which is a bit cool. Setup was straightforward for calls, and pretty cost-effective, about 2 rupees per minute for international calls, as long as I took out the international calling card option - which I did. Topups could be done just about anywhere, retailers who lacked terminals had a clever scheme where they could top up my phone from a phone of their own - now that’s what I call mobile! Strangely, despite being with Vodafone, the service showed up as “Orange” on my handset.
Unfortunately GPRS access could not be set up in time before I left the country, due to the time needed to process my ID. Oh well, hopefully this will be working for next time!
10-08 – Off to IASA
Off to IASA
I’m assisting Matt Deacon this evening at the IASA meeting in London. While I’m not an architect myself (“I’m a lover, not a fighter” springs to mind, though I have no idea why), I do get a lot from hooking into the people whose job it is to make IT work in their organisations. More than just keeping me grounded, it’s also a great environment to test ideas and find out what’s really going on.
10-10 – EMC buys Mozy - should we all be doing online backups now?
EMC buys Mozy - should we all be doing online backups now?
Online backups of desktops and laptops are such a no-brainer for so many small companies and individuals - aren’t they? Markets are all about supply and demand, so if this were true, there would be a mass of different options available. But there isn’t, which suggests that either the conventional wisdom is wrong or something that needs to be in place, just isn’t.
I started playing with online backups a while back, when I was introduced to the company Connected.com, way before it was acquired by Iron Mountain (a move that continues to flummox me). At the time the bottleneck on such capabilities was the available Internet network bandwidth, but then broadband arrived and took that problem away. I continued to use Connected.com for quite a while, but then after one laptop upgrade I never got round to reinstalling it. These days I’m running a RAID box in the spare room at the home office, and backing up our computers to that on a nightly basis. Hmm, no online backup. Why?
The main answer is probably that the quantity of data that is changing, despite broadband, exceeds the bandwidth available to back it up. For me this is about email - to keep some kind of control on my email load I make extensive use of offline folders, which are now several gig each in size. While Connected.com professed to do clever things with email data files, my personal backup windows were growing far too big. Meanwhile, personal photography and video capture habits are growing a large quantity of multimedia files, and to back them all up (40 gig and counting) would break the back of all but the most expensive online backup services, not to mention my ISP’s fair use policy.
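To put a rough number on it - purely illustrative figures, assuming a typical 512kbps upstream connection of the day:
# 40GB of photos and video at 512 kbit/s upstream, expressed in days of solid uploading
echo "40 * 8 * 1000 * 1000 / 512 / 86400" | bc    # roughly 7 days, before anything else uses the line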
Meanwhile, there is the question of usability. It pains me to acknowledge that many small organisations and people aren’t running backups - or at least, are facing the risk that if they have a hard disk crash, they could lose all of that customer data, or family snaps, or whatever. I really, dearly want this problem solved. While Connected.com was eminently usable by me, it still fell into the trap of assuming that the user was computer literate, and so could only ever be working with the minority.
And so, to Mozy. EMC’s acquisition is the first time (to my knowledge) that an online backup service has entered the fold of a major vendor whose business is based on effectively delivering storage solutions. I’ll let EMC blow its own trumpet on that one, but let’s face it, you would hope that if anyone can crack the code it would be a company that had set its core business on it!! Immediate caveat, no, I don’t believe EMC’s going to get it right just because they set their store (sic) on these things. However, one would hope they stand more of a chance than, say, manufacturers of washing machines, or indeed, companies that have built their businesses on providing secure offsite locations for holding large wads of paper and boxes of tapes.
To resolve the issues of bandwidth and usability together, Mozy needs to be delivered as an integral part of any small company’s information risk management strategy. I deliberately use this term rather than backup strategy, because let’s face it, backup is the answer, but not the question. Not all information was created equal, and not all information is subject to the same risks - so, given the fact that we have different ways of backing up and protecting information, we should be able to pick and choose which mechanism is appropriate for what type of information.
Looking specifically at usability, however, such gubbins needs to be kept under the bonnet. Consider a specific example - the emails I have sent over the past few days are quite likely to see responses (I hope), and I would probably like to refer back to them. Meanwhile, while I may want to refer to documents I created as part of older projects, the chances are I won’t want to change them. In terms of risks - the chances of a house fire in the next week are 52 times less than the chances of a fire over the next year (and potentially increasing further - if my son’s demands for fire poi are ever heeded). In other words, I would dearly love to know that the information I have just created is protected in some immediate way, and I am pleased to have my older data protected, but they don’t necessarily need the same mechanisms: if my recent data is backed up in-house that’s a pretty good start. I may change my mind if I’m working remotely from the office.
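In practice that needn’t be anything exotic. A sketch of the kind of split I mean, using cron and rsync purely as stand-ins for whatever tools are actually in place (the paths and the offsite host here are made up):
# nightly at 2am: recent, fast-changing data goes to the RAID box on the LAN
0 2 * * *  rsync -a --delete $HOME/Documents/ /mnt/raidbox/backup/documents/
# Sundays at 3am: the slow-changing archive heads offsite, where the weekly deltas
# are small enough not to swamp the broadband connection (or the fair use policy)
0 3 * * 0  rsync -a $HOME/Photos/ backup@offsite.example.com:archive/photos/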
The bottom line, for me anyway, is that Mozy is a feature, not a product. The product I’d like to see in small businesses is one which exists as a client on every computer, and which can then deliver a co-ordinated backup strategy that meets the needs of each individual. If disaster (of whatever form) should strike, then emphasis switches onto the usability of recovery tools, so that individual files (or indeed, the entire environment) can be rebuilt. Mozy by itself may offer a tool for individual use that can offer significant protection, particularly for IT-literate individuals with lower data transfer requirements or high bandwidth availability. For it to be sustainable into the future, and pass the “upgrade and reinstall test” it will need to become an integral part of the backup toolset, in-house and external.
10-10 – reCaptcha - Real text verification
reCaptcha - Real text verification
I’ve just installed this handy little widget on my personal blog. For those (anyone left besides me?) who hadn’t heard of it, reCaptcha is a word verification tool intended to guard against comment spam. I’ve only just installed it so its success is yet to be proven - however it does have a spin-off benefit. Text strings are taken from publications being scanned (currently) for the Internet Archive; this is what the reCaptcha guys have to say about it:
“reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly.”
I believe comment spam is still coming in through the back door - but I have Akismet to help with that (and thanks to those guys). Meanwhile, here’s a tool that is not only keeping my front door clean, it’s making a literary difference as well. Hurrah to that.
10-10 – Rush - Wembley Arena 10/10/2007
Rush - Wembley Arena 10/10/2007
Now the first thing I should say is that I’m still in “digestive recovery mode” following last week’s most excellent Mumbai adventure. So, I had to leave my seat more often than I would have liked… Equally, I’d bought a ticket at the back of the hall due entirely to the lateness of the decision - it was only a few weeks ago that I knew where I was going to be. Speaking of lateness, a burst water main in the Greenford area led to me arriving 20 minutes after the show had started.
So, perhaps unsurprisingly, I was feeling a little detached.
Things seemed to kick off quite slowly, like the band were going through the motions… but about half way through the first set they seemed to come alive, like someone turned on the lights. Or perhaps the lasers. Overall it was a good gig, a fine gig but maybe not a great gig, from my distant standpoint. The sound was reputedly much better than other shows so far on the UK leg of the tour, and the light show was superlative as always - I was left wondering how an arena could possibly be filled without such a thing. We idolise the bands, but where would any of them on these massive stages be without the lighting rigs?
From the gods, the view was of a nearly packed hall having a great time. Hands were waving, voices were singing along, and applause was forthcoming - particularly, it has to be said, for some of the old classics, but also for such songs as One Little Victory and for Neil’s drum solo. Personal highlights were Natural Science, Between the Wheels and Subdivisions, which will always take me right back to the Laserium Signals show, goodness knows how many years ago.
Preaching to the converted maybe, but then, why not.
10-10 – SAP blots out Business Objects
SAP blots out Business Objects
“Never comment about those things you know nothing about,” is the recommendation - so I won’t remark on SAP nor, particularly, about Business Objects, though I have had dealings with both at various times. Aside from questions about what BO means to SAP (maybe I should rephrase that), interesting to me was what this means for the wider market.
As I have written previously, information management and, for that matter, service management are like different tracks up the same mountain, with fissures in between caused largely by historical scenarios - even the separation between structured and unstructured data comes largely from the adoption of two types of (incompatible) data repository. The result is that we have, today, a number of larger companies that offer large-scale, solution type software, and meanwhile smaller vendor companies whose job it is to add functionality or enable integration between the bigger stuff.
There’s a whole stack of acquisitions we’re seeing right now, which is partly to do with the liquidity of the market (i.e. companies have cash to spend) but equally, there’s a consolidation phenomenon taking place. The big players - IBM, Oracle, EMC and now SAP - are all filling gaps in their own portfolios, like blotters, absorbing the smaller players as they go. This is healthy, particularly for larger companies that want to rely more closely on a smaller set of vendors; but also for the vendors themselves, who can look to deliver more integrated solution sets and service packages to go with them.
P.S. Still waiting for the HP acquisition of Opentext/Ixos! No I don’t have any insider knowledge, but to me it would be like a key fitting in a lock.
10-10 – The ITIL Placebo Process Effect
The ITIL Placebo Process Effect
Goodman Martin just forwarded to me a most excellent blog post about whether ITIL implementation could be replaced by a placebo, based on (say) astrology. Could it be done, and would it be effective? I’d have to answer yes, but that doesn’t diminish the value of ITIL itself.
Instead, what this comes down to is process. Management initiatives have been legion probably since Plato suggested everyone should get out of the cave (John Gray, eat your heart out). But what often yields success in management initiatives is the process that is worked through and delivered upon, rather than the specifics of the initiative itself.
In some ways this is similar to therapy. Some advocates of homeopathy say that a great deal of the benefit comes from having someone with whom one can share one’s problems, and the fact that one is told at the end to stick a couple of small, sugary pills under one’s tongue is just one element of the overall experience. It is a tricky one - because if it is ultimately seen to be true, it does suggest that snake oil salesmen were maybe not quite such confidence tricksters after all. And IT marketers… maybe that’s a step too far!
And so to ITIL implementations, which will undoubtedly require some of the following:
- strategy definition
- discovery of “what’s out there”
- reviews of existing processes
- interviews with key stakeholders
- training
- definition and use of metrics
These characteristics are not new, and they are not specific to ITIL. What they do offer however are opportunities to engage, to review and to update the organisation (IT or otherwise) on the latest incarnation of best practice. Right now, in IT management circles, this best practice revolves around ITIL - which, let’s face it, is a pretty good starting point. But even if ITIL is being picked up by an organisation in a faddish way, the process still offers such opportunities.
Of course, the question then becomes, how useful are activities such as those listed above? In my consultancy days, I can remember being brought in to work on business process modelling exercises, but when I went to interview key people, they would tell me it was the third time that year they’d been interviewed, for different initiatives. At which point, any such efforts become counterproductive.
Overall then, should we have an IT astrology improvement programme? Well, potentially. But only if it goes through a process that will make a difference to the organisation, all by itself. Indeed, an initiative without a correct process is probably no initiative at all.
10-11 – Information Management - three sides or three mountains?
Information Management - three sides or three mountains?
Yesterday I had a rather interesting conversation with Dale about how perspectives on information management can vary according to the provenance of the people involved. At the highest, most visionary level Information Management can be defined (and indeed, is, by IBM) as getting the right information to the right people at the right time. That’s simple enough to have people nodding sagely or shaking their heads in a “well, yes of course” kind of way.
While the “what” might be simple, the “how” can vary quite considerably. Essentially there are 3 camps:
- the Business Intelligence (BI) crowd - with a structured data background, these people discuss normalisation, cubes and master data.
- the Content Management (CM) crowd - historically working with documents, they talk in terms of taxonomies, workflow and rights management
- the collaboration crowd - coming up from file-based environments and delving into Intranets, for these it’s all about desktop access, email and office integration
While each group of people may be highly computer literate, each will tend to talk to its own using a certain language, philosophy and (dare I say it) psychology. I know this to be so, based on experiences talking to groups of each type. I could waffle on about that for a while but hold that thought - what’s perhaps more interesting is how this impacts specific vendors, again, based on their provenance.
The obvious examples are:
- IBM, with its DB2 heritage, still very much in the BI camp despite having acquired FileNet
- EMC, having purchased Documentum 4 years ago and falling rather naturally into the CM camp
- Microsoft, its Windows-as-the-platform heritage yielding a firm position in the collaboration camp, with its Sharepoint “flagship”
Savvy marketing execs in these companies will be able to pitch at the visionary level. For the rest however, it’s not so much that they don’t get the existence of the other areas, more that they don’t see the point of dwelling on them. They’d be right to surmise that there’s so much going on in their own areas, they’re kept too busy to step across into the other areas. The end result however is that we end up with three communities, not one, the behaviours of each dictated by their own technological heritage. For a stark example of this, consider what part Lotus has to play in IBM’s Information On Demand vision (here’s a clue: not much).
Does it really matter? I believe it does, not least because of the inefficiencies of reinvention and the requirements for integration across the piece. Perhaps it matters most of all because our research tells it so - organisations really do want an integral view of their data assets, of all kinds, and they are frustrated by the inability of vendors to serve it up. I do understand how hard it is to build a scalable repository, but that doesn’t mean we need to stick with models and architectures that are some 20 years old. Neither am I sure we can afford the luxury of preaching to the converted within our own comfort zones.
P.S. I’ll leave the decision about where Oracle fits, as an exercise for the reader :)
10-11 – Things I miss about UNIX
Things I miss about UNIX
A long, long, time ago, in a vertical far, far away, I once used to spend a lot of my time doing various things with UNIX - as an administrator or a developer. Somewhere along the route I appear to have become waylaid - I now spend most of my time (literally, I feel) in front of a Windows machine, like one of those office workers I used to have to support. How the tables have turned.
Don’t get me wrong, I’m quite satisfied with what I have, though some things could of course be better. Much like, I suspect, the majority of office workers out there. However (and probably unlike the majority of office workers out there), there are some things I do miss about UNIX:
- text manipulation - the number of commands (sort, uniq, awk, etc) available to muck around with strings, extract them, compare them, munge them together and so on
- pipes - with the above, making it very simple and elegant to develop command lines that could do some very powerful things
- the command line (of course) - but not just for text stuff, also just to make it very easy to move around directories and move things around
- the knowledge (though this had to be learned) of where admin information was being stored, generally in text files which could be easily changed (though this was a two edged sword!)
- finally, the general feeling of control that comes from having an operating system in which everything was configurable, even to the extent of tuning the kernel…
… so I guess the question is, if it was so great, why don’t I still use it? Bluntly, the answer is that it wasn’t - so great, that is - at least, not for what I have needed computers for, for the past 10 years. This is possibly more an indictment of me than of any specific technology, in that I don’t have the time or the inclination to spend my time tinkering with software, when I should be getting on with other stuff - writing, generally. Also, and again to be blunt, office apps have (until recently) been poor or stupendously expensive for UNIX, as anyone who tried WordPerfect for UNIX will testify.
All the same, that doesn’t stop me missing such things as the above - particularly when desktop tools from Windows don’t cut the mustard. If I wanted to dedupe my email contacts in UNIX for example, I could do so with 2 or 3 commands piped together in a single line… whereas in Windows, I have the option of fishing around the Internet for a piece of freeware (slow), or writing a bit of code myself (unlikely - and option 3, to actually buy something, is beyond me completely). Of course, all is not lost as I could just install Cygwin for many of the above benefits, but somehow, I just don’t think it would feel the same.
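Just to show what I mean by 2 or 3 commands - assuming the contacts had first been exported to a plain text file, one address per line (which is, of course, half the battle on Windows):
# exact duplicates
sort contacts.txt | uniq > deduped.txt
# or, ignoring case and keeping the first spelling of each address
awk '!seen[tolower($0)]++' contacts.txt > deduped.txt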
10-15 – Descending on Documation, Schmoozing at Storage Expo
Descending on Documation, Schmoozing at Storage Expo
For anyone who’s in London this week and wants to meet up, I shall be at Olympia Wednesday and Thursday for the jointly held conferences of Documation and Storage Expo. Indeed, I’ll be presenting at both, specifically at the following sessions:
- Information Lifecycle Risk Management, 17 Oct at 1145 (Storage Expo)
- Using Content Management to Power Business Intelligence, 17 Oct at 1600 (Documation)
- Governance, Liability and Responsibility: Getting rid of information, 18 Oct at 1015 (Storage Expo)
In case I haven’t banged on about it enough, I don’t really see the above as anything other than facets of the broader picture of information management, indeed I’ve had some good chats with the guys at Reed Exhibitions about the way things are going. From our research there’s a clear desire to link BI and CM for example, and good governance and risk management cannot exist without a firm understanding of your information assets. But I’m telling you the plot :)
Should be a lot of fun - see you there!
10-15 – OK, I can't spell "the"
OK, I can’t spell “the”
Not sure what happened about 3 years ago but my fingers started jumbling up on the keyboard to the extent that “the” always comes out “teh” etc. Annoyingly, I’ve accidentally added “teh” to my Windows Live Writer spell check dictionary, so I’m not sure I’ll catch them all. Anyone have a cure for this, do let me know…
10-15 – Silicon Agenda Setters - the results are out
Silicon Agenda Setters - the results are out
Sometimes, this job is, well, just a job and sometimes it’s a damn good wheeze, such as when I was invited to come along to the Silicon.com agenda setters panel last month. My approach was quite deliberately to try to bring to the party people who wouldn’t necessarily be top o’ the list but were still ground breaking in some way - hence for example Rob Pardo of World of Warcraft fame as well as all the usual suspects, Jobs, Schmidt, Benioff and the like.
The final list is here, with Facebook’s Mark Zuckerberg at the top. We don’t see Facebook as a major business tool, but I have to agree with another agenda setter, JP Rangaswami, that it’s rattling a few cages in corporate land. That doesn’t mean traditional IT business leaders have been pushed out, with John Chambers, Diane Greene and Mark Hurd rubbing shoulders with the new media darlings and offshore tycoons.
Fascinating to me was how much there is of the IT industry about which I have very little understanding. We talked about the inner workings of venture capital for example, and the influence of Asian manufacturing. Great, mind expanding stuff.
I can’t wait ’til next year - if they’ll have me of course!
10-16 – Eucon Dance if You Want To
Eucon Dance if You Want To
On Sunday I was fortunate to attend (and shift some books) at the Rush European convention Eucon. It was a great day, most people seemed to agree - luminaries of the Rush story Terry Brown (who keynoted), Howard Ungerleider and Andrew MacNaughtan were all present, and a good time was had by all, capped off by a stonking Rush gig in the evening!
Thanks to Ashley for inviting me, and to everybody else for making me feel so welcome.
10-16 – My First Video Briefing
My First Video Briefing
I know those stalwarts at Cisco have been doing this for yonks, but today saw my first videoconference briefing, via WebEx. A few issues with setting up the camera at my end, and quite jerky but it did make it more of a participatory experience. Thanks Lisbet for organising and Chris for the call!
10-16 – Sleepwalking Towards the Last Post
Sleepwalking Towards the Last Post
The one thing I didn’t expect to be doing this morning was agreeing with Tory MP John Redwood on the plight of the postal workers in the currently still-ongoing postal dispute in the UK. For one, I am full of admiration for the merry fellows who, come rain or shine, ensure the message gets through - people like our very own post man, Andy, who always has a smile and a friendly word.
In John Redwood’s own words however, “The country is not grinding to a halt.” Here’s perhaps the nub of the real crisis that faces one of Britain’s last public industries - that technology has doomed it. Not only is email providing an unthought-of transport for many of our communications, but also, any attempts to bump up the relevance of the post by (for example) not delivering it for a few days are not only falling on deaf ears, but are counterproductive in the extreme. “Give me your bank details,” someone said to me a couple of days ago, “I won’t send you a cheque, I’ll pay you online instead.”
It’s a tough one. The post is like the messaging equivalent of the book: if one is old-style data in motion, the other is data at rest. Just as we don’t want to replace books with electronic equivalents for oh, so many reasons, neither do we really want to lose the sound of envelope on mat. Even such junk as catalogues has some merit, judging by the amount my kids pore over them as they spend their pocket money many times over. And the thought of a birthday mantelpiece being reduced to a bunch of printouts… don’t even go there.
Meanwhile, there are, oh, so many items of post that really don’t need to be sent. Junk - of course (apart from the catalogues maybe); utility bills; bank statements; cheques to cover bill payments; the list goes on. Why are they still posted? Because (a) that’s the way it’s always been and (b) not everybody yet has the email and Web alternative. To catalyse both requires significant effort, or a limitation on supply, which is exactly what the postal unions are providing.
Where will it all end? In 5, 10, 20 years’ time I have no doubt there will still be a postal service. It’s not just about the postie: Post Office Counters provide a wide number of services themselves, many of which are relied upon by a great many people - benefit payments, car registrations and the like. From an economic standpoint however, the counters are propped up by the numbers of stamps on envelopes and packages. As postage revenues fall, so will the numbers of post offices, and (hence) the quality of service. I don’t want to see that - but neither am I likely to send letters for the sake of it, particularly in times when there’s no guarantee of when they will arrive.
As someone who lives in the country, the absolute last thing I want to see is that postal workers go the same way as the post offices that are already closed - I do understand that for many, in remoter places, the arriving post can be one of the rare contacts. These things are important, and an intrinsic part of sustaining rural communities. All the same, I can’t believe that retaining the status quo for the sake of it is the right approach. Email and the Web are here to stay, for better or worse - and we need to face up to how they impact on even our most loved of institutions.
There are undoubtedly a number of services that we need, that we don’t want to put into the hands of a private company, and (most importantly) which we are prepared to pay for. Perhaps some of these are listed here; there will be others. If we want them, let’s think about how to ensure they continue to happen; the alternative is to stand by and watch as they crumble.
10-17 – Day 1 of Storage Expo
Day 1 of Storage Expo
Random thoughts…
- walking in was like the scene at the end of Trading Places, the noise, the hubbub. “Lots of people!” I remarked to Bob Plumridge. “Yes, but they’re all vendors!” he replied. True, but different later.
- First session went pretty well, good level of questions and feedback
- Most of day spent taking repeated bites of the Expo apple. Too many people I know to list. Symantec, Riverbed, Pillar, EMC, LSI, Emulex, Plasmon, Quest, 3Par, Copan, IBM. Lots of conversations, all good. Friendships remade, relationships rebuilt, business redone. Sorted.
- Podcast with Reed on future of storage. It has one. Cool.
- Quick review of second pres with Xerox - then more of the same
- Second presentation coincident with football kick-off England vs Russia. Disproportionate number of females in audience, which outnumber males - realise football is answer to rebalancing sexes in IT. Pres goes OK despite diminished numbers.
- Leave conf too late, arrive at pub after everyone has left, story of my life.
And tomorrow is another day.
10-18 – Day 2 at Storage Expo - Greening Up the Act
Day 2 at Storage Expo - Greening Up the Act
Exhibitions are like travel - one spends a lot of time seemingly not doing much at all, and it’s completely shattering. In the case of Storage Expo, “nothing at all” translates into presenting, listening to presentations, participating in meetings, walking between stands and just downright chatting, all in all very productive from an information gathering perspective but tiring nonetheless.
So what’s a-buzz? Admittedly I was looking through my own, undoubtedly biased “solutions not products” spectacles - there may have been interesting things being demonstrated in the hardware layer, but I confess they passed me by. It was the companies doing “joined up storage” that I found most interesting, such as Copan getting together with Data Domain to offer concentrated offline storage for deduped backups. I seemed to spend a lot of time talking to people about green issues as well, which prompted me to think even harder about the solutions/software side of the house.
Put simply, most storage hardware vendors are currently pretty shabby when it comes to being “green”. For all the talk about “our watts are lower than their watts,” which may or may not be true (but is probably a leap-frog thing between the vendors anyway), the whole notion of a constantly spinning mechanical device is never going to look good from a power consumption perspective - as companies like Plasmon (whose storage doesn’t have to spin constantly) are keen to point out.
It’s a bit of a cruel trick really, a bit like the printing industry might feel if it suddenly had to use paper from sustainable forests (hmm. not a bad idea. but anyway). Since that guy from M.A.S.H first invented the hard disk, the general principle has been, “let ’em spin” - which of course uses power. The more data your business generates, the more disks you need, and asking a disk storage sales guy to regulate the flow is going to be a bit like asking Cadbury’s to restrict the supply of Flakes to ice cream vendors. Tricky.
But, for all manner of reasons, we have arrived at a point where “get more disk in” is no longer the default answer. All those disks are keeping on spinning, using up power whether the data on them is being accessed or not. Result: almost overnight, storage hardware companies look like the bad guys and are trying to out-do each other and hiding behind the worse-er offenders.
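A back-of-the-envelope illustration, with made-up but plausible numbers - a couple of hundred drives drawing around 12W apiece, spinning all year whether anyone reads them or not:
# 200 drives x 12W x 24h x 365 days, in kWh per year (cooling not included)
echo "200 * 12 * 24 * 365 / 1000" | bc    # about 21,000 kWh, accessed or idle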
All of that is missing one fundamental point, however, which is to question whether or not the disks are required in the first place. For example: I was speaking a few months ago to an oil company that had 17 SAP instances, all on different hardware. We have organisations who have “keep everything” policies for data retention rather than spending the time to work out what they don’t need (or shouldn’t keep). There are database compression companies like SAND Technology and Sybase, whose products remain niche due to lack of market take-up.
All of this points the finger of blame at a point quite a bit higher than the hardware layer. Sure, there will undoubtedly be clever things that can be done within the SAN, such as dynamic storage provision into virtual pools, for example. However, to solve what is fundamentally a data management efficiency problem, requires serious thought right up the stack, in terms of clearer definition of what information the business actually needs, down through more intelligent application architecture, to better storage provisioning and resource management. Let’s not stop wagging fingers at hardware manufacturers, but while we’re pointing out the motes in their eyes, let’s also recognise the environmentally unfriendly and resource hungry logs in our own.
P.S. No, the guy from M.A.S.H didn’t really invent the hard disk, it was in fact named after a rifle. Who says there is no humour in storage.
10-18 – HELLO! I'M ON THE PLANE!
HELLO! I’M ON THE PLANE!
It was with initial trepidation that I read the news that mobile phones might be allowed on plane flights around Europe. After all, along with the metro/tube/subway, churches and libraries, Antarctica and deep space, there are few places left that have managed to avoid the encroaching wave of over-loud teenagers, ringtones and sales reps.
But then, here I am on the train. There’s a guy opposite me calling his missus to check on childcare arrangements, the occasional ping of SMS and, somewhere in the background, an ongoing conversation that could be on a phone but I can’t tell. Realistically, perhaps it won’t be so bad.
A major upside is nothing to do with voice at all of course, but with data. Plane travel is both a sanctuary and a frustration, particularly given that it’s an ideal time to catch up on email etc. At least this provides a choice.
The downside is mainly to do with cost - it’s unlikely the operators (or airlines - the system is based around on-board picocells) will feel under too much pressure to offer this as a low-cost facility. Ironically, this could also be to its advantage if it means that the louder-mouthed talkers are more likely to be in the upper class decks.
We’ll have to see how it pans out.
10-18 – Nostalgic? Try Mobile
Nostalgic? Try Mobile
Anyone who thinks modem speeds are a thing of the past (hi Joe, not a criticism) can’t spend much time working mobile, using GPRS, throttling back the arrival of the 4Meg file in Outlook so that he can get onto the Web and post a blog. 56Kbps doesn’t seem all that distant, because that’s pretty much exactly what I’m on right this minute. In central London, no less.
10-19 – A not particularly exhaustive Twitter client study
A not particularly exhaustive Twitter client study
I had a quick browse about for a Twitter client, and there are plenty; many require the .NET framework, which I didn’t fancy installing unless I had to. So, from a shortlist of:
- Twadget, which sits in the Vista sidebar (and what a cracking name)
- MadTwitter, simple but effective
- Pwytter, written in Python
And the answer is: MadTwitter. It stays minimised and pops up when there’s a tweet, lets me post back (and succeeds, Twadget sometimes fails), and is fast (unlike Pwytter). It lacks plenty of features, but maybe that’s the point.
10-19 – Whoa! How Green are We!
Whoa! How Green are We!
By total coincidence, Tony and I both posted a green review of Storage Expo yesterday evening. Thinking about it, it was probably watching Tony checking a certain vendor’s credentials that at least partially prompted my own post - so perhaps it’s not that big a surprise… and not the first time yesterday I accidentally trod on Tony’s broken toes, sorry mate ;)
Still, topical, topical!
10-20 – What a way to start RSA - with a virus
What a way to start RSA - with a virus
Well, well. The last thing I expected to see when I plugged in my SD card this morning was a virus. I think I must have picked it up earlier in the week, as I was transferring files between computers.
The first sign was an AVG window popping up to say a file was being quarantined. When the file re-appeared, I knew there was something awry. For anyone who is interested, it was the “microsoftpowerpoint.exe” virus - conveniently explained (along with removal instructions) on the Trend Micro web site, among others.
(Unless I speak too soon,) I got rid of the blighter in the end. But it was a timely reminder that, while the debate should quite rightly shift to take into account the true breadth of the risk landscape, that ol’ external threat is still alive and kicking.
High time to check those signature files are up to date, before heading off to the RSA conference in London next week…
10-24 – Shoulder standing 101: being influenced by the influencees
Shoulder standing 101: being influenced by the influencees
These are indeed “interesting times” to be an influencer of any form, not least as we see the democratisation of influence - interestingly, not a term that has yet been adopted particularly widely. It is a timeless truth that every human being has an opinion, which is expressed more or less willingly; what has changed is the mode of expression, the Internet providing a voice louder even than the loudest rock band in the universe.
Whether or not this is a welcome change is a moot point, particularly for organisations who have made their money controlling the flow of such expression, such as news organisations and, indeed, industry analysts. The fact is that the guy in the bar now has global reach, and the rest of the world has to deal with that fact.
To the point - what can we learn as industry analysts? To me it’s simple - our role and privilege is to spend time learning about what is going on, and to draw insightful conclusions that can then be fed back for the common good. While many may have time to think about aspects, it is a rare luxury to be able to do this as a career, without the distractions of what many would consider to be a real job. We’re standing on the shoulders of giants - one set of insights and conclusions serve as inputs to the next level of analysis, and thus can we all move forward.
So, I don’t feel in any way threatened by these developments; rather, I revel in the fact that the number of fire hydrants to drink from is increasing. Welcome indeed, for example, to the vendor analyst relations blogs such as those from Carter and Skip; or indeed AR professionals like Jonny and David, and all the rest (I said it was like a fire hydrant!). I would love to say we have a monopoly on how things are evolving but the truth of the matter is that nobody does, so all help as we evolve our services into the future is gratefully received.
In the future, then, we shall continue in our role as aggregators of opinion and behaviour, and offer our findings back to the community in the way we do now. It’s an eminently scalable model, and for now we believe, adaptable to what the wonderful world of influence throws our way.
10-24 – Sun vs NetApp - Good Hippies Don't Divorce, Do They?
Sun vs NetApp - Good Hippies Don’t Divorce, Do They?
Funnily enough it was only today that I was recounting a tale to goodman David, about a formative experience I had a few years ago when two hippy friends of mine decided to divorce. It took me a while to reconcile this - after all, I thought, if hippies were so laid back and peace loving, surely they’d just get on?
And so it is when I see people like Dave Hitz and Jonathan Schwartz in a spat. Admittedly I’ve never met Jonathan, but from what I’ve heard about him he’s a regular guy, who just happens to run a rather big IT company - and he sports a pony tail. I’ve met Dave on a couple of occasions through the years, and he’s come across as a regular guy as well. As in a divorce situation, I know I have to be grown up and recognise that (a) books shouldn’t be judged by their covers, and (b) there’s probably an element of truth on both sides.
The story seems to have unfolded something like this:
- many years ago, NetApp decided to build a storage box based on available technologies, and stick some clever IP into the I/O layer while leaving the processing layer as a reliable, if backward, overseer - I seem to remember the expression “trailing edge” technology being bandied about, not in a negative sense but as opposed to “bleeding edge” - including such stalwarts as the NFS protocol.
- a couple of years ago, StorageTek was miffed about something NetApp had done, but the sides never reached any particular resolution. STK was then bought by Sun Microsystems, who then continued bickering with NetApp. However, as storage wasn’t seen as particularly strategic at the time, it still didn’t come to anything.
- quite recently and indeed laudably, Sun decided to treat storage more strategically, at the same time as reaching a level of corporate psychological resolution about the relationship between its own software and open source. All admirable stuff, with the result that Sun decided to do some more stuff with storage - including releasing the ZFS file system to the open source community.
- unfortunately, NetApp saw this and wept, in the belief (now to be proven in a court of law) that Sun was riding roughshod over some of the “clever layer” intellectual property - indeed, patents that Dave Hitz himself had filed all those years ago (and suddenly, it’s personal). This may have been ill-thought-out but unintentional on the part of Sun, or indeed it may have been a deliberate, anti-competitive move, a bit like “accidentally” leaving Coca-Cola’s ingredients list on Howard Stern’s desk. We’ll find out - but right now, we know that it led to NetApp taking out a lawsuit on Sun.
- of course, the dot in dot-com was not going to take this lying down. After (no doubt) that quick call to determine whether some amicable resolution could be reached, Sun assessed its options and today has decided to countersue, not just about ZFS but calling into question the very “trailing edge” foundation that NetApp had adopted at the very inception of the company.
Nasty. Commenters have quite rightly compared this to the SCO vs Novell case, and indeed raised questions about whether IP can be open sourced, or indeed closed back up if it does infringe on patents. To me, this also echoes the rather dodgy ground Microsoft finds itself standing upon whenever it reiterates its patents issues against Linux (I’m still not sure if Steely Neelie Kroes has put this one to bed).
I’m also rather fascinated at how the battle lines are being drawn up in the blogs. Mr Schwartz has always been an advocate of the openness of CEO blogs, but of course when the fecal matter hits the fan, one cannot stop being declarative even if one wants to. This is where it’s like my hippy friends - bloggers are supposed to argue, sure, but when it comes to them acting like good ol’ boy CEO types, what happens then? Things sure as heck don’t get decided by the number of diggs assigned to the counter-arguments, nor are adverse comments treated as a form of cross-examination.
I confess I have a nasty feeling about this. Court battles are just that - battles - and fighting dirty is acceptable as long as it happens within the confines of the law. Already we have seen Dave Hitz branded a liar and a troll, and while Dave H has not used the terms, he has called into question Jonathan S’s record of events. NetApp has (as illustrated in that same blog post) started from a position of, “let’s get along, but use the courts, that’s what they are there for,” but it doesn’t appear that Sun is going to keep the gloves on. Call me old fashioned but Jonathan’s statement, “we are requesting a permanent injunction to remove all of their filer products from the marketplace” doesn’t appear to be wanting to meet anyone half way.
Where will it all end up? Messily, certainly. We now have two court cases, one of which may find it correct to say Sun shouldn’t have released NetApp’s IP into the open community - it’s difficult to know what the outcome would be from there, other than requesting people to kindly switch it off. Or it may find Sun in the clear, in which case NetApp will look a bit foolish, and perhaps in trouble as the patents in question are reputed to be a mainstay of the company’s offering.
Meanwhile, Sun’s own case, if it succeeds, could bring NetApp to its knees. If it fails, Sun will look a bit silly but can revert to the counter-sue argument, so nothing lost other than a bunch of legal fees. Hopefully this second case exists purely as a counterpoint, and the situation will be judged on the technical merits of the claims rather than obscure interpretations of patent law.
What do I feel about it all, besides just wishing naively that everybody could just get on? The only recommendation I would make to both sides, is that the behaviours exhibited during the process can be as damaging to a company as the topics under discussion, so - play nice, guys. Meanwhile, while I’m not sure I’d want to rush off and install ZFS across the entire organisation until I was sure I could keep it, I wouldn’t be switching off my NetApp filers just yet.
10-24 – This message is so wrong in so many ways
This message is so wrong in so many ways
Just seen as I resumed my computer from suspended mode:
“Not enough memory or disk space to update the display.”
Sheesh.
10-25 – Twitter's just a big chat, right?
Twitter’s just a big chat, right?
S’funny. There I was thinking that Twitter was in some way different from, well, anything else. To the extent that it had taken the web publishing model and reduced it to the finest level of textual granularity, expressed as a 140-character “tweet”. And it’s a platform, open APIs, the lot.
Meanwhile, we’ve been using Skype as our messaging tool du choix between Freeform team members. We even use it for voice sometimes, but text is the default.
So there I was last night, getting on with various things - with a MadTwitter window open on the left, and a Skype Chat window on the right. And, behold, I was using them both in exactly the same way.
Sure, there are differences. Twitter is the ultimate in broadcast chat - when I post, it’s like shouting across a crowded room where everyone can hear (and fortunately, not everyone is shouting). Meanwhile, in Skype I have to pre-select people I want to chat with - but I can have multiple chats with individuals and different combinations of groups. I can access Twitter on the Web, by phone or via my handheld, and while I can’t open a Skype window on the web, I can do the latter two. With Twitter, I can write to it from other programs. So can I with Skype. Etc, etc.
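To illustrate the “write to it from other programs” point: posting to Twitter from outside a client is just an HTTP call to its REST API. A minimal sketch from memory (the credentials and message here are made up, so check the current API documentation before relying on it):
$ curl -u jonno:secret -d status="Posting this from a shell prompt" http://twitter.com/statuses/update.xml # a straight POST with basic auth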
Other messaging apps offer a bunch of facilities that are much more controllable than either Twitter or Skype, including IMvironments, talking avatars, enterprise logging features, unified comms and so on - which makes me wonder even more. Aside from the “following/followers” concept, what exactly has Twitter got that traditional messaging hasn’t? It’s important - because while this would be quite a simple feature to add to the majority of text messaging clients, it would be quite a challenge for Twitter to bulk itself up to offer these stock features.
I’ve probably missed the point entirely, but then, so did the kid who said the king had no clothes on.
10-26 – Lighthouse stories
Lighthouse stories
I was prompted to dig into the story behind the picture on my wall, when David Brain questioned whether it was real or photoshopped. The answer - very real - and the bloke in the lighthouse was lucky to get back inside in time!

Brings a whole new meaning to leaving the door ajar ;)
10-26 – Pulling a blog up by its bootstraps
Pulling a blog up by its bootstraps
It’s an interesting experience, starting a new blog - like any investment, one has to take the long view. As a stake in the ground, Totalimmersion currently has up to about 40 readers a day - it’ll be interesting to review this in a year. Meanwhile, it’s being syndicated to my chums at IT-Analysis.com, whose hit rate is much better. But watch this space, the only way currently is up!
10-26 – The bigger picture of behavioral analysis - a conversation with Tier-3
The bigger picture of behavioral analysis - a conversation with Tier-3
In a break with tradition, I’m going to write about a specific company in this one, or at least a specific series of conversations. I’ve been talking quite a lot to the guys at Tier-3, a company specialising in software that can look for anomalies in how IT is being used. While there are many potential applications of such a capability, the company has focused its efforts on looking at IT security, sucking in events from computer logs and looking out for things that don’t fit with the norm. Think intrusion prevention, unauthorised access and the like.
It sounds so great in theory - and indeed, the company has recently announced wins for its HUNTSMAN product with some quite sizeable players such as Toshiba, so it must have something going for it. I still find myself feeling dubious however, not least (indeed, mostly) because whenever we do research into who’s buying what in IT security, behavioral analysis software seems to come out near the bottom of the pile.
So, there appears to be a bit of a behavioral anomaly about the whole thing. If such products are recognised to be so blooming useful, why is nobody buying them? My conclusion has been that, while such security products as antivirus, firewalls and VPN are quite simple to explain and therefore cost-justify, it was always going to be harder to assemble a business case for such tools as behavioral analysis.
When I spoke to Tier-3 I put this position to them, and asked (on the back of such deals as Tosh) whether it was changing. What Peter Woollacott, CEO, told me was that it was true, but he shed a bit more light onto what made it so hard. “Anomaly detection investments are currently being driven by the value ascribed to IT/IP assets relative to cost,” he said, “yet many organisations still fail to understand the value of their IP assets.” In other words - if you don’t know what you’ve got, it’s difficult to work out its value, or indeed (as Peter explained), how vulnerable it is against the legions of potential threats.
It’s an interesting one, not least because (according to my illustrious colleague Martin’s report) the lack of asset knowledge is such an age-old problem in IT, leading to that other age-old chestnut - how can you secure your IT environment if you don’t know what you’ve got?
Funnily enough however, the answer to the asset management issue may well come from considering some of the desired outcomes of security - not least that mother of all reasons, the reduction of business risk. Peter used the term “return on security investment” - the ramifications of which can be seen quite clearly in more regulated environments, and are starting to be visible in other verticals. “Just as Basel II rewards better operational risk managers with lower costs of capital,” commented Peter, “risk adjusted decision making is already featuring in corporate investment cases.”
Understanding of IT risk requires (and therefore drives the need for) understanding of IT assets, and their vulnerabilities. Ultimately this also drives the need for products such as those from Tier-3, but it’s unlikely that the company can currently use this as a product pitch. Rather, organisations that are already educated on the need to manage risk for business reasons, and are acting upon it, will also want to get on top of their IT assets and what they are up to.
To take this one step further, perhaps there is no business case for behavioral analysis per se. That is, if such analysis is seen purely as a security measure, i.e. a way of working out what went wrong after the event so the hole can be plugged, it will always be difficult to justify. Alternatively, organisations that “get” such topics as risk management will be able to see behavioral analysis as a way of achieving some of the higher level goals that ensue, such as ongoing monitoring of risk levels in an already well-managed environment. In this context, anomaly spotting becomes a feature, and not an outcome.
Which is perhaps, as things should be. Companies such as Tier-3 had better be in it for the long haul however, as there is still plenty of educating to be done just to get some organisations off the starting blocks.
10-26 – The Wii Has Landed
The Wii Has Landed
So, today was the day. In a rash moment of parental materialism I agreed with my son Ben that if he could raise half of the cost of a Wii, I would provide the second half. Never have I seen a boy work, save, plan so hard. Finally the day has come, we nipped out to town at lunchtime to pick one up - having reserved it by phone, such is the Wii demand.
Ben’s nipped out, otherwise I would no doubt be on the thing right now - but it has already taken pride of place next to the telly in the front room.
Bowling, anyone?
10-26 – Why I like phone briefings
Why I like phone briefings
I was asked today whether I could pop into town for a half hour briefing next week, and I said I’d prefer phone in the first instance. When asked why, I gave the following example of my London routine. I thought it would be useful to post it here for future reference:
- leave home, 07.45
- Get train, 08.08
- Arrive Paddington, 09.25 (on a good run)
- Get tube, 09.35
- Walk to destination, arrive 10.15
- Briefing finishes, 11.15
- Walk to tube, 11.30
- Tube to Paddington, arrive 11.50
- Next Train, 12.30
- arrive at station, 1.50 (on a good run)
- get home, 2.15
So that’s 6 hours 30 minutes for a 1-hour briefing.
Don’t get me wrong, I love meeting people face to face, and it would indeed be preferable in many cases. With phone briefings I can quite literally fit 5 times as many in (allowing for coffee breaks), which is also time well spent!
10-28 – Working through the book pile
Working through the book pile
Six weeks ago I started reading a number of books, possibly a bit ambitiously I kicked all four off at once. This is no more than checking in as I haven’t yet finished them - but I am over half way. So far I have completed:
Richard Dawkins - The God Delusion. Clearly a rant in places and not as well argued as I had expected, but a very enjoyable book and necessary reading for anyone who takes the topic of religion seriously, on either side of the divide.
I felt Mr Dawkins was like the man who finally had had enough of the sniping and negative speak, and in the end felt he had no choice but to say things as he really felt. As such, he made a few cutting remarks of his own - but having got these off his chest, presented the arguments against organised religion pretty well. There were a number of weaknesses - the Buddhists, Hindus and other non-Abrahamic religions got off pretty scot-free, and the presentation of religiousness as a simple spectrum, i.e. a straight line between atheists and zealots depending on level of belief, was, I thought, a bit simplistic.
Perhaps weakest of all was the argument that there is no God, made primarily on the basis of probability. Of course God is highly improbable, but then so is the human race, the latter fact one which Mr Dawkins felt proved itself by the presence of thee and me. The fly in the ointment is perhaps the assumption that we can only judge by what we can measure, when of course we are woefully inadequate to such a thing, both from our ill-equipped position at the periphery of some far-flung galaxy and also our fundamental, stupid humanity.
None of which proves there is a God either, but as spake the humanist prophet Douglas Adams, with proof, there is no need for faith. I do wonder about this one - specifically (and I would love Mr Dawkins’ feedback) that perhaps we have a genetic propensity towards such socio-psychological constructs. More specifically still, perhaps it is our drive towards higher planes of thinking that have in some way enabled us to evolve, to the point where we are now. Organised religion may have been the cause of much that is wrong, but what if it is a prime factor in our development as a race?
I don’t know the answer to this, but it is certainly worthy of investigation. Food for thought: would the Buddhists in Burma have taken on the government there, if they had no faith in their own higher powers? Does religion come from community, or community from religion? And indeed, have we really advanced so far in the past 10,000 years that we no longer need such a crutch? As I was walking the dog earlier I was considering the existence of “psychogenes” - perhaps these are as selfish as those concerned with our more physiological aspects (and if these are already well-established and under the scope, clearly I need to read more).
M. Scott Peck - The Road Less Travelled and Beyond. I was really looking forward to this as I got a great deal out of the original The Road Less Travelled, and to be sure there were some moments of clear insight in this book. Indeed, there was a point about a third of the way in where I thought how great it might be if Mr Peck and Mr Dawkins were in conversation together: one, whose science had proven there was no God, and the other, whose experiences in psychology had proven that there was.
Unfortunately, Mr Peck’s book was like a mirror on his own, fragile humanity. To say he had “lost it” towards the end is a bit strong but his arguments were blunted by his own desire to get closer to the higher truth, and to present it from a Christian standpoint (though he did bring in some teachings from other doctrines). It didn’t help either that I was well aware by the time I got half way about his own weaknesses, and while I was happy to reconcile that he could talk about his experiences as a psychotherapist without considering his own shortfalls, I wasn’t prepared to put up with him being too preachy.
In some ways Mr Peck came across like Icarus, having flown too close to the sun, or perhaps one of the architects of Babel, returning from the top but ill-equipped to articulate all he found there. Not long afterwards of course he was to die, all too young, of Parkinson’s disease, proving beyond doubt his thesis of death being the great leveller.
In conclusion, what both books taught me was that we are all only human, and that there are some things that we can never prove for sure, one way or the other. This is perhaps less about God, and more about us… but still, it may just be the way things are supposed to be.
I’m now tackling Tricks of the Mind by Derren Brown, and it is with no small sense of irony that I find myself drawn more towards this illusionist and iconoclast. You see, here’s the rub - I do happen to believe that we can be convinced of all sorts of things, good or bad; it’s one of the things that makes us human (and it’s a reason I could never subscribe to the Wisdom of Crowds). We also love stories (as described in my other book on the go, The Seven Basic Plots) - these are things that make us who we are. Equally no doubt, is the fact that we love to constantly revisit these arguments. To paraphrase Douglas Adams, with proof there could be no debate. And where would that leave us?
November 2007
11-01 – BT skates its way to transformation
BT skates its way to transformation
When I was a kid, once a year we used to go and watch an ice show. For adults perhaps, it might have been an excruciating panto rescued from the brink of despair by a few spangled costumes and tight-fitting lycra; but to my childish eyes, it was sheer magic. Every year, the centrepiece of the show would kick off with a few people, rotating slowly but steadily and largely keeping their positions, in the middle of the rink. Gradually more and more skaters would join them until eventually the whole troupe would be involved, apart from one solitary figure who was yet to join. Of course, by then the end points of the spiral would be moving so fast, the poor chap would have to sprint like a billy-oh to catch up.
And so, to BT. Back in July, the company announced it would be launching a new transformation and innovation process, which is now 100 days in. At yesterday’s progress meeting for analysts, hosted by execs Al-Noor Ramji, Roel Louwhoff, Paul Excell and Dina Matta, topics included the usual crowd pleasers such as “customer service is our number one priority”, through to genuinely interesting examples of how BT’s customers are working with the company to drive innovation.
It’s difficult to know how to judge this latest initiative from BT. Certainly in the UK, we have consumer-based experiences (not all of them good) which can colour our opinions; meanwhile, over the past 5 years the global company has been through a number of other change programmes - in terms of both internal restructuring and application rationalisation, and incorporating technology infrastructure transformations such as the 21st Century network, currently in mid roll-out.
Perhaps the crucial axis upon which BT’s future rests, is its stated goal of delivering software-based services. This could mean multiple things, some of which (“Is BT taking on SAP now?”) might be seen as a step too far for the company - so it’s important to stress that BT isn’t going to be ditching its core, platform-oriented business. When asked, the panel explained how it would be building on top of its service provider heritage with said (software-based) services, in a way that can be integrated (or “mashed-in and mashed-out” in Al-Noor Ramji’s terms) with both the enterprise environments of its corporate customers and the burgeoning new era of Internet-based software.
Rather than mucking around too much with the company’s product and service portfolio, the plan is to do similar things as currently, but far better and more efficiently than in the past. “The ‘what’ will stay the same, but the ‘how’ will be different,” said Roel Louwhoff. Improvements to the “how” will (so we were told) enable the company to be far more innovative, or at least, far quicker in how it brings its innovations to market.
What’s going to prevent such a transformation? Perhaps the main challenge to BT remains the company itself, as defined by its staff. There can be no papering over the cracks here, as it will undoubtedly be a challenge to get all of the company’s employees moving in the same direction - please do note that this is not a comment on the quality of the people, but more on the fragmented nature of BT’s historical structures.
The proof of the pudding will only become visible in a year or so, as BT becomes able to offer demonstrable evidence that this latest change and innovation programme is making a difference. Like the big wheel of skaters, BT doesn’t want to move dramatically from where it is now, but it does want to be able to turn faster, whilst keeping everyone involved on board. In people-centric terms this means balancing the momentum being driven from the centre, with appropriate bottom-up activities such as training, personal staff development and so on. Sharpen the skates, if you will, rather than sharpen the saw.
Overall it’s a laudable initiative, and on paper at least it sounds practicable. It is still early days however. Of course success will need to be judged in terms of metrics such as time to market reductions or increased customer uptake of new, software-based services - and the consequent, directly attributable impact on the company’s bottom line. However, perhaps the real litmus test will be the ability to go to any of BT’s 110,000 employees and get a clear understanding of what the company stands for. Like the guy at the end of the ice-spiral, for this to work, BT can’t afford to leave anyone behind.
11-02 – Oh My God! They Killed Music DRM!
Oh My God! They Killed Music DRM!
Digital Rights Management in music was, to be fair, always doomed. As long as there exist free mechanisms to transport music, it will always be impossible to protect it from unauthorised copying. The Internet has been to music distribution what the steam engine was to industry - it has revolutionised how things are done, but not without some serious fall-out in the traditional world of large corporations - who, let’s face it, have been conducting rear-guard actions without really managing to keep the revolution at bay. Just recently we’ve seen EMI fall to an equity company whose goal seems more to sweat its assets than release the creative juices of its signings. I know, if we put all our effort into Coldplay and Robbie Williams, we can just get them to work 36 hour days. Oh dear, Coldplay have broken up. Robbie, where do you think you’re going?
One of the technical casualties of the demise of the music industry, it is already becoming apparent, is DRM. Today we saw the launch of Qloud (pronounced “cloud”); to quote from this article (which looks like a royalty-free direct copy of the press release):
“The Qloud My Music application is a revolutionary music service that delivers online music to users how they want it – legal, cost-free, DRM-free, on-demand and linked to their personal music libraries – and where they want it – inside social networks where they can share music with and discover it through their friends.”
Qloud isn’t a one-off. A couple of weeks ago, Apple cut the price of its DRM-free catalogue to 99 cents. Want more? Consider EMI’s earlier announcement of the same, followed up also a fortnight ago by its announcement of a partnership with Imeem. As well as indicating just how confused EMI is right now (one is reminded of Microsoft’s relationship with Open Source), it’s a sign that things are unravelling rapidly for DRM, for a number of reasons.
Not least, technical. As part of a recent purchase, I was offered a license to download a Keane song. I had to jump through several hoops just to listen to the darn thing: create an account, download file, go somewhere else, download license, install… sure, it was a nice song. At least I think so; I’ve since reinstalled my OS and I can’t be bothered to go through all that rigmarole again. Alternatively I could choose to lock myself into a platform such as iTunes. Both approaches are directly opposed to the viral nature of social networking, pioneered by Myspace and being picked up by Facebook (no doubt to be continued via Google’s OpenSocial).
Music needs to be made to be shared, and this is what will put the final nails in the coffin of DRM. Be not alarmed, dear musicians - there’s still plenty of money to be made even if the model may sometimes need a bit of tweaking.
P.S. who’s going to bet me a bottle of shandy that Nokia’s spiffing new music store doesn’t go DRM-less by the end of the year? :-)
P.P.S. Steve Jobs wrote a good article on the weaknesses of DRM, here.
11-05 – Well-meaning, harmless drudge...
Well-meaning, harmless drudge…
… was how the Oxford Dictionary of Computing defined a system administrator. Or at least it did in the second-hand copy of the first edition I used to own in my university days, 20 years ago (ouch). While I loved the self-effacing humour, only today did I discover it was also a hat-tip to Dr Johnson, whose dictionary, first published in 1755, defined a lexicographer as:
“A writer of dictionaries; a harmless drudge, that busies himself in tracing the original, and signification of words.”
In these hyperbolic days of IT, perhaps it is right to wonder whether one day the role of the administrator can once more be distilled to that of a lexicographer - though it would be no less useful or rewarding a position for that.
11-06 – IT Security Analyst Forum (a.k.a. Hey Mum I'm on the telly)
IT Security Analyst Forum (a.k.a. Hey Mum I’m on the telly)
I was fortunate enough to attend the IT Security Analyst Forum a few weeks ago, where I was one of many analysts meeting with a number of security vendors. A kindly gentleman was there recording the proceedings, and I just came across the videoed results - isn’t the Web marvellous?
Anyway, if you’d like to know more about Freeform Dynamics, how we operate or my views on IT security, please do watch the below!
Part 1: About Freeform and general security views
Part 2: what trends are you noticing?
Part 3: has the analyst forum been a success?
P.S. Yes that is my bald pate in the first frame…
11-07 – More washing of dirty linen in blog public
More washing of dirty linen in blog public
Thanks to David’s connection with Euan Semple (I was idly following Twitter links), I stumbled across this post which led me here. Priceless comment fourth in the list, thanks for pointing it out David!
11-07 – Playing on trains - testing the Eurostar Terminal
Playing on trains - testing the Eurostar Terminal
I had a day off today. Well, kind of - it was one of those days where I actually got a lot of things done, largely because I’d told everyone I’d be taking it as a day off: my reason was that I had been invited to test out the new Eurostar terminal at St Pancras.

I’m still not absolutely sure why I agreed to do it in the first place. Was it driven by my interest in all things new, or my curiosity to see a work in progress on the scale of a station? Was it purely the allure of a free ticket, or something more fundamental, a deep down, inexpressible yearning to spend more time with… trains? Whatever it was, I was in good company, as I found out looking at the motley collection of slightly flummoxed “passengers” that had assembled themselves at St Pancras for the day.
The drill was simple. Turn up with pre-issued tickets (sent in the post), and get on a certain train - as if going to Paris. Get off at Ebbsfleet 15 minutes away, forget quickly about Paris and pretend to be going from there. Find oneself at St Pancras again, forget Paris and check through the arrivals lounge (showing passports - I wonder what would happen if someone lost theirs, having never actually left the country). Check back in and get on a train to Paris. Five minutes later, have train stop and reverse back to St Pancras, requiring one to once again forget about Paris.
Apart from the obvious result that, by the end of it, I was quite hankering after the dirty chic of the Gallic capital, it was all a quite enjoyable affair. For myself I took the role of a “business traveller”, and true to form I also managed to simulate the characters of both “late arrival at terminal” and “apologetic queue jumper”. There was free coffee and tea, a packed lunch and - I am sure this won’t remain the case when the doors open - hordes of smiling security staff to help us through the X-ray checks.
One thing that did surprise me was just how much work still seemed to be required. While the main concourses were largely sorted, there were swathes of cloth across many of the side-alleys, from which the usual sounds of drills and angle grinders could be heard. For the techies there was Wifi access (though the login wasn’t yet working), and a feature I particularly liked was a 50-yard-long counter with electric points at intervals, for laptops. Though of course, the sockets weren’t yet switched on.
What else? I’d love to be able to comment on signage, announcement quality and passenger facilities, like a good reviewer. Unfortunately however, it looked exactly like a train station, or more specifically, like the soon-to-be-closed Eurostar terminal at Waterloo - apart, that is, from the blank wall of red bricks that faces new arrivals (“Welcome to Britain. Here’s a blank wall, to help your first impressions.”). Most importantly, apart from a glitch at the end (when we were delayed as we tried to leave the platform on the final leg) everything functioned quite smoothly.
To conclude, while I’m still not absolutely sure why I went, I will probably look back on the experience with something approaching pleasure, and with my inner train spotter feeling appropriately nurtured. Peep peep!
11-07 – Writing Lessons from Ron James
Writing Lessons from Ron James
Last week I had the good fortune to have lunch with Andrew James, whose father, Ron James, has written a number of books about climbing. At the age of 73 he’s still outdoors, these days having hung up his karabiners and turned his attention to mountain biking, but still writing books about his passions. In the discussions it struck me that Ron had cracked what may be the golden rules of writing non-fiction for “the rest of us,” that is, people whose careers and lifestyles lie outside of the mainstream media.
So, what can we glean from Ron’s experiences?
1. Find a domain that has a community. There are plenty of interesting subjects to write about - but like the tree that falls in the forest when nobody is there, it is unclear whether such writings will ever have a readership. This is pure pragmatism - not necessarily commercial as we shall see, but it will only be the most devoted of authors that will write an entire book to reach only a handful of readers. The Internet can be a great help in this regard - message boards and forums are not only a source of information but also can give you a good idea of the scale of the audience. In Ron’s case he has stuck to outdoor sports - niche market perfection, with plenty of devoted followers.
2. Differentiate what you are writing about. It would be pointless to cover a topic in a way that has already been done - unless, in the past, it has been covered poorly. So if you’re writing a how-to guide look beyond the “First lessons in…” to more specific topics, building on the literature that’s already out there. But do be careful not to forget point 1 - you don’t want to end up too niche! For example, Ron’s current focus is mountain biking, to be sure - but for the over-60’s! Don’t be afraid to research the topic and find out what else has been written on it: indeed, it’s good practice to write a proposal, if for no other reason than to ensure you answer the questions a proposal demands - such as, for example, what differentiates this one?
3. Make sure the benefits are broader than financial. A tough one, this. It’s not that nobody gets rich and famous writing books, but more that it is highly improbable. A common fallacy - a bit like seeing someone on the telly and assuming they live in a big house somewhere, whereas the reality is that most actors are only as well off as the next job permits. So, if you’re writing, do so in a way that covers your costs and maybe makes a bit of cash; meanwhile however, ensure you take into consideration the wider benefits - through sponsorship for example, or purely the fact that writing enables you to spend time covering a subject you love. Which brings me to…
4. Write about something that you love. There are surely plenty of areas that fit the above three criteria, but you’re only going to get old and resentful unless a certain part of what you do is for its own sake. Write not only because you love writing, but also because you love the subject that you’re writing about, be it music, fly fishing or industrial archaeology.
This last lesson is important. Ultimately, whatever you do, you need to be doing it for your own satisfaction, as well as for the potential readership. This will not only help you enjoy the (sometimes mind-numbing) process, but also result in an output of which you can be justly proud.
11-19 – Why I've replaced Vista with Linux
Why I’ve replaced Vista with Linux
This decision was a long time coming but I think it is the right thing to do right now: I have reformatted the hard drive on my laptop and replaced Vista with the latest version of Ubuntu Linux, as the main operating system. I did this for a number of reasons: it’s probably worth going through them one by one.
Building a picture of Open Source today. “Desktop Linux is ready for the mainstream” we are told – but is it? And how to know without trying it for real? I had Linux running in a virtual machine on Vista, and it looked fine, but I tended only to play with it and not really put it through its paces. To give it a proper once-over there really is no substitute for putting it in as the “main” operating system. I should say up-front that this shouldn’t be construed as a comment on Vista, which I am actually getting to like (see below). The same caveat should be applied for other applications, proprietary or open source (for the record, however: I’m not in any hurry to move over to OpenOffice just yet!)
Testing virtualisation. There’s a variety of combinations of virtual environments that can exist today – one of the strengths and weaknesses of virtualisation (I am quickly discovering) is that anything goes. Linux on Windows, Windows on Linux, either or both on a hypervisor from either side; add to that the potential for running individual apps (e.g. with Wine or Softricity) or remote desktops and it all becomes very complicated indeed. I decided to start with Linux as, to be fair, Vista is already big, and I decided I could do without the base overhead. I’m now running virtual instances of Ubuntu server, Ubuntu desktop and Windows XP – see below.
Getting my hands dirty. Here’s the thing – I’m an old UNIX hacker at heart, and I kind of miss playing around with this stuff, which I haven’t really done since the 0.94 SLS days. It’s certainly been an interesting experience so far pushing a few of the boundaries of today’s desktop Linux and seeing what gives… or doesn’t. I’m also planning on doing a bit of programming again, most likely in Ruby on Rails, for which direct use of the LAMP stack seems more appropriate than developing in Windows and running emulators or indeed, virtual machines. Of course this will also help me build more of a picture of open source in general, or at least trigger a few conversations: see next.
Engaging with the community. There’s just so much happening in the blogosphere, and some of the most animated discussions come from developers and open source advocates. For me, this decision partially comes down to succumbing to the temptation and joining in – heaven knows I won’t be able to keep up but at least if I’m sharing some of the experiences I’ll participate more than just watching from the sidelines.
Avoidance of bias. It’s important in this job to be able to see all aspects, and I have felt uncomfortable in the past commenting on certain subjects without a full appreciation of how it feels to experience the other side of the coin. Meanwhile, Linux adoption is rife in Eastern Europe and Asia, making it even more important to understand what life is like for non-Windows users. It’s worth doing this to get the balance right – not least because certain behaviours and expectations are very different. In Linux, for example, the attitude is very much “there will be a package out there” (the package manager lists twenty-three thousand packages, of which I have a paltry fifteen hundred installed) but the “out there” experience also extends to tweaks and fixes, so be prepared to muck in. The Windows “attitude” seems to be more, “I’ve paid for it, so it better work!”
Response to accusations of bias. I want to be able to talk about the good stuff that comes out of Seattle without being accused of bias, or being considered some kind of shill. At the risk (see, here we go) of facing the wrath of all those who feel Microsoft is the nemesis of the IT industry, I actually do, really believe they come out with some pretty good stuff. I also think Sun, IBM and everybody else comes out with good stuff. There’s plenty of good stuff out there, and I really don’t see why Microsoft should be excluded from the good stuff debate just because they had some sharp business practices in the past, or present. After all, who didn’t - and who doesn’t.
Finally, I secretly wish I had a Mac. No I don’t… well, yes I do but I’m not sure it would be the answer to my prayers, and I would be concerned about lock-in. Oh, the irony.
So, there we have it. It’s already been quite a ride, as I’ve tested out a number of Linux distributions, tools and configurations before settling on my preferred setup. Which is: Ubuntu Linux 7.10 running KDE, and hosting a virtual instance of Windows XP via QEMU/KVM for my Outlook Exchange client. For virtualisation, I did try out Xen, both from within OpenSuse and as XenSource Express, but neither supported laptop suspend/resume (and XenSource setup on a single laptop was becoming a pig). I’ve needed to do various tweaks and resolve a number of issues; as I started doing this I wondered whether this was a comment on Linux – but it could equally be due to my lack of current experience. I have set up a dual-boot configuration with Vista, but this does not boot by default so (for the time being) it is there as a security blanket.
Does it work? So far, so good. I’m having to use the command line more than a little, but to be fair this is largely due to using the virtualisation capabilities, which are outside normal (i.e. non-geek) behaviour I think. There are a few bugs and things I might suggest were done differently, if I were in a position to comment – which is exactly what I’m getting myself into here, so expect some further case notes on my own blog under the tag “geeking out” (these won’t appear on IT-Analysis or IT-Director if you’re reading this post on one of these sites.) I still need to get myself organised from a data standpoint – I’m configuring Samba as I don’t just yet want to trust my data to sit inside a virtual machine, for example! - and I also need to set up my external monitor for ease of switching screens.
Whether or not I can work like this is one thing. I am missing certain things, not least LiveWriter and the Vista Sidebar – as a general remark, things are not quite as slick as in Windows, but perhaps I haven’t got my configuration right yet. I’ll give myself a month or so like this, so I can establish whether or not I actually want to work like this. For now, the jury is out but I shall keep everyone posted.
11-21 – What's in the (Linux) box
What’s in the (Linux) box
Here’s a bit more information about my chosen Linux configuration. I’m using a Samsung Q35 laptop with a Centrino Duo processor and 2 gig of RAM. It’s partitioned with 30GB for Vista and the rest given over to Ubuntu Linux 7.10. I’ve installed a number of packages on top of the base install, specifically for:
Virtualisation
I wanted to run XP within Ubuntu to access my Outlook email client - though there may be other ways (for example through direct Exchange access). I did have a go with VMWare Player for Linux VMs, and with Xen as a more general virtualisation platform (no suspend/resume, fwiw) before discovering that KVM worked with the virtualisation features built into the Intel CPU to give a pretty fast and very usable virtual experience.
Having downloaded the KVM package, I’ve set up a number of virtual machines, including a 5 gig one to run XP with Outlook and Word - it’s tight, but it works. If I wanted to run any other Windows apps I’d probably have to roll a new VM - at the moment the only software I lack is Groove and Mind Manager, so perhaps I should do this.
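If I do, carving out the new disk image is a one-liner - a rough sketch with a made-up file name and size, so adjust to taste:
$ qemu-img create -f raw /home/jonno/vm/xpapps.img 5G # an empty 5 gig disk image, ready to have the Windows installer booted into it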
The two commands I need to remember to run each time are:
$ sudo modprobe kvm-intel # to insert the necessary kernel module
$ sudo kvm -boot c -m 768 -smb windows -cdrom /dev/cdrom /home/jonno/vm/xpdesk.img
Occasionally KVM will crash on boot - this seems to be a known bug, and I should stress it’s highly repeatable (this is a good thing, i.e. it doesn’t happen randomly), which means that once a configuration works, it’s good to go. Specifically (and ironically) it crashes when trying to load the splash screen from the Ubuntu live CD, for example. The workaround is to run QEMU or KVM with the -no-kvm option, which doesn’t talk directly to the processor. It’s slower, but if this is done for XP installation then it generates a working image that can then be booted normally. Ubuntu server installs OK, but Ubuntu desktop hits the issue every time it boots, so for the moment it isn’t an option for a KVM virtual machine.
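For my own notes, the workaround boils down to one extra flag for the install pass, dropped once the image is built - a sketch of my routine rather than gospel, reusing the image path from above:
$ sudo kvm -no-kvm -m 768 -cdrom /dev/cdrom -boot d /home/jonno/vm/xpdesk.img # software-only emulation: slow, but dodges the splash-screen crash
$ sudo kvm -boot c -m 768 /home/jonno/vm/xpdesk.img # thereafter, the finished image boots with hardware acceleration as usual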
File access
For the time being, my files remain in the Vista partition - call me old fashioned, but this is a pilot test, not a gung-ho let’s-throw-away-the-key epiphany. So I need to be able to access them not only in Linux (easy enough, the partition is mounted by default in /media), but also from inside the XP virtual machine. For this I have installed Samba, which enables files from the host PC to be accessed as if they were on a network drive. Hence, by the way, the need for the “-smb” option in the kvm launch line above.
For my own future reference, the IP address of the network drive is the same as the gateway in the XP VM. I had to go through a bit of configuration rigmarole to set up Samba - I recommend just following the standard tutorial to set up a Samba user and smb.conf file, and going from there.
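For anyone retracing my steps, the bones of it come down to a Samba user plus a small share stanza - a sketch only, with a made-up share name and my own user and mount point, so adjust to suit:
$ sudo smbpasswd -a jonno # give my Linux login a Samba password
$ sudo tee -a /etc/samba/smb.conf <<'EOF'
[vista]
path = /media/sda1
valid users = jonno
read only = no
EOF
$ sudo /etc/init.d/samba restart # restart Samba so the new share shows up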
Modem access
This was one of those “cool - it works!” moments. Turns out that the Huawei E220 device that T-Mobile provided me with is a pretty standard piece of kit (who knew?) and that the driver for it is already built into the kernel (how cool is that). Trouble is, however, that the device is at once a USB thumbdrive and a modem, so the OS doesn’t always recognise which it’s being at the time. To resolve this, some kindly fellow has written a short C program, which needs to be preceded with a modprobe command - at least I think so, but I haven’t fully worked out the order yet. This seems to work:
$ sudo modprobe usbserial vendor=0x12d1 product=0x1003
$ sudo /sbin/huaweiAktBbo
For info on the source code, run a Google on huaweiAktBbo; it will also tell you which library to include for the compiler to work (and yes, I popped it into the sbin directory myself). Another couple of tips - use dmesg to see what the latest status is, use lsusb to see whether the requisite three USB ttys have been created, and do check the lights are on on the modem.
I’m using wvdial to connect to T-Mobile in the UK, with the following /etc/wvdial.conf file:
[Dialer Defaults]
Phone = *99***1#
Username = web
Password = web
Stupid Mode = 1
Dial Command = ATDT

[Dialer hsdpa]
Modem = /dev/ttyUSB0
Baud = 460800
Init2 = ATZ
Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
ISDN = 0
Modem Type = Analog Modem
Init5 = AT+CGDCONT=1,"IP","general.t-mobile.uk"
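Put together, my pre-flight routine before dialling looks roughly like this - my own crib sheet rather than a definitive procedure:
$ dmesg | tail # did the kernel just spot the Huawei as a serial device?
$ ls /dev/ttyUSB* # expect ttyUSB0, ttyUSB1 and ttyUSB2 once the mode switch has worked
$ sudo wvdial hsdpa # dial using the [Dialer hsdpa] section above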
Random thoughts
As well as the above there are some other bits and bobs - like discovering I didn’t need vncserver to get remote access to my Linux desktop (x11vnc works just fine) - incidentally the command line is:
$ x11vnc -usepw -auth /home/jonno/.Xauthority -display :0
Still to be sorted are:
- switching between laptop and external monitor. I have configurations to run either, but I haven’t yet arrived at a point where I can switch from one to the other without any gyp. My Microsoft head says “surely this should just work,” and while I know there will be some clever setup of the xorg.conf file that does just that, the tempting option (kludge) is just to have two xorg.conf files, and to switch between them as necessary (see the sketch after this list).
- installing a decent Synaptics Touchpad driver. At the moment it’s too sensitive, which means that suddenly I will find myself typing somewhere else in the document than where I started. Shouldn’t be too hard to fix.
- installing some network monitoring software. The 3G modem may work, but it’s not very forthcoming when it comes to telling me signal strength, connection speed etc. There may be something out there or perhaps I will just write something (yeah, right).
- there remains a bug in suspend/resume - or it could be a configuration error, hard to tell. The symptom is that the screen sits there, black, and the disk light stays on, even though there’s no activity (or at least, no sound). It’s not the same as X failing to load, as ctl-alt-f1 doesn’t show the terminal window either (and ctl-alt-f7 doesn’t bring back the graphics).
- there’s also the occasional difficulty copying a file. Again I don’t want to be quick to leap onto this as something wrong with the system, as often it can be that a previously unknown fault (eg file corruption) can be revealed by trying to access it with a different platform. I’ll report back on this.
- playing multimedia. Given that I’ve handed over a slab of my disk space to install two base operating systems and run a bunch of VM’s, my MP3 collection is feeling the squeeze so I haven’t got round to installing anything here just yet.
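As for the two-xorg.conf kludge mentioned above, it would amount to no more than this - the file names are entirely my own invention and I haven’t committed to the approach yet:
$ sudo cp /etc/X11/xorg.conf.external /etc/X11/xorg.conf # going out to the big screen...
$ sudo cp /etc/X11/xorg.conf.laptop /etc/X11/xorg.conf # ...or back to the built-in panel
$ # restart X (ctrl-alt-backspace in Gutsy) after either swap to pick up the change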
Overall though, I have a working system. The Ubuntu forums have been superlatively useful, and there’s lots of other sources of help out there, which is nice to know.
I’m afraid to say however that I share the feeling that Ubuntu 7.10 (aka Gutsy Gibbon) may not have been quite as ready for prime time as it should have been. I had installed Feisty Fawn (the previous Ubuntu version, 7.04) before, and it did seem to “just work” which is of course a major factor with which to judge desktop Linux. First I tried downloading 7.10 as an upgrade, but for reasons I can’t now remember I decided to go the whole hog and replace it with a clean version. Everything seems to be there but there are a few quirks - KDE menu items showing up in Gnome for example, or the fact that every now and then my window manager seems to change - it went from Gnome to Xfce once, and another time from KDE to Gnome. Very strange (though in hindsight it might be associated with installing updates).
Having said that, it is quite usable - I just think if I was installing Ubuntu for somebody else I would default to Feisty. And I’m very, very sorry to say but the arguments “new versions should always be treated carefully” or “with Microsoft it would be worse” just don’t cut the mustard, if mainstream desktop users are being targeted. Once it makes it onto the magazine cover, it’s got to work.
More soon as I work through the other stuff above.
December 2007
12-01 – Has it been a week with Ubuntu already?
Has it been a week with Ubuntu already?
It’s been an interesting experience so far - notably, my reading and writing of blogs has suffered as I’ve been tinkering and tweaking, but I think I now have a stable environment, namely:
- Ubuntu 7.10 running Gnome
- VirtualBox for Outlook, Office and Mind Manager access
- Firefox and Thunderbird for Web and personal mail
- KDevelop for Ruby development
- gTwitter, Skype and Xeyes in the toolbar
- OpenOffice for simple word processing and looking at presentations
- Drivel for typing this
And it all works OK - well, it should, shouldn’t it? I’ve tested pretty much all the options and features that could be alternatives to the above, but for the most part they’re either not suited, or not working. Specifically, there appears to be a bug in the current release of Evolution, which is preventing me from accessing Exchange directly. I haven’t spoken to the Evolution guys but I’ve read pretty widely on this and no dice. It’s not blocking but it would be nice if it worked. I’ve also tried the gadgets tool (name eludes me) - it doesn’t work under Gnome, which for some reason I keep coming back to from KDE, don’t ask me why but it’s just simpler and cleaner. Ah, that’s why ;)
I have had inordinate problems with screen resolutions on my external display; I was also having issues with the screen freezing up for periods, but it now transpires that the latter was caused, or at least exacerbated, by the former. Newbie tip: don’t try (like I did) to hack your xorg.conf file before running the command to detect and auto-generate such a file from scratch. This worked much better - it’s all documented in the Ubuntu display howto at https://help.ubuntu.com/community/FixVideoResolutionHowto. There are issues with the display freezing in Gutsy, but I would recommend sorting this first and seeing if it resolves them.
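For the record, the regenerate-first approach amounts to something like this - from memory, so treat it as a sketch and check the howto above first:
$ sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.broken # keep the hacked-about version, just in case
$ sudo dpkg-reconfigure xserver-xorg # re-detect the hardware and write a fresh xorg.conf
$ # then restart X, and only hand-edit the new file if something is still missing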
Update: I was also having an issue with suspend/resume not working, which seems to have “gone away” now I’m running with the new xorg.conf. Spooky :)
I’ve also got to get my microphone working. I was surprised to receive a Skype call a few days ago - surprised because we don’t tend to use it that much for work any more (default action: reach for phone). I grabbed my headset and plugged it in to find that I needed to configure the ALSA device driver, and it wasn’t going to just play, so I left it. Still need to get round to that.
I also want to look at Kandy as an alternative for driving my USB 3G dongle. Apart from that, I think I’m done. It was interesting - a few days ago I went back to my Vista install for some reason, grumbling as I did about the Ubuntu display issue. When I logged in however, I did have a similar issue with recognising the display resolution etc, which made me have a bit of a rethink (conclusion: displays are tough in any OS). I’ve tried a couple of other things - for example installing a software configuration management tool for my development efforts, before remembering that it could be quite a tricky thing to deploy, and removing both it and Apache. Lesson learned - there’s such a thing as too much choice!
As a final point I had a sudden ah-ha moment as I used XP within VirtualBox. I had been worrying about what happened to my data if the virtual machine should get corrupted in some way - but then it suddenly occurred to me that everything within the computer was virtual and at risk, being converted into a string of 0’s and 1’s and processed through this sexy-looking, but ultimately deceptive Von Neumann machine. The answer: to back up the data, of course. So I have now installed SmartSync within my virtual environment, and it is doing exactly that. Whoa!
12-11 – There's something about having enough disk... for a while
There’s something about having enough disk… for a while
I had a bit of a screwdriver couple of days this weekend, building (or, in modern flat-pack parlance, assembling) a bed, and also replacing the hard drive in my Archos 340 (AV300 series) audio/video jukebox. This latter task had been a while coming, as my music collection alone now takes up 48GB - the straw that broke the camel’s back was inheriting a collection of classical CDs from a good friend. These are now digitised and the originals stowed, leaving me the listening pleasure but also causing difficulty in knowing what to store, where.
So, I finally succumbed and purchased a 160GB hard drive. There’s quite a lot of information on the Web about upgrading an Archos AV300 series - thanks guys - the one thing I didn’t know was whether it could take a 160GB, though I had read reports of success with the 120GB drives. Answer: no it can’t - I now have a 125GB partition for stuff the Archos can play, and a 35GB partition for various videos it cannot. Live and, through a number of attempts at reformatting, learn (second answer: accept the first partition size the Archos proposes, around 128GB I think).
Having then spent a slow and boring time transferring files from the RAID box to the Archos, I now have a bunch of films recorded from the TV, the aforementioned 48GB of music and our entire digital collection of family photos. I don’t know if I am now in that gadget honeymoon period (you know, when anything new seems really, really useful) but it is quite remarkable what a difference it can make to have everything in one place. There are some films, for example, that I have been meaning to watch ever since they were recorded - but now I might actually do so, given the fact they are conveniently placed on the jukebox, rather than stuck away somewhere on the server. Right now I’m listening to a bit of Dvorak on a long-haul flight - you can guarantee I couldn’t have done that without the new drive.
It takes me back to my IT manager days, when we seemed to be forever struggling against a tide of data. The answer would invariably be the same - to adopt coping strategies for as long as possible before planning in some downtime and going through a consolidation exercise. Things would be great for a while, before eventually our best-laid plans would give way to the pressures referred to by my previous boss Rob Hailstone as “the wardrobe principle.”
Perhaps the worst example of this was, quite ironically, caused by having too much storage. Sun Microsystems, in their infinite generosity, supplied a batch of 40 SS10 workstations with an equal number of - if my memory serves me correctly - external 400MB drives. At first we were daunted and gleeful in equal measure - this was free stuff, after all - but over time the disks became incorporated into the IT environment. Oracle was a hungry beast, not just because of the database sizes but the number of test instances we needed to run.
For a period there was no problem that couldn’t be solved by throwing extra disk space at it. After a while, however, the disks that had held so much promise became a burden of their own, and we had to consolidate things down again.
Still, and no doubt like things will turn out for my newly rejuvenated Archos, it was nice while it lasted.
P.S. Incidentally, a note for Archos lovers - the trick with bending back (carefully) the battery contacts, as remarked upon in a number of places on the Web, really does work to restore battery life. Thanks again!
12-11 – World Community Grid - virtual edition?
World Community Grid - virtual edition?
I’m in the process of downloading WCG (again), this time for my new environment. It occurred to me that I might be able to get more out of it by running several WCG instances as virtual machines - turns out I can, according to James Bliss. Worth a go - but I do wonder whether I’m helping destroy the planet with all that additional power required to drive the extra CPU cycles. Ho hum - worth the risk.
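The mechanics are straightforward enough - a minimal sketch, where the VM names are my own invention and the exact switches depend on your VirtualBox version:

    VBoxManage list vms                            # check the WCG machines are registered
    VBoxManage startvm "WCG-1" --type headless     # run each instance without a window
    VBoxManage startvm "WCG-2" --type headless

Each virtual machine just runs its own copy of the WCG client, so the only real question is how many your CPU (and conscience) will bear.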
12-12 – Can software developers be protected from themselves?
Can software developers be protected from themselves?
It’s now six weeks since RSA Europe, when I made a diary note to take a deeper look at the SAFECode forum. SAFECode stands for the Software Assurance Forum for Excellence in Code - we can be profoundly grateful that the founders didn’t try to expand out the entire acronym. It also stands for “increasing trust in information technology (IT) products and services through the advancement of proven software assurance methods” - a kind of Green Cross man of the IT world, helping software developers across the highly risky freeways of the technological world.
The SAFECode idea is to co-ordinate software best practices across vendor companies and to build in appropriate checks and balances, ensuring the resulting applications are secure (or at least, that the risks are minimised). Is it necessary? Where there’s smoke there’s fire, and to be sure, Microsoft is no longer the only target of cyber-attacks. As hackers mature into commercial operators, no longer motivated (just) by “giving it to the man,” an ever-widening pool of programs is coming under threat.
In principle, then, SAFECode is a good, worthy and valuable idea. It is by no means guaranteed to succeed, for a number of reasons. Don’t get me wrong - of course it will be a good thing to co-ordinate and share best practice. From the point of view of its longer term success there are several howevers, based around:
- Credibility. To succeed, the SAFECode forum needs to be seen as successful. This is a conundrum but it isn’t new - consider the ITIL library of systems management best practice, which has taken a good 10 years to establish itself. It may be that SAFECode by itself proves inadequate because it focuses only on security, and quickly runs into the weeds as it tries to integrate with the wider picture of software development, which is itself peppered with competing best practice, from waterfall to RUP to agile.
- Critical mass. While there are big hitters in the list (from the site: EMC Corporation, Juniper Networks, Inc., Microsoft Corporation, SAP AG, and Symantec Corp.), the number of members is not yet adequate to cause a mass adoption or understanding of the best practices it wants to espouse.
- Clarity. SAFECode can perhaps learn from the mistakes of other forums - notably, in this case, ITIL - by opening its documents to the widest possible audience. A quick glance at the publications page indicates that the organisation does not yet have much to tell people, at least in terms of best practice. The wrong thing to do from here on in would be to make any publications members-only, or indeed available only for sale. Commerciality will get in the way of SAFECode’s mission, if it isn’t scuppering it already.
- Collaboration. The technology world has come a long way since the smoke-filled rooms in which many best practice standards have been conceived. We have ridden the open source wave and now we are in the midst of a new era of collaboration, as illustrated by social networking. The fastest route to success (and I’m not always a fan) for SAFECode would be to build a Wiki, and open it up as widely as possible with appropriate editorial responsibility. While the signal-to-noise ratio would have to be managed, this would aid both visibility of the process and road-testing of the results.
- Certification process. Without some kind of certification, SAFECode members do not have to prove anything for themselves, nor would there be any kind of recourse should SAFECode practices not be kept. Certification needs to have teeth - while anyone can join the forum, only products that fulfil appropriate criteria should be marked as “SAFECode certified”, and only organisations that continue to apply the best practices should be able to maintain their member status.
In summary, then, all initiatives such as SAFECode should be applauded. However, the forum should be judged not on its existence alone, but on its ability to change how applications are written - and ultimately, on whether the risks posed by member applications are reduced. This may seem like a tall order but if SAFECode can’t provide some kind of guarantee, then it will be of little use. Not only this, but its currency will very quickly devalue, to the detriment of its founders and the credibility of their products.
12-12 – Kindle - powered by Linux
Kindle - powered by Linux
Well, I had to check and sure enough, Amazon’s new Kindle device is powered by Linux. Obvious really - and what’s equally obvious is that there will spring forth a developer community. Given that Amazon has released the tarball, it seems unlikely they’re locking things down too hard. While it may not be the prettiest kid on the block, it’ll be interesting to see what it spawns.
I did think about registering kindlehacks.com, but then I thought better :)
12-12 – Rethinking social networking in 2008
Rethinking social networking in 2008
Spooky - I was just collating some thoughts about social networking when Anne Zelenka posted half of my thought process. The power of the meme, or proof of a higher power? Most likely just coincidence, but anyway, it prompted me to throw down my thoughts before they get blogged into the past. So, here are my uncorroborated opinions and unfounded predictions for 2008:
- There will be consolidation of the social networking market. I just received a Xing email, and several Spock and Plaxo Pulse invites arrived today. The fight in the corporate space is with LinkedIn, and there can be only one. The same goes for personal social networking.
- Twitter will “vanish.” I don’t believe Twitter will exist in the same form a year from now. Most likely scenario: the company will be bought and integrated into a larger offering; alternatively it will become a messaging backbone for other services. Despite the highly vocal tweets of a few twitterati, most of the world doesn’t work that way, or that fast.
- Facebook will lose to the next generation. Let’s face it, Facebook was fun for a while, but is there really anything keeping us there? Facebook is all face and no heart, or soul - an integration platform to be written to. When something more interesting turns up, Facebook’s fickle “customers” will walk.
- The real winners will be the leaves and the trunks. A few social networking sites will become the “trunks” - consolidation hubs that enable integration between sites. A few others will specialise in “leaves” - offering customer-specific tools that suit the needs of their subscribers. I expect Microsoft to be a leaf player, not a trunk player, for example. Google will be a trunk player.
That’ll do - now taking beer-oriented bets for whether or not the above will prove true a year from now.
2008
Posts from 2008.
January 2008
01-04 – Why 2008 for enterprise identity management?
Why 2008 for enterprise identity management?
Like many people I suspect, I have struggled to get my head round identity management. This is less to do, I suspect, with the nature of the thing itself (great intro here, and I’d recommend Neil M’s reports on the subject), and more with the fact that there’s so much going on, in so many domains. The concept of identity itself is a nebulous beast, stretching from personal identity (yup, me, got that one) to corporate identity (aka managing and provisioning roles and access rights) and even more broadly, to that bar conversation – “every person, thing, asset etc can have an identity” – which can very quickly unravel into a flight of fancy.
Identity is a hot topic these days of course, what with incidents like the loss of all those records from the HMRC punting identity fraud into the public eye. Examples are legion of identities being stolen, misused or otherwise abused – it’s perhaps surprising that incidents such as Goo-do-no-evil-gle and the Scoble Facebook hack have taken so long to materialise. While none of these examples are particularly relevant to the concepts being espoused by corporate identity management, one nonetheless stimulates interest in the other. There are overlaps of course - the hapless employee who lost the HMRC disks could have been deemed too dim to warrant access to the disks in the first place, but this thought process is in a different compartment to thinking about the risks caused by offering up our kids and (indeed) our bank details to all and sundry.
The issue for corporate technology sellers and buyers alike is that while the subject of identity may no longer leave people glazing over at the slightest mention, conversations can munge all of the above issues into a convoluted glob, incorporating on one hand worries about the protection of personal information, and on the other practicalities around ensuring corporate information and systems are only accessed by those who have been granted access. Given that this industry thrives on three letter acronyms, perhaps we need a couple of new ones - “Personal Information Protection” for the former and “Enterprise Identity Management” for the latter. Thus, EIM could have been used to support PIP, in the HMRC case.
Taking just the corporate, “EIM” side of things, this looks to be an interesting year. The last couple of years have seen a number of acquisitions and product announcements in this space from the larger management vendors, notably CA, Oracle, IBM, BMC, Sun and Novell: the most recent step has been to bring in roles-based management and directory integration. There have been a number of challenges along the way, some of which remain – for example the architectural decision of whether a database or a directory is sufficiently scalable to serve up identity-related information at the required level of granularity; meanwhile a variety of standards are being put in place. Catalysed by the more general, populist buzz, all of these things put together should yield more general acceptance, and resulting deployment of identity management solutions.
I should admit to a level of personal interest here, in what amounts to “the greater good”. While I view the HMRC incident with disappointment, I don’t subscribe to the headline-grabbing faux-abhorrence that some press have expressed, and I certainly don’t believe any one person should carry the can. Given that the problem is indeed systemic (as I believe it is), and if we can also agree that such a thing could have happened in most organisations, then we require a systemic approach to solving it. Taken in the round, identity management can offer such an approach, underpinned by the appropriate use of technology – this is most definitely a place where technology alone cannot provide the answer, but neither can the problem be solved without it. Indeed, if the HMRC incident serves to raise awareness and adoption to the extent that other organisations do not suffer the same fate, then it will not have been without value.
01-14 – Goodbye dual boot, hello virtualisation
Goodbye dual boot, hello virtualisation
I confess, I nearly did away with Linux last week. Something was consistently going wrong and for the life of me I couldn’t work out what - the result was that, at far-too-regular intervals, my machine was hanging/locking/freezing. At first I took it as it came (a good moment to sit back and stave off the back pain) but after a few weeks it was becoming untenable. Until - finally - I stumbled across the message threads (for example) that suggested setting “pci=noacpi” in the boot script. Blow me if it doesn’t work, though I’m sure I’m missing out on all kinds of clever stuff!
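For reference, “the boot script” in my case means the kernel line in GRUB’s /boot/grub/menu.lst - something along these lines, where the kernel version and UUID are illustrative rather than gospel:

    title   Ubuntu 7.10, kernel 2.6.22-14-generic
    root    (hd0,0)
    kernel  /boot/vmlinuz-2.6.22-14-generic root=UUID=xxxx ro quiet splash pci=noacpi
    initrd  /boot/initrd.img-2.6.22-14-generic

Worth remembering that Ubuntu’s update-grub can rewrite these stanzas, so the persistent place to add the option is the commented # kopt= line further up the same file.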
As a result, I’ve decided to stick with Ubuntu Gutsy as my base operating system, for the time being. There’s a whole stack of reasons - advantages and disadvantages - and I wouldn’t advise Linux (even Ubuntu) for just anybody. I’ll get round to documenting these over the coming week or so.
This shouldn’t necessarily be construed as some massive switch from Windows to Linux. There are still things I either need, or like to do in Windows, so I am sticking with a hybrid configuration; however, as already discussed, with its smaller footprint (about 650MB of memory in active use, rather than the 1.2GB I found was required for Vista in the same scenario) Linux is the preferred base OS for virtualisation. Perhaps the biggest leap of faith is the fact I have just deleted the dual boot: I’m finding that running XP in a VirtualBox virtual machine is just as usable, and far more accessible than having to boot into a separate configuration to access Windows-based applications. It also means I’m working with just one set of files, rather than synchronising between my virtual file store and my real (though of course equally virtual) file store.
This last point was quite an epiphany for me. At the start I was concerned about what might happen, should the virtual hard disk get corrupted… until I remembered I was equally concerned by (and experienced in) real disks getting corrupted. The answer, of course, was doing a backup. Then, of course, one remembers that everything is virtual, imaginary, made up combinations of electronic signals to give us the impression of data. Phew, but in a good way.
The configuration I have now is much simpler than trying to manage dual boot - there are fewer file systems to mount, fewer apps to install and keep up to date, etc. And of course, I can access all my applications at once. On the data side, I still need to delve deeper into questions of file sharing between base and virtual machines - in principle it is quite simple (for example, using virtual network shares in VirtualBox) but I still don’t understand how things like indexing are handled, or for example what is the performance hit on very large files, if they are accessed over a pseudo-network.
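For completeness, the shared folder route I mean goes roughly like this - the folder names and paths are mine, and the exact switches have shifted between VirtualBox versions:

    VBoxManage sharedfolder add "WinXP" --name documents --hostpath /home/jon/Documents

Then, inside the XP guest (with Guest Additions installed), the share appears under the pseudo-network name \\vboxsvr, so it can be mapped in the usual Windows way:

    net use X: \\vboxsvr\documents

It’s exactly this pseudo-network hop that makes me wonder about indexing and large-file performance.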
For now at least, everything is working fine, and so I can get on with writing about technology, rather than playing with it :)
01-14 – Happy New Year!
Happy New Year!
Never reconfigure your computer before Christmas, or you won’t post a blog until the third week in January. So goes the adage - and well, look, it’s true!
More to follow :)
01-14 – Never trust a man in a shell suit
Never trust a man in a shell suit
I was being a bit slow this morning - in more ways than one, as Liz and I headed off on our morning run. So, of course, I was wearing tracksuit bottoms, and at the weekend I had bought a black windproof top with a hood, which I was sporting as we headed out of the village. We were jogging past one of the outposts of the Royal Agricultural College just as a carload of ruddy-faced students drove out. Winding down the window, one of them cried - in a friendly enough way I should add - “I assume you’re running!” I responded in a suitably nondescript manner and we went our separate ways.
It was only ten minutes later when I realised the alternative - horror of horrors - was that I had actually chosen to be dressed like that. With some relief the rain hit and I managed to muddy myself up enough to justify my purchase. Oh well.
01-15 – Lessons from the photo pro
Lessons from the photo pro
My Christmas present this year was a Nikon D40X camera - I’d sold all my film SLR equipment a few years previously, and I was just waiting for prices to drop, and pixels to rise to the point where it made sense. I’m not a photographer, but I do enjoy taking photos - how fortuitous that my good friend and neighbour Paul Atkinson used to run a branch of Jessops, is a seasoned pro, and also has a Nikon.
Paul took me out the other day to capture a few sunsets: the difference between what I would have taken (think: washed out and hazy) and what he showed me how to do is quite astounding. I have the latter as my screen background now - it might not win any awards, but it’s got my dog in it and it works for me.
01-16 – Bringing wireless networks into the management fold
Bringing wireless networks into the management fold
As part of the briefing cycle for Aruba’s announced acquisition of Airwave Wireless, I had a very interesting conversation with Roger Hockaday, EMEA marketing director for Aruba. In part it was about the announcement, but it quickly turned (as these things do) to a discussion of the wider picture of wireless, and indeed wired network management. “Discussing the wider picture” can sometimes mean, for analysts, expressing poorly veiled disdain at the fact that a vendor has not taken things far enough - a bit like when the triumphant person comes into the room to demonstrate, after 6 months of hard graft, that he can now juggle with 3 balls, only to be shot down by some smart alec who says, “yes, but to do it properly, you’ll need to juggle with 4.” Not that I ever would of course, and certainly not with this - because the challenge is not one that can be resolved in one go.
In this particular case, it is more like juggling with a ping pong ball, a meat hook and a chainsaw. Wireless networking protocols remain all over the place as the bubble-headed wonks of 802.11 land continue to squeeze yet more bandwidth, and indeed distance, out of some highly unreliable physics; on top of the base protocols are built a number of security and management capabilities, which are supposed to be compatible, but sometimes don’t quite manage to integrate. While all the attention has been (laudably) on driving up bandwidth and resolving compatibility issues, the black hole remains centralised management, particularly for legacy products that were not built for remote configuration and monitoring.
Aruba’s acquisition signifies both the need to centrally manage the variety of wireless switches that are out there, and the resolve to do something about it. Having not researched this specifically I don’t know if this is down to latent demand or direct customer pressure, but it makes sense that organisations which have rolled out wireless access points on a more ad-hoc basis in the past, are now seeking to integrate their operation with the rest of their network management activities. And incidentally, it may be that Cisco have been able to offer a more integrated approach for a while - but it is a rare organisation that is a wall to wall Cisco shop, so the same issue arises.
While WLAN remote management might be able to bring wireless management into the same room as wired networks, this is still a step away from bringing both onto the same console (and I don’t mean through screen scraping, or “sure, we can do SNMP traps” type conversations). It is true that wireless network configuration and monitoring has different drivers to the wired equivalent: one will largely be to support roaming end-point devices running a limited set of functions, while the other needs to consider the entire network architecture; there will be different (e.g. security) policies applied to each, etcetera. However, more integrated management tools lead to more efficient (and therefore, less costly) management.
We should therefore see this acquisition, and the impetus behind it, as a flag along the road - an indication of where we are in this work in progress. From the end-user perspective, where (as we have seen in recent studies) people care little about how they are getting their bits, just that they can get them, networks and their management tools should be able to see not just wired and switched wireless, but also “mobile” protocols such as HSDPA as part of the same, managed network architecture - particularly when things like Unified Communications really start to tip, and picocells extend such protocols into the office. Now that, if and when it comes, will be juggling.
01-17 – Flock rocks - and Microsoft is not going to take over the world. Deal with it.
Flock rocks - and Microsoft is not going to take over the world. Deal with it.
It was Danny Bradbury that first mentioned Flock to me, a couple of weeks ago when I had one of my regular little tirades against Facebook (in general, I keep these out of my blogging because I might look back and find myself blaming the tool - but if I get one more request to bite a chump, I shall scream). “I can’t cope with all the messages,” I said to Danny. “You need Flock,” he said, a statement I promptly forgot until I saw it mentioned in an article by James Governor earlier today. That’s when I saw it, not when he wrote it.
Two mentions is a strike in this game, so I promptly went off and installed the self-styled social networking browser, Flock. Simple in Ubuntu - if you’re a geek who isn’t fazed by creating a few symbolic links, editing config files, installing that pesky extra library or the (unmentioned) fact that there’s already a command in Linux called “flock”. ’Tis fortunate, I am. But I digress. I ran it up, Flock that is, and fell in love.
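For the curious, the dance went roughly like this - paths and filenames illustrative - the main trick being not to drop a “flock” symlink anywhere that shadows the existing util-linux flock command:

    sudo tar xzf flock-1.0.tar.gz -C /opt/                        # unpack the browser somewhere out of the way
    sudo ln -s /opt/flock/flock /usr/local/bin/flock-browser      # launcher symlink, deliberately not called "flock"
    which flock                                                    # should still point at the file-locking tool

Plus the usual hunting around for whichever extra library the thing complained about on first launch.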
As an analyst I need to retain a level of product independence, but phew, that’s hard to do with Flock. It integrates a whole bunch of tools - blog reader, Twitter, Flickr and Facebook feeds and a bunch more - into a Web browser, to the extent that it feels like I have an HTML rendering engine as part of the package, rather than a browser with a bunch of plugins. You’ll have to try it for yourself but for me, and from a usability perspective, I found myself an instant convert. As for the Facebook thing from Danny - indeed, the king of clutter is rendered as innocuous as Twitter.
Under the bonnet, Flock is built on top of Mozilla, just like Firefox (and I’m sure a real tech-head could tell you how the two packages align - indeed you might as well get it from the horse’s mouth). That’s about all I want to say about that. Meanwhile, we have the (unavoidably) ubiquitous, bigger picture.
I’ve been quite vocal about the lack of innovation in open source in the past. To be fair, open source wasn’t mooted to be innovative - it was from the outset planned to offer “open” (aka freely available) alternatives to proprietary software. All well and good - but a bit frustrating sometimes, when the game has seemed to be more about delivering like-for-like functions. Why redevelop what’s already there, I wondered, when there was a wealth of new potential to be tapped in the form of new functionality?
Well, innovation has been coming for a while, but it’s been building on top of the open source layer, rather than within it. Sites such as Google of course are built on the LAMP stack, as are many others: as Stephen O’Grady pointed out, MySQL is the Toyota Corolla of the Web world, offering an easy on-ramp for many a startup. From the shoebox-strewn plains of San Jose to places so esoteric we know them only as ’offshore’, developers are building on top of both open and proprietary platforms with impunity, using whichever languages make technical sense for the job in hand.
And the game is just starting to get exciting. Despite my views on Facebook, and taking into account the debate around the more insidious risks to privacy, I have to take my hat off to the guys that built a service scalable enough to accommodate the massive growth it has seen. Social networking is a work in progress - we’re going to see plenty of failures, and plenty of pundits dissing products before they’ve had a chance to find their niche. And, every now and again we’re going to see moments of pure joy when a bunch of ideas and technologies come together and create something that really does affect how we live and work, for the better. It’s too early to tell whether Flock will deliver one of those moments, but (as with every new innovation), I sincerely hope so. Flock offers a user experience which, for me, is head and shoulders above that offered by “traditional” browsers - the category occupied by both Firefox and Internet Explorer.
Which brings me to Microsoft. I’m sorry, but the idea that the company holds any kind of defensible monopoly position in this day and age is just laughable. Is anyone, seriously, still “afraid” of Microsoft’s position? Sure, it’s got a stonking portion of desktop market share - but applications like Flock are symbolic of just how tenuous Microsoft’s hold in any area could be, should the combination of functions be right in any alternative platform. This is not new - we’ve seen it already with Google, pulling the search rug out from under Microsoft’s feet. How did this happen? Because people chose to use it, and Google had the business model to leverage all those eyeballs. If we believe IT to be a truly democratising force then we should be letting users vote with their technology choices, all the while ensuring that there exist appropriate levels of governance built into international law, so nobody feels locked into any technology platform.
This latter point is important, as we know the lengths that many technology companies have gone to, to get one over on their “partners”. Microsoft’s in the list of course, but it’s by no means a list of one. We should remain vigilant - both to uncommercial acts and the underhand ways in which the law can be played to try to stifle the competition. I agree with James Governor’s view on the recent EU announcements being a waste of taxpayers’ money - “The world is changing. Documents today are not static. They flow through networks, largely enabled by a bunch of web standards. Some companies choose to go end to end Microsoft, for its “integrated innovation”. Other companies choose to go a non-Microsoft route to avoid integrated aggravation. That’s choice.” James goes on to say: “To be fair all its doing is announcing investigations made at the behest of complaints from Opera and ECIS (otherwise known as the Anyone But Microsoft lobbying club). In other words these investigations may lead nowhere.” What a pointless exercise.
For me personally, I shall continue to use non-Microsoft products as well as Microsoft products, deciding based on preference and functionality - that’s personal choice. Corporations and public bodies, large and small, are equally free to do the same, with the added factors of technical skills, administrative costs and migration challenges often keeping them with whatever’s the incumbent. It’s unlikely we’ll see that many migrations off Oracle and onto MySQL in the short term, and neither do I expect many private organisations will be in a hurry to convert all of their old documents to any form of XML, even if they needed to - which they don’t of course. That’s the nature of legacy, but it’s not where the real action is: developing new, innovative technologies and applications that really do enable us to do things differently than in the past. Whatever happens to Flock in the future, let us take from it this one lesson and let innovation be the goal, wherever it comes from.
Blogged with Flock
01-17 – Road testing the Palm Treo 500v
Road testing the Palm Treo 500v
Back at IT Forum in November, Microsoft gave me one of those snazzy new Palm devices to evaluate. It felt a little weird on the way home, given it was in the middle of the advertising blitz, to have in my bag the same thing that was appearing on the hoardings (and indeed, cleverly projected from the Heathrow Express onto the walls of the tunnel - not seen that before). Anyway, a couple of months have passed so it’s about time I wrote up my findings. Working back from the conclusion, Microsoft and Palm have indeed come out with a device that should be more acceptable to the mobile masses than previous incarnations of Windows Mobile. There’s a lot in that statement - so let’s see if we can cover it off.
Microsoft has had a bit of a rocky road since it first caught onto the megalomaniac “Windows Everywhere” idea. At the time (so we were told), Windows was going to replace, well, just about everything. Anything was possible, indeed the kool-aid powered propeller heads at Redmond thought nothing of porting core elements of Windows to portable devices and setting out their own hardware configuration to support the “new” operating system. To cut a long story short, it all went horribly wrong: the pesky competition refused to roll over and play dead in a whole number of sectors, the open source movement was unwittingly catalysed (if not spawned) and, well, it turned out to be a lot harder than first thought to develop an interrupt-driven device OS.
We’ve seen several generations of Microsoft’s mobile operating system, and several renaming strategies, leading up to “Windows Mobile”. What we haven’t seen up to now is Microsoft clicking onto the fact that mobile devices are not computers. Up to now, I say - because the Palm device does exactly that. I have been using it in parallel with, and lately instead of my Blackberry 8820. To summarise the positive findings:
- It functions like a phone. There’s a lot in this statement: notably that I could give it to an 8-year-old to make a call, and they could get on and do it as easily as any of these new-fangled devices. The keyboard keys are a bit small, feel a bit cheap and make a funny clicky noise which is slightly off-putting, but they are usable.
- It doesn’t need rebooting. Why on earth I should have to write this at all, is an indication of where Windows Mobile has come from (I would say and how far, but no-reboots should be true off the starting blocks). Still, it’s a good thing.
- It is straightforward to navigate. Straightforward-ish - but I need to add the caveat that I was looking for things from previous incarnations in a way that I probably shouldn’t have. Games, for example, are there but harder to get to; ActiveSync also needed some finding (but perhaps I shouldn’t have needed to look).
- It doesn’t lock up, crap out, put up strange messages etc, or at least it hasn’t for me. I used to have to clear out running programs on a regular basis as they would crowd each other out of the memory - perhaps this is down to more memory, or more clever management, as a user I’m not sure I care.
- It’s an acceptable size. Notice I don’t say “small” - you could put it in a jeans pocket, but you wouldn’t want to sit down. I agree with “chunky”.
To put it bluntly, it works, “does what it says on the tin” etc. The second question is how well it functions as an email device, which needs to be treated separately as it is here that Windows Mobile diverges from other platforms, in terms of how it connects to the server. Some devices allow access to POP email, and Blackberry adds to this with access to Microsoft Exchange, through the BlackBerry Enterprise Server (BES). One thing I’ve always liked about Windows Mobile is the fact that it integrates very smoothly with Exchange, reflecting exactly what I have on the server: more so than BES, which just doesn’t deal with my email the way I want it to (example: when I move a mail from my inbox to an offline folder, I want it to be deleted from the Inbox: true in Exchange, but not synchronised for some reason to the Blackberry handset. I have other examples around synchronisation). Equally, I prefer the fact that Windows Mobile presents individual inboxes, whereas the Blackberry munges them all together. So, given the fact I’d prefer a Windows Mobile device for email anyway, it was unsurprising I quite liked how this one functioned.
Returning to the handset, then, what about the negatives?
- Battery life is still, to my mind, atrocious. I need to be able to go away overnight, forget my charger cable, and still be able to make calls the next day - not unreasonable I think, but not guaranteed with this device. Sure, I could be less forgetful, but that’s hardly the point.
- No GPS. While I have been coping without this as a nice-to-have, it was very useful when I was out and about to flick over to Google Maps and get directions. There’s still the maps, but no understanding of current location. GPS should be standard issue to mobile employees.
- It’s not perfect - “deep” configuration menus aren’t that easy to navigate, it’s not always totally intuitive where to find things, and there are little niggles about all the different beeps and buzzes (why can’t these all be turned off in one go?). The same could be said for Blackberry, however.
There are other downsides, but these are more a reflection of my geeky side than anything. I was disappointed (particularly as I’d forgotten mine) to find that I couldn’t use the Palm as a Bluetooth microphone for my laptop, for example; I couldn’t install everything I wanted to (notably the freedom keyboard driver); and also, the device isn’t really designed to be used as an ultraportable computer. ’Tis a bit ironic really, given that Microsoft and Palm have clearly put such effort in, to be castigated for a design feature - but I do like to have something with a screen and keyboard big enough to be typed on. For most people however, this will not be an issue, just the opposite!
So, there we have it. In this world of iPhones and the impending Android OS from Google, there will always be plenty for the geek. I am reminded however, of colleagues, acquaintances and passing strangers I have seen man-handling eminently unsuitable, brick-like smartphones, when all they want to do is acknowledge a mail, send an SMS or simply phone someone. A Blackberry killer? No - but for such mainstream business users, who need to be able to make calls and access their corporate email simply and without a device that needs a week’s training to use, this might be just the ticket.
01-18 – Talking storage at Storage Expo
Talking storage at Storage Expo
For anyone that’s interested, here’s the page for the podcast I did at Storage Expo, talking about what was coming in the world of storage etc. Click here to download it directly. Here’s the intro:
“There is no argument that the complexity and volume of business information is increasing exponentially, and hence the rapid evolution of the supporting storage technology. Tools such as data de-duplication, virtualisation and thin provisioning are moving with great speed from the drawing board to mass adoption. In this podcast, Jon Collins, Service Director, Freeform Dynamics, looks at the latest technologies entering the storage market, and predicts trends for the medium term.”
01-30 – Introducing the Buzz: assessing IT vendor mindshare
Introducing the Buzz: assessing IT vendor mindshare
In the IT industry, that most oxymoronic of industries, founded on the hard logic that characterises computing, procurement decisions shouldn’t be about the whims and desires of our all-too-malleable race. But, in the words of an agent in the Matrix, we are “only human” - our judgments are coloured by an ever-shifting set of perspectives taken from our peers, from what we read, from the assumptions we make. And they have to be. Things are changing too fast, as genuine technology advances combine with the leapfrogging tendencies of IT vendors, whisked into a flurry by product marketing and hype. The result: an unending stream of ‘new and improved’ delivered with the suggestion that last month’s investments are now ‘old and inferior’.
Within this ongoing process of distillation, there is a smoke-filled chasm of grey between genuine perspective and uncorroborated opinion. As technology buyers, we continue to develop business cases and conduct our due diligence, but our views, and indeed the very process we go through, will be affected by what’s on offer, and who’s providing it. “Nobody ever got sacked for buying IBM” is the old adage, and IT vendors are forever looking to occupy such a number one spot in their particular markets. The ultimate decision on major procurements may be with the CIO, but influence registers all the way down the stack: in certain geographies the views of front line engineers are courted as part of the due diligence process, and elsewhere, IT operatives have the power to block procurement decisions, or render the resulting deployments next to useless.
So, what drives such opinions, and how much of an impact do they have? We call the overall perception ‘the buzz’, but it’s not always the case that to be front of mind is to be well considered. Freeform Dynamics partnered with The Register to produce The Buzz Report, collating readership views in such areas as leadership and culture. So, for example, Microsoft may top the list when it comes to mindshare, but it is VMware that tops the leadership and culture charts. Meanwhile, there is only one thing worse than being talked about - not being talked about. Languishing at the bottom of the pile are companies like CA, Nortel and SAP, either because they are less well thought of, or quite simply, because their products just aren’t sexy enough to give them a higher profile.
Such information should be handled with care, of course. While Apple may be the current darling of the desktop, it is unlikely in the extreme that we’ll all be taking our laptops out of manila envelopes any time soon. Neither should we read too much into low mindshare, particularly for companies that are doing very well indeed within their respective niches. However, the buzz does offer us some deeper insight into the industry itself: while market share statistics help us understand how successful vendors have been in the past, it is the buzz that gives us an indication of how successful they might become in the future. For technology pundits as well as IT buyers, indeed for the whole industry, nobody wants to be betting on the wrong horse.
Tags: NFIT
February 2008
02-07 – On police paperwork and technology promise
On police paperwork and technology promise
As I was driving to the National Motorcycle Museum to give a presentation on archiving this morning, I listened to a constable from a Welsh police force on Radio 4, backing up the soon-to-be-announced (or has it been announced already?) statement that there was too much bureaucracy in the UK police force. For anyone who has difficulty (as I do) spelling bureaucracy, it is helpful to remember that the first part is “bureau”, i.e. “desk” - which is exactly where our local bobbies seem to spend much of their time.
The constable was explaining the overheads involved in stopping a group of individuals and asking whether or not they had any knowledge of an incident (say, a punch-up round the corner on a weekend evening). Each conversation needs to be registered on a form, which takes 10 minutes to fill in - one can only imagine the number of such conversations that take place in central Cardiff on a Saturday night.
Now, of course we need to balance any statements about “overheads” with equal remarks about protecting the rights of the individuals that are stopped and questioned - but that doesn’t seem to be the point here. Rather, one is left with the feeling that, if police officers have to talk to the public for anything other than passing the time of day, there will inevitably be more time spent documenting the conversation than holding it in the first place.
This point was reinforced by an IT professional at the archiving event, who happened to be from a city-based police force. I have to ask - is this as far as technology has come - if we still need our police officers to be filling out sheets of paper in triplicate? I had a bit of a chat about the alternatives to form filling - voice recognition, for example. To support the average copper, with the average regional accent, such technologies are still not yet ready for prime time, I am informed. “Police officers need technology to just work - not to sit through hours of training the software for it to understand them,” I was told. This seems such an utterly fair statement given the 20 years of innovation we have seen since I kicked off my career - that technology should just work - so why is it not true?
The answer, I believe, lies in the gap between technology companies and systems integrators. Some of the things that end-users see as fundamental are not baked into the base software, but are left to partners to implement: an example in the voice recognition context would be the level of training required to support it. Surely there are mechanisms that can integrate with how people work today - passive voice recorders, local accent filters, document scanning etc - that could be used to make such technologies “just work” when used by people on the front line of the business?
I’m not totally despondent - but it’s situations like this that serve as a constant reminder, even with all the innovation going on, of just how far we have to go with technology before we work out how to close the gap between the wonderfully clever capabilities it offers, and the real needs of its users.
02-07 – On the influence, independence and impact of IT analysts
On the influence, independence and impact of IT analysts
I’ve been reading with interest the whole “influence 2.0” debate, as characterised by Jonny Bentwood and Duncan Brown. For “interest” read “vested interest” of course - as an analyst, I have the dubious tag of forming part of the “influencer community” - which at first glance (to me anyway) doesn’t smell too good - for “influencer” one could read “schmoozer”, “evangelist” or indeed the (slightly gangster-like) “persuader” - like I would be going and seeing some hapless organisation and convincing them that they should adopt a certain technology “if they know what’s good for them…” When influencers of any description are not doing their job as well as they could, they are at best acting like advertising hoardings, and at worst, pimps. Frankly, I’d like to be neither.
Which brings me to a parallel debate - that of “independence”. The concept of an independent IT analyst firm is a tenuous one at best. Most often, the term is used to describe smaller, “boutique” analyst companies, to differentiate them from the big guns (Gartner, Forrester and IDC). But surely these companies are independent as well - or aren’t we all in a bit of a pickle? More fundamentally, given that we are actively working within the realms of the IT industry, can we really claim to be independent of all technological influences? Of course not. By nature, all IT industry analysts share a common assumption - that information technology can add “value” (a nebulous concept) to organisations and individuals. This assumption is by no means proven - we remain, in historical terms, still on the banks of the primeval technological swamp from which we emerge - but we operate on the basis that there is more goodness to be had, the further we can get up the slope. Or perhaps we are still in the swamp, looking for the bank. But I digress.
To base one’s opinions on such an “onward and upward” assumption is very different from saying that technology offers some magical salve, to solve all ills. That elephantine businesses will be able to pirouette, and consumers will surf on electronic waves… this is of course, pure and unrepentant schlock peddled by those who want to sell their products or consultancy services. The massive difficulty for all involved in IT is caused by a lack of foresight - it is obvious now (for example) how much of a global game-changer the internet has become, or the social effects of mobile phones in both western and developing countries; what’s not so obvious is what are the next “big things”. The IT industry can be seen as a multidimensional betting shop on steroids - Californian venture capital companies and Wall Street, startups and gorilla incumbents, businesses of all sizes and in all sectors - and indeed, industry analysts - are placing their chips on whatever technologies they believe will give them the biggest return. Virtualisation, SOA, agility, social networking, compliance, green - these are all squares on a roulette table - roll up, roll up!
The IT industry is rife with agendas, and nobody working in technology today can claim any kind of independence from the core assumption above, that somehow, technology can make things better. This is as true for individuals as for companies - careers can be based on the depth of learning about specific technologies or delivery techniques (from my own experience I found it easier to get a job as a UNIX expert than as an “IT manager and all round good egg”). However, there is a world of difference between “technology can make things better” and “technology will make things better”. The can-will distinction becomes nothing to do with the technology, and everything to do with how it is selected, deployed and operated.
And so, to the third i-word in the title of this post - “impact”. Impact is a pretty hard thing to define, but a very easy thing to appreciate, particularly in IT. Most organisations are littered with the detritus of technological failure - applications that were never used or were superseded, servers and storage that were over- or under-sized, network and security devices that failed to be implemented usefully. In a conversation (I’ve mentioned this before) with a Chief Information Security Officer, he was berating security vendors who didn’t hang around for long enough after achieving a sale to actually ensure the product was configured and operated correctly. Why should they? Sales people are measured on their quarterly “number” of sales achieved, and not on whether their customers appreciated the solution once implemented. The agenda item of, “Don’t quit until the product delivers the originally stated benefits,” is often sadly lacking in the T’s and C’s.
Industry analysts, too, can be taken to task when it comes to the follow-through. Historically, analyst companies were set up to help organisations with procurement: to buy products, compare functionality, define a shortlist of suppliers and negotiate contracts. All well, good and useful, but while analysts have moved upwards into IT strategy, that procurement heritage means the analyst industry’s involvement also tails off post-deployment. James Governor has talked about Redmonk as being “make-side” rather than “sell-side” or “buy-side” - which is a great perspective in itself; it also illustrates the procurement-oriented focus of most other IT analyst firms.
Trouble is, procurement is not where the real action is, in the companies implementing technology. The procurement stage is just one in a series of stages, each of which involves multiple stakeholders (not just the CIO!). The benefits ascribed to a given product area may be correct in principle, but they will be highly dependent on the business and architectural context, and on a whole set of non-technological criteria, assumptions and activities - as a grouping, we can call such things “best practice.” Get the best practice right, and there is a far better chance of the technology achieving a positive impact for the organisation. Get it wrong, and the impact may be negative or - and this is just as bad in my books - no impact at all.
So - while it is important to be influential (no point in saying something if nobody is listening), it is equally necessary to have a positive impact on the whole of the IT delivery and operational process, not just the time spent at the shops. Analysts can’t escape the fact that much of the IT industry is driven by agendas - and terms such as “market sizing” are squarely aimed at vendors’ and their stockholders’ agendas rather than those of end-users. Meanwhile however, the big advantage of a focus on best practice is that, in the (often-strained) dialogue between organisations and their IT suppliers, “best practice” is one agenda that both sides can fundamentally agree on - the vendor really does want to see its technology being successfully used even once the salesperson has received that stonking bonus, and the business stakeholder and operator will both confirm they are happier if the technology is actually achieving what it set out to do. Promoting best practice might not be as sexy as evangelising the latest and greatest technologies (and we love gadgetry as much as the next guy) but it is far more important than any transient attention paid to whatever’s currently “hot”.
When IT industry analysts flutter round the 60-watt bulb of procurement like tropical moths, they will all be clamouring for a share of the light. When the sun comes up the next morning however, it is the people in the ongoing decision making roles that need the most help - and who, also quite frankly, are often a darned sight more interesting to talk to than those who claim to have the ear of the CIO. I’m not going to claim to be technology-independent, but I can wholeheartedly and categorically state my desire to have maximum influence on ensuring the positive impact of IT through adoption of best practice. That may be outside the remit of influence watchers today, but so be it - I can cope if they can.
02-11 – Quick take: P2P document sharing between platforms
Quick take: P2P document sharing between platforms
This is certainly not a formal review but I was doing some investigations into how to share files between team members, potentially on different (i.e. Windows and Linux) platforms. Groove is the peer-to-peer standard on Microsoft Windows, but there are free alternatives - having tried a few, Collanos Workplace seems to be the one that has remained on my desktops. It passes the test of “just working” (though I seem to remember it did need some Linux config tweaks), it’s not overcomplicated to use, etc. It does suffer a bit from some of the same assumptions as Groove - workspaces lack the ability to sync with local folders for example, which means backups in the traditional sense are difficult (and no, p2p doesn’t mean no backups necessary - if you screw up on one peer, you screw them all.) But overall, it’s as good a place as any to start for offline remote team co-ordination.
Apologies - I can’t find the original list of products I was trialling, but there are plenty of them. I do remember that the competing platform Collaber had firewall issues, so no dice.
Incidentally, for Windows document sharing, Foldershare (now acquired by Microsoft and integrated into Windows Live) does appear to be a very useful little tool. It does enable any normal folder to be just “shared”, simply and effectively, so it’s worth a look. Equally incidentally, the caveat with all such products is they can tend to lack scalability - if you want to put tens of megabytes into the folder at a time, remember how much available bandwidth you have, and how many peers you’re sharing with.
02-13 – Sun buys VirtualBox.
Sun buys VirtualBox.
Sheesh. Does this mean I’m a Sun software user now? And all those years I said they didn’t “get” software. Kind of ironic, that last night I picked up a copy of StarOffice for 20 quid from Staples. Two strikes, I’m out (but watch this space for a comparison of StarOffice, OpenOffice and Symphony)
02-14 – Avoiding motherhood and apple pie
Avoiding motherhood and apple pie
One of the dangers in my line of work is to talk about common sense as if it really existed. For example: it is easy to find your car keys: simply decide on a place for them, and make sure you always put them in that place whenever you enter the house. It is indisputable that having a place for your car keys is a good idea. However, as we all know, we are only human, and there is no guarantee that we will always remember the most straightforward of advice.
I am sure it is the same when advising on any topic, though my experience is limited to information technology. Again, it is too easy to offer the most clearly sensible of counsel: know what you have; be engaged with the business; understand the risks. Such principles are absolutely sound, indeed, they are common sense. Once again, however, we are only human: unless I missed something along the way, I believe that most of our organisations are still staffed entirely by human beings, with all of their foibles. It’s not that common sense isn’t sensible; more that it completely fails to take this astoundingly obvious factor into account.
Statements such as these can very quickly turn into platitudes if left unqualified: motherhood and apple pie. So, what to do? All that is required is a slight difference in perspective. A statement such as, “the IT department must engage with the business,” is often seen as a pre-requisite for success. In reality however, it is an aspirational goal, to be achieved to a greater or lesser extent. As aspirational goals, we can see such statements as they are: we can also see the challenges that are faced in achieving them – like, in this case, the political issues, the general lack of trust, the uncertainty about who exactly in the business IT should be engaging with.
It is no simple rhetorical difference. By seeing the motherhood as aspirational and focusing on the challenges, we can also focus attention on finding ways in which the challenges can be overcome - if indeed, the return really does justify the effort involved. Sometimes the status quo, while imperfect, may be the least worst option: perhaps, for example, the coat pocket is the right place to look for the car keys. To finish with yet another trite platitude, if it ain’t broke, don’t fix it.
02-14 – In praise of voice recognition, with all the caveats.
In praise of voice recognition, with all the caveats.
I really like voice recognition. There, I’ve said it. I get a nasty feeling that such sentiments will set me up as either non-independent, or a geeky laughing stock, or worse. But - it works for me: I dictated this post for example. With voice recognition at the moment however, comes a list of caveats as long as your arm:
- the only technology I’ve found that really cuts the mustard is Dragon NaturallySpeaking, from Nuance. This is less a product endorsement, and more a statement on the fact that IBM ViaVoice and the Microsoft offerings just don’t seem to cut the mustard - at least they didn’t last time I tried.
- you have to train it. No instant gratification with voice recognition
- results are not perfect - I had to edit the post above afterwards for example
- my experiences with voice-enabled phone directories have been, to be fair, mixed to poor.
All the same, I can see it having a place in our technological future - as a feature, not as a product. We’re already (with bluetooth headsets) over the idea of people talking to themselves inanely, so that shouldn’t be a barrier. Well, we’ll see - I’ve just ordered an OQO PC (a week ago - the first one had to go back because of a power supply fault) and the plan is to test it out as a voice rec device. As with everything, keep watching this space.
02-14 – Martin Brampton and Mambo
Martin Brampton and Mambo
Well, I didn’t know that. Martin was my old boss (or one of ’em) at Bloor Research - and it transpires that he has also been a major developer on the Mambo open source CMS project. Martin was a very wise and pragmatic analyst, but I didn’t know he was an active programmer - which is more than I can say for myself! He’s now developing another CMS called Aliro. Hats off to you, Martin - I might well throw it into the pot when I take a look at Drupal.
02-14 – What's all this guff about talent management?
What’s all this guff about talent management?
This is as much a diary entry as any, as I don’t have time to write a full post. But tell me - weren’t businesses always about hiring, promoting and retaining talented individuals? Or did I miss something?
Incidentally there’s a great piece by the FT’s Lucy Kellaway here, on management speak. Oh, for simple language in all walks of life.
02-14 – Why another blog?
Why another blog?
Just in case anyone actually reads this and thinks - “but isn’t he already posting here, and here?” - this blog is an experiment. In finding a voice that flows more naturally than article-oriented blogs; in self-indulgence; and in the fact that I liked the URL and wanted to register it. I wanted to have a go at the kind of stream-of-consciousness blogging that some of my esteemed colleagues practise, out of curiosity as much as anything. It will concern itself more with tech issues and my professional life, for what it’s worth.
If you found yourself here but don’t know how or why, move along, nothing to see.
02-14 – You came all the way down here, just to do this?
You came all the way down here, just to do this?
An interesting remark made by one of the attendees at Tuesday night’s BCS meeting in Plymouth, where I was presenting on security topics. Yes indeed, I went all the way down there, just for that - but it was well worth it. While we would prefer not to live in a vacuum, sometimes it can be a bit isolating spending all of one’s time in a cave number crunching, or indeed, off at vendor events being dunked in the vat of marchitecture. So, I take whatever opportunities I can find to talk to real people about real IT problems. It does more than keep me grounded - it gives me a reason to keep doing it.
For much the same reasons I shall be presenting at Business Continuity Expo and Infosec in the not so distant future.
02-15 – Declarative living and disclosure
Declarative living and disclosure
There’s been a lot of chat on the AR back-channels about IT analyst independence. One thing I have wondered, however, is how difficult it is to keep up appearances when using such microblogging facilities as Twitter. A bit like, sure, you can fudge your musical preferences with last.fm, but is it really worth it? Similarly, you’d have to be pretty sad to use a made-up persona with Twitter, particularly if (as I am) you use it to connect with people who know you.
02-15 – How was your week?
How was your week?
For information, mine was pretty good. Had some great conversations - particularly with David Rossiter, Ian Murphy and Michael Hyatt, done some good business and generally got on top of things. I’ve been thinking a lot about security, power over ethernet and IT-business alignment, as well as IT analyst independence and the general state of the industry. Somehow, it all ties together.
02-15 – Robin's must-read AR series
Robin’s must-read AR series
Just read ’em. You know it makes sense. Despite little of the analyst “industry” having much of that.
How To Deal With Analysts: #1 The Pre-Briefing
How To Deal With Analysts: #2 AR or PR?
How To Deal With Analysts: #3 Know The Analysts
How To Deal With Analysts: #4 What Do Analysts Do?
How To Deal With Analysts: #5 Analyst Research
How To Deal With Analysts: #6 The Gartner Gorilla
02-15 – The OQO is here #2. Phooey.
The OQO is here #2. Phooey.
A week or so ago I received an OQO, bought to test voice recognition a bit more. The power supply was bust… so I sent it back. On this one, the battery appears to be duff. Cursory googling reveals I’m not the first with power issues. Please, Expansys, don’t make me send the whole thing back again or I may not want to get the replacement. Well, I will, but patience is wearing thin.
Looks nice though.
02-16 – YouTube fame from the Cisco analyst event
YouTube fame from the Cisco analyst event
Look Mum, I’m on telly again… rather unnerving how the YouTube site says “Jon Collins of Freeform Dynamics offers his ass…”
02-17 – This is a dictated posting
This is a dictated posting
Goodness gracious me, I am trying to use this computer without pressing the key is. It still needs to learn my voice but not a bad start
I am going to trying to publish now.
02-17 – Typed from the OQO
Typed from the OQO
phew - the advice given about kick-starting the OQO battery worked. Now installing: just about everything…
… actually, as an update I have only installed Windows Live Writer, Skype and Dragon Naturally Speaking. Will no doubt get round to Office soon.
02-18 – Can Power over Ethernet make networks greener?
Can Power over Ethernet make networks greener?
It is always dangerous to speculate, particularly in this industry, which sometimes seems to be more founded on speculation than practical reality. Consider, for example, Power over Ethernet (PoE) - essentially offering a way of delivering power through an Ethernet cable. Today, there are a multitude of different devices that can be attached to a network – WiFi repeaters, video cameras and so on – whose location may not be near a power socket. It makes sense, therefore, that the wire used to connect the device to the network should also be the wire supplying the power.
Where PoE has really come into its own is with VoIP phones – telephone handsets that use a network-based infrastructure rather than a traditional PBX. Voice over IP handsets are exactly the kind of devices that can benefit from power over the net, just as old-fashioned analogue handsets are powered by the PBX. The alternative is to have a transformer next to every phone, which occupies a socket and is one more thing to go wrong.
The downside of PoE is, of course, in the “P”. I’ve written before about how hard it is for hardware vendors in general, and networking vendors in particular, to claim any sort of green credentials for their equipment. The fact that PoE is delivering power makes it a bit of an anathema to green thinking – particularly as the latest iteration of the standard enables more power, not less, to be delivered over the Ethernet ports. According to the marketing, such power increases are required to support the increasing complexity of VoIP handsets. Colour screens, bigger processors, more memory – all of these things will take their toll and become more of a draw on the corporate power supply. That’s all very well, but it’s not very green, is it?
On the surface, then, Power over Ethernet can hardly be held up as a poster child for green IT. That’s not necessarily the end of the story, however. Let’s consider some of the plans, and likely developments in the PoE space: not least that it may well become built into switches by default, rather than as an exception. From a systems architecture (and indeed, from a manufacturing) perspective, there is little difference between powered and unpowered Ethernet ports. One of the larger network vendors told me that the chances were most of their switches would build in PoE to all ports, at some point in the future.
In principle, that’s still not very green – but there’s more. There are no concrete examples yet, but vendors are also talking about incorporating power regulation directly into network switches: put simply, enabling the switch to regulate supply according to demand. It is not beyond the realms of possibility to imagine the automatic power-down of devices outside certain hours, or indeed, when no data signals were detected (pretty obvious for IP phones, for example). To take this one step further, it is within the realms of possibility to produce handsets that require only a trickle current when in standby mode – and which could signal their requirements to the switch.
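To put some numbers on that thought - and these are entirely illustrative assumptions, not figures from any vendor datasheet - here is a minimal sketch of the saving a demand-aware switch might make by dropping idle handset ports to a trickle supply outside office hours:

```python
# Entirely illustrative figures - not taken from any real switch or handset datasheet.
ACTIVE_W = 7.0      # assumed per-port draw for a powered VoIP handset, in watts
STANDBY_W = 1.0     # assumed trickle supply for an idle handset
PORTS = 48          # ports on a hypothetical wiring-closet switch
OFFICE_HOURS = 10   # hours per day the handsets actually need full power

def daily_energy_kwh(day_watts, night_watts, ports, office_hours):
    """Energy drawn per day if ports run at day_watts in office hours, night_watts otherwise."""
    active = day_watts * ports * office_hours
    idle = night_watts * ports * (24 - office_hours)
    return (active + idle) / 1000.0

always_on = daily_energy_kwh(ACTIVE_W, ACTIVE_W, PORTS, OFFICE_HOURS)
regulated = daily_energy_kwh(ACTIVE_W, STANDBY_W, PORTS, OFFICE_HOURS)
print(f"Always on: {always_on:.1f} kWh/day; demand-regulated: {regulated:.1f} kWh/day")
```

On these made-up numbers the regulated switch draws roughly half the energy of the always-on one - the exact figure matters far less than the principle that the switch, not the handset, is the natural place to make the decision.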
Taking such thoughts to their logical extreme, would it be possible to furnish an entire building with a highly regulated, low-voltage, direct current power circuit based on flood wiring (that is, the networking sockets on the wall)? In principle yes – though indeed, there are a number of hoops to be jumped through first. Not only are there the technological hurdles such as the ones above, but also some basic truths, such as the fact that most network wall sockets are not actually enabled: they may connect to a patch panel somewhere, but this will not necessarily be connected to a switch.
All the same, while it may not yet be possible, there is certainly potential. Such a circuit might, for example, be able to replace the currently obligatory raft of telephone and PDA chargers that litter our offices – indeed, I discussed such a thing with one of the senior guys at network wiring specialist CommScope (who brought up the “not-all-ports-are-wired” issue – thanks Ian). Perhaps it might never happen, but it is often only in hindsight that we understand how technologies are to be used: in this case there has already been a precedent set with the charging potential of USB. Why not the same with the network? Such an infrastructure would be able to support a broader range of devices, far more straightforwardly than relying on the mains: as my colleague Tony Lock has pointed out, consider the efforts of the thin client vendors such as Wyse, who are bringing out devices with power requirements small enough to be powered by PoE alone.
Indeed, it can be dangerous to speculate. But equally, just as many technologies also have a downside, so there may be some upsides of PoE we are yet to experience. Just perhaps, and even taking into account the cost of manufacture, Power over Ethernet might just offer an opportunity for networking to demonstrate its green credentials at last.
02-20 – Should we be using computers to heat our own houses?
Should we be using computers to heat our own houses?
A random thought, prompted by a discussion with APC a few years ago. I was surprised to discover (having clearly been a poor student in O Level Physics) that the amount of heat output by a rack of processors, storage etc was exactly equivalent to the amount of power that went in. I know, it’s so obvious it hurts. More recently, there are plenty of stories of office blocks being heated using computer equipment. The question - as I sit in a relatively warm room, no doubt due to the two computers pumping out hot air right now - is whether such a strategy could also be adopted by the “connected home”?
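As a back-of-envelope illustration (the figures below are assumptions for the sake of the arithmetic, not measurements of my own machines), the comparison really comes down to watts:

```python
# Illustrative, assumed figures - essentially all the electrical power a computer
# draws ends up as heat in the room, so a 200 W PC is in effect a 200 W heater.
pc_draw_w = 200      # assumed draw of one desktop PC
pcs_in_room = 2
radiator_w = 1500    # a common setting on a plug-in oil-filled radiator

heat_from_pcs = pc_draw_w * pcs_in_room
share = 100 * heat_from_pcs / radiator_w
print(f"The PCs contribute {heat_from_pcs} W of heat - "
      f"about {share:.0f}% of a {radiator_w} W radiator")
```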
Which begs the next question - which is the more efficient heating device - the computer or the oil-fired radiator - and why? It would be funny if, at some point in the future, processor cycles were seen as a knock-on benefit of our silicon-based wall heaters…
02-20 – Twits and Faces
Twits and Faces
Thanks to Flock I’ve just about got on top of Facebook, and I’m appreciating Twitter more. I find the latter useful and fun, which is more than I can say for the former (on either count). Proof if any was needed, that different people need different tools. Well, duh.
02-21 – On press releases and ambulance chasing
On press releases and ambulance chasing
A while back, I remember seeing a sketch by Eddie Izzard. The detail eludes me but roughly speaking it covered the cyclic nature of being cool. One could progress from totally uncool, to slightly cool, to cool, to - put one matchstick in the corner of the mouth - very cool, to - put another matchstick in the other corner - totally uncool again.
So it is with technology-related PR, and nowhere is this more starkly illustrated than in the press releases associated with IT security. I have written about how hard it can be to incite a sometimes apathetic audience into action about very real threats; equally, many IT managers will agree how difficult it can be to get funding for security-related purchases. IT security companies have a vested interest in both of these issues: they are obviously not working altruistically. However, in my experience the majority nonetheless do want to deliver value to their clients.
Such desires may be reflected in IT security PR, which often needs not only to explain what a company does, but also why it matters. Frankly, when a “bad thing” is reported in the media it can be a gift for any company that offers products in that area – but what to do when there is no bad news to piggyback on? The answer is to put out awareness-raising press releases, to augment the more standard ‘customer win’, ‘expands in Europe’, ‘new partnership’ fodder. It is here, just as with Eddie Izzard’s sketch, that we find the line which should probably not be crossed.
What are the different kinds of press releases? I would grade them into four categories:
· Best practice activity. A vendor may have put together a set of guidelines explaining how to deal with an issue. While it is a fair assumption that it may reference their product or service, it may also contain some sound advice. Press releases saying that a vendor has documented some best practice are little more than treading water in PR terms, but they are innocuous enough.
· Publicising research findings. A security vendor may conduct a study to highlight the scale of a given problem. This is useful when although the area is known about, there is general complacency that the issue has already been dealt with, or that it only happens to other people. Indeed, this is often the kind of activity that we get called in to help with – anonymous surveys may be the best way to talk about an issue that nobody is supposed to have.
· General awareness raising. These tend to be more educational, to highlight that a problem or threat really does exist. A good example of this would be PR surrounding man in the middle attacks, which are a valid candidate for awareness raising. The only downside is that sometimes such press releases assume the audience knows what is being talked about, which is more than a little counterproductive.
· Publicising specific examples of where things have gone wrong. This is probably the worst kind of awareness raising press release. At best, it draws attention to an example of where the threat has been realised, or malpractice has been found in that, “I told you so,” kind of way. At worst, it can only be construed as ambulance chasing, using some unfortunate soul who has found themselves wanting, and attempting to bask in the reflected publicity.
Don’t get me wrong. In general, I like receiving press releases. I may not read all of them, end to end, but I am not embarrassed to admit that I cannot keep on top of everything that is going on, all the time. So, if I am told about a threat that I did not know existed, or indeed about a product which in some way can resolve that threat, I can add this to my catalogue of knowledge. Equally, however, I make no bones about the fact that I detest ‘ambulance chasing’ press releases. While I concede that it can be useful to use such incidents as examples, they should be used as no more than a passing mention to support any of the other kinds of awareness raising. Consider the difference in the following two statements:
· “The HMFE were foolish, and should get their act together,” said Charlie Farley, vice president of security firm Ultrasecurix. “By using technologies such as ours, it would never have happened in the first place.”
· “Ultrasecurix would like to announce the latest iteration of our product,” says Charlie Farley. “It has been redesigned from the ground up to deal with the latest generation of threats.” The many features include… which enable comprehensive protection. “Situations such as those highlighted at the HMFE only serve to highlight how things are changing, and the need to stay vigilant.”
OK, the latter requires the company to have actually done something, which should maybe be the prerequisite in the first place. If, however, you feel the need to put out awareness raising press releases, remember the first three kinds before settling on the fourth. The bottom line is, if you can’t be constructive and add value in the first few paragraphs, then please don’t bother at all.
02-21 – Quick take - Microsoft releases binary document formats under OSP
Quick take - Microsoft releases binary document formats under OSP
Phew - and about time too!
02-22 – Xobni, like Flock, also rocks
Xobni, like Flock, also rocks
Xobni is the Nelson Email Organizer of the social generation. Wonder what happened to NEO? Still a great product, but fails the compelling test once desktop search is installed.
02-24 – Geeking out: testing portable keyboards
Geeking out: testing portable keyboards
I wrote this review of Bluetooth and infrared keyboards a while back, and then promptly forgot to do anything about it, so here it is. A word of warning - I have had issues with the (increasingly locked down) drivers for the Freedom Keyboard. Still, while I’m loving my OQO (review to follow), I can still see a place for these things. I hope it’s useful!
02-24 – The great keyboard bake-off – Bluetooth versus Infrared
The great keyboard bake-off – Bluetooth versus Infrared
Mobile devices are growing up, to the extent that the latest breed of XDAs and iMates are, essentially, small computers with integrated phone functionality. Despite this rapid evolution in capability however, if you want to type anything significant you’ll still need a good, old fashioned keyboard.
Here we look at two keyboards for Personal Digital Assistants (PDAs) and smartphones. One is the Freedom Keyboard from freedomkeyboards.com, which uses Bluetooth; the other is an infrared keyboard from ThinkOutside. These were tested with a Dell Axim X30 (Windows Mobile 2003) and a QTek 9100 (Windows Mobile Version 5.0) – behaviour may be different for Symbian and other operating system users.
So, here goes.
Out of the box experience
Both keyboards are supplied as a keyboard, a leatherette case, a couple of triple-A batteries to power the keyboard and a CD-Rom containing the drivers. I confess I couldn’t see the point of the leatherette case, which cleverly increases the bulk beyond that which is portable in a standard jacket pocket. Perhaps it is useful for those types of people who like zipping things up before they put them in their briefcases.
It always seems a bit strange to me that drivers for a PDA device are supplied on a CD-Rom, as it sort of assumes that the purchaser will be synching with a computer. This may not be the case if, say, they are travelling, or even if (perish the thought) they just don’t have a computer! The alternative is to download the drivers from the Web, but when I went to the Freedom keyboard web site and selected the drivers page, I was told, “This option will not work correctly with frames.” Checking the box, I realise I was at the wrong site – I should have gone to www.frekey.com. Now I know, but a link from the main site would have been handy.
The two keyboards are roughly the same size when closed; indeed each can fit in the other’s leatherette case, to the extent that I can’t remember which goes in which. I think the ThinkOutside keyboard is a fraction thinner, but not enough to make any difference (unless you’re trying to squeeze it into the back pocket of your jeans). Looking externally, the ThinkOutside keyboard is in strong injection-moulded plastic, whereas the Freedom has rubberised, protective edging.
The ThinkOutside has a clasp that doubles as the PDA/phone support, and the Freedom has a simple latch. Neither is perfect on closure – the latch doesn’t seem to shut properly and the clasp can end up in the wrong position; I could imagine snapping it off when trying to force it shut. Speaking of snapping, there’s a piece of plastic on the Freedom (next to the PDA stand) that looks like it will snap off at some point, but it doesn’t affect the function so it’s not that big a deal.
Maybe, I muse, I should just be more careful. I open both keyboards, insert the supplied batteries and sit them on the desk in front of me. The ThinkOutside just opens in the right position, with an infrared “arm” that can be rotated to line it up with the infrared transceiver on the device. Less obvious was the Freedom, which required me to consult the manual (quelle horreur!) to find out I needed to get the stand out of its slot. No big one; more importantly, I find that the Freedom doesn’t quite sit properly on the desk. The central hinge is not letting the keyboard open flat, with the result that one end or the other is hovering a millimetre or two above the desk. I wonder whether this will mean I can’t type properly, which, of course, would be a disaster. More on this later.
Finally, I look at the layout. The ThinkOutside keyboard has three rows of keys, whereas the Freedom has four. Advantage Freedom: separate keys for the numbers. In general the Freedom’s layout is much, much “cleaner” than the ThinkOutside’s, which has two function keys and various key over-printings in blue and green. On the positive side, the ThinkOutside has larger keys, so it’s a trade off between clutter and key size. Again, more on this later. First let’s look at how the keyboards connect to my test PDAs.
Connecting to the device
Recall that Freedom uses Bluetooth, and ThinkOutside uses Infrared. I’m always a bit dubious about Bluetooth – too many bad experiences with poor software stacks and incompatibility between transmitters and receivers. Trying the Axim first, I confess thinking “here we go again” when I installed what I thought was the most obvious driver for Windows Mobile 2003, direct from the “Frekey” Web site, and it failed to connect to the keyboard. Freedom Input does offer an 8-meg download which contains all the drivers, past and present, and once I’d installed the one that was specifically for the Axim X30, I was up and running. I didn’t repeat the “Frekey” saga for the 9100, going straight for the Wizard.
Perhaps it serves me right for trying to be clever in the first place. No points deducted, but an extra house point goes to ThinkOutside. On linking to the Web site from either PDA, I was automatically prompted with a “compatible” driver. These downloaded easily, and I was typing before I knew it. With infrared, it was a simple case of enabling the connection. Not so good with the ThinkOutside was the assumption that the device would be less than a certain thickness. This was fine for the Axim, but not so fine for the rather chubby QTek. The latter had to be either balanced on a metal bar, or opened up, which risked pressing the QTek’s own thumb-keyboard.
The Freedom keyboard was much more versatile – not only could the stand support both devices, but also the stand could be detached so the device itself could be positioned on a surface, while the keyboard could be put wherever made typing easiest. Being able to separate the device from the keyboard was a distinct and unexpected advantage.
Both keyboards and both devices responded well to an “off-on” test, retaining the connection. The drivers cohabited quite happily as well, to the extent that I could type on one keyboard, then the other… exactly why I would need to do this is beyond me, but it was nice to know. Equally useless but interesting was the fact that I could switch devices on the ThinkOutside keyboard, a feature not possible with the point-to-point connection of Bluetooth, which had to be reset every time I switched. Note there is an auto-reconnect feature when connecting the Freedom to the same device, which is going to be the normal mode of operation for the vast majority of users.
Overall, both keyboards offered wide ranging compatibility, offering the latest drivers as well as being backwards compatible right back to the first PDAs. If I was buying a keyboard en route, and had no PC with me I would be wary of the driver availability from Freedom, but this is a small point (advice: if you’re thinking of going for the Freedom, download the drivers before you leave for the airport!).
More important, however, is how suited the keyboards are to their core function: typing. Let’s look at this.
Usability – the finger and knee tests
Typing is as typing does, as Forrest Gump once said off-camera when nobody else was listening. When all’s said and done, typing capability is the only thing that really matters in a keyboard. When it comes to the ThinkOutside and the Freedom, there’s very little between them. Both have a set of conventional, laptop-type keys, so there are no worries about the “dead cat” feel of rubberised keys.
It’s worth mentioning that there is a learning curve, in terms of being able to hit the right key at the right time – this is going to be true for any small keyboard. I found it quite easy to use both the more standard-sized keys of the ThinkOutside and the smaller keys of the Freedom, though I do have quite small fingers. Potential users with hands like bunches of bananas may well be better to go for the ThinkOutside.
As mentioned, the ThinkOutside keyboard has many more key assignments than the Freedom. The extra assignments are helpful when they’re needed, but they can be exceedingly unhelpful when they’re not, as the blues and greens distract from the white printing of the letters themselves. While the Freedom is far less cluttered, the disadvantage is a lack of obvious features - the Freedom does not show “Page Down” for example, though it is easy to work out that it’s “Function-Arrow Down”.
Keyboards of this type always seem to suffer from a small amount of character-dropping, so you need to keep an eye on the screen. Both keyboards suffered from this, particularly with the space bars – something to do with the positioning of the microswitches. I say “space bars” as each keyboard has two, one on each side of the hinge.
Speaking of hinges, let’s go back to the issue with the Freedom – that it doesn’t sit properly on a flat surface. The good news is that this didn’t prevent me from typing, which would have been a write-off. I did find it slightly irritating, but it was nothing that couldn’t be solved with a well-placed blob of Blu-tak. I do wonder whether this is an issue for all Freedom keyboards, or perhaps the hinge was stiff on the one I was sent; I didn’t try to force the hinge at all, in case I snapped it in two.
Of course, such keyboards are not always going to have the luxury of flat surfaces to sit on. When it comes to balancing keyboards on one’s lap, the Freedom keyboard wins hands down. When opened out it only has one fold, so it is much more stable as a knee-top. In addition, unlike the ThinkOutside, the Freedom doesn’t need the PDA itself to be balanced precariously on top. I found I could sit a PDA on my steering wheel (not while I was driving, of course!) and quite happily use the Freedom keyboard on my lap; this was not really possible, or particularly comfortable, with the ThinkOutside. Note however that both offer better balance than the concertina-type keyboards from a number of suppliers. These open out larger and look good, but they do require a suitable surface.
Final thoughts
Overall then, which keyboard would I go for? There is no hard and fast answer, as both can handle the basics. If I were travelling in a way that couldn’t guarantee flat surfaces, for example road tripping, trekking or camping, I would probably choose the Freedom keyboard. If I wanted to take multiple devices with me and switch from one to the other with minimal fuss, I would probably take the ThinkOutside keyboard. Whichever I took, I would be able to get the job done.
While the Freedom wins on gadget value (watch your kids’ faces light up when you type a message and it appears magically on the device – okay, I’m being a sad Dad now), I admit to a personal preference for the ThinkOutside. Now I have got used to the cluttered keys I find I can work faster with it; I also wonder about such issues as Bluetooth interference with Wireless Ethernet, Bluetooth security and potentially battery life, though I have no evidence to prove whether the latter is a valid concern: the batteries in both keyboards have been working fine for several months now. I’m also not sure about the rules around Bluetooth on planes, which seem to vary between airlines – another plus point for the ThinkOutside.
To conclude, then. Not so many years ago, infrared was seen as a technology past its sell-by date, to be replaced by “new and improved” Bluetooth. Here we are in 2006 and infrared is still holding its own: it remains a viable alternative and will no doubt continue to do so, as long as PDAs and other such devices support infrared connectivity.
There is a question whether such keyboards will be necessary at all, given the continued lowering of the entry price of laptops. All the same, there is a growing interest in keyboardless devices such as the OQO and other Ultra Mobile PCs (UMPCs). These devices often come with an integrated keypad, but it is insufficient for any serious typing. For now, keyboards such as the ThinkOutside and the Freedom Keyboard offer a relatively cheap way of turning a handheld PDA or smartphone into a portable word processing device. Not everybody is going to want to stare at such a small screen for too long, but at a tenth of the cost of the average laptop, there are plenty of reasons to invest in one.
02-28 – Goodness gracious!
Goodness gracious!
It’s been a while. But I’m back. And there will be books. Lots of books.
Incidentally, this made me laugh - you mean, music reviewers don’t always listen to the whole thing? Shurely you can’t be sherioush ;)
02-29 – 10 things I like about the OQO Ultra Mobile PC (and a few I don't)
10 things I like about the OQO Ultra Mobile PC (and a few I don’t)
I’ve been road testing my new acquisition - the OQO Model 01+ UMPC running Windows Tablet. I’ve been hankering after one of these for a while, but it is only recently that the price has dropped to a justifiable level (340 quid + VAT from Expansys). So, what’s so good about it?
1. It really is a real Windows computer. Not a PDA, or some other device running Symbian or Linux, but a fully fledged Windows PC. This isn’t some Microsoft-hugging statement, more a simple question of broad application support, specifically for voice recognition (see 3) and mind mapping. Bluntly, the things I want to do with this device, I can.
2. I can get it out on the Tube. Indeed, I can get the OQO out just about anywhere. It is all very well checking a map on a laptop, but it is a bit of a drag having to walk the streets with a 15 inch computer screen open in front of you. Much of the challenge is logistical (see 8), but equally, the London Underground is not seen as a place for laptops - journeys are shorter, and the potential for theft is reputedly higher (see 7).
3. It really does work as a voice recognition Dictaphone. This was the main reason for justifying the purchase of the OQO, as a proof of concept: I am very surprised that such a capability has not been tested publicly before. It’s not perfect, but it does indeed work: I shall be writing more about this in a future post.
4. It is a tablet PC. If XP Tablet edition is installed, the benefits that apply to tablet PCs also apply to the OQO. This includes quite reasonable handwriting recognition: some people prefer writing to typing, and indeed it is a lot friendlier in meetings to have someone scribbling on a tablet than tapping away behind a laptop.
5. It really is very small. This may sound like stating the obvious, but it is true. The advantage of size is that it can be taken to places where a normal computer could not go: it can fit, for example, in a jacket pocket. Yes, you absolutely know it’s there, but it’s not half as obtrusive as a full-size laptop. So if, like me, you sometimes find yourself with that dilemma of whether or not to take a computer, for example to a meeting - then you still can, taking all your files with you.
6. It can be taken on holiday. Yes, yes, I know, it shouldn’t be necessary to take computers on holiday. However, those working in smaller companies don’t always have the luxury of choice; equally there are plenty of uses of a computer that have nothing to do with work. The convenience of the OQO means that it can be put into the bottom of the case and forgotten about until it is needed.
7. It is more surreptitious than a laptop. Because of (4) it is easier, nay possible, to put an OQO into the glove compartment of the car, and it is less of a theft-magnet in general than a fully fledged laptop. From a near distance it looks like some obscure games console.
8. It can be used standing up, or while walking. My train ride home yesterday involved an hour’s standing in a tightly packed carriage, but I was still able to finish off the day’s affairs by completing a report and closing down my email. It does require two hands to use the keyboard or pen, however. As another example, a pretty standard thing for me to do on a flight is to get back up-to-date with my e-mail. With the OQO on Tuesday, I was able to upload my e-mail as soon as my plane had landed and the seatbelt light had gone off, which for me was a real boon as I could then go straight to my car in the knowledge that all those pesky messages had been sent.
9. It can be powered by a portable battery. A couple of years ago I bought a 12V extension battery from Brookstone in the US, for the express purpose of acting as a backup power supply for my gadgets when I was out and about. The extension battery is completely inadequate for laptop use, but it can power the OQO via the latter’s own 12V adaptor input. Together with (6), this makes the OQO a much more suitable device for camping trips etc, when access to mains power may be sporadic.
10. It looks good. This is very much “last but not least” - but I did get a buzz when the usually dour security staff at Gatwick struck up a conversation about it. Having technology as a talking point doesn’t have to be limited to Mac fanboys, you know!
What’s there not to like? Well. I wouldn’t suggest the OQO as a desktop replacement - with the caveat that I have bought what is now an old model, the OQO is underpowered compared to what multicore desktops can do. Having said that, my virtualisation experiences have led me to believe in the model of smaller computers that are scaled to suit the workload, and the OQO 01+ is an adequate base for office and email use, running on XP. Even so, the screen size is a decidedly limiting factor when it comes to usability - I have found myself frowning when starting to use it, as though some part of my brain is trying to understand if the OQO is just a normal sized computer, but a little too far away.
A second issue is around power. The first OQO I was shipped had a faulty power supply, which I understand is a common fault; the battery when fully charged can power the device for up to 2 hours only, though there is a double capacity battery available (Expansys was shipping spare batteries for 20 quid each, so I bought two of these instead). Finally, a battery “feature” is that, if fully discharged, the batteries sometimes need to be plugged in for up to 24-48 hours before they will trip back into charging mode. Nice.
Having said all of that, as a proof of concept (to me) it is keeping its end up admirably. I would love to see an OQO-sized brick that could be inserted into a laptop or desktop form factor like a hard drive, and I am surprised, given its clear usefulness, that we do not see a wider audience for the OQO - I would speculate that this is because few have the luxury of two computers. From the research we conducted last year it was clear that PDAs wouldn’t be replacing PCs any time soon - as costs continue to tumble I expect to see the UMPC form factor to reach a much wider market, not to replace the laptop, but to extend the web of mobile computing still further.
02-29 – Blogging straight in at the long tail
Blogging straight in at the long tail
I was looking at my work blog stats just now. Hmm. Only 5 hits yesterday. “Is it really worth it?” I asked myself, as I scrolled down the page - to see that since I started it a couple of months ago, there’s been over 3000 reads of the site. Not in a gush, but a continuous stream - which makes it all worthwhile really. That’s 3000 more fragments of conversation that I would never have had, augmenting the many other tweets, IM’s and other online communications that themselves augment RL.
And the trend is up, always up.
02-29 – Presenting on Governance in Virtual Worlds
Presenting on Governance in Virtual Worlds
For anyone who’s interested in either topic, I’m going to be presenting on the role and impact of business governance in relation to virtual worlds, in a few weeks at the ISGIG conference in Pisa. What an irresistible topic - here’s my outline so far:
There is (currently anecdotal) evidence that immersive environments such as Second Life are losing their mainstream popularity, as potentially are such social networking sites as Facebook. All the same, together with such technologies as telepresence, the potential for such collaborative technologies is great, in terms of how they enable stronger relationships to develop, with the subsequent impact on productivity; virtual worlds also offer the opportunity to interact physically and collaboratively, for example to demonstrate a product prototype. But there are plenty of downsides – not least the potential for abuse, which is leading many corporations to ignore, if not avoid, such technologies. This presentation considers the benefits and challenges of socially enabled virtual worlds, and gives examples of where organizations are using them for corporate benefit while minimizing the governance risks and operational challenges they cause. Where are the boundaries between real and virtual worlds, and how do they interface with social technologies? What are the problems of doing business in a virtual world, and how are they affected by real world business and regulations? Also, if Second Life is indeed losing its sheen, what’s Third Life going to be like?
Unfortunately Second Life doesn’t run on the OQO 01+ but if anyone’s interested, you can contact Nathan Neumann, I’ll be in there sporadically.
March 2008
03-03 – Going back to a dual boot PC
Going back to a dual boot PC
D’y know, I feel a bit gutted about this but the screen freezes have got the better of me - and I just want to be able to use a computer without … all … the … interminable … pauses. So, I’m reinstalling XP on my primary machine, and sticking with Ubuntu Linux on there as well.
Essentially, if I want to use virtualisation in Linux, it’s got some kind of conflict on my machine with the graphics-controlling software (not sure which bit).
No, I’m not going for Vista right now.
03-04 – Computers that just work...
Computers that just work…
… and do we need faster computers anymore?
Themes I’m thinking about. Interestingly, the former points towards Microsoft and Apple, and the latter points towards open source. I think. More to follow.
03-04 – Who has time to blog?
Who has time to blog?
… was just tweeted by Carter Lusher. Dunno - can’t imagine how it gets done.
03-04 – Will Hillary divorce Bill soon?
Will Hillary divorce Bill soon?
A thought occurred to me as I read about Hillary Clinton’s “last stand” against Mr Obama. If she lost the nomination, would she then - after a suitable time - give Bill the boot for being so generally unreliable as a husband in the past?
I know, thinking out loud. But I wouldn’t be surprised.
03-09 – Pitching at briefings, and the evolving analyst space
Pitching at briefings, and the evolving analyst space
Here’s a story. When I first became an analyst for Bloor Research in 1999, I remember going to a briefing with IBM at Bedfont Lakes near Heathrow, with none other than Steve Mills. I knew he was important, but frankly, I had little idea why - I had come straight from an IT consulting background, and vendor politics were as familiar to me as fly fishing on the moon.
Equally, I had little idea what to do in the briefing. ‘Ask intelligent questions’ was about all I could work out - but I did speak to some of the analysts from other firms, a couple of which I knew from my first ever analyst trip a few weeks before - and one of whom, it would be fair to say in hindsight, knew the game inside out and was treating it with the cynicism it no doubt deserved.
“Briefings are for selling to vendors,” said this analyst, himself from quite a large analyst company. Sadly, this left me none the wiser - I’d never sold anything in my life, and I was rather more confident of my fly fishing skills. All the same, like any rookie analyst, I took the advice on board.
It was when I joined one firm as an associate on the basis that the money I earned would be proportionate to the business I brought in, that I started to think quite earnestly about pitching analyst services during briefings. From my standpoint, it was what everybody did, if they didn’t have full time sales teams.
However, when I actually tried to do it, it was horrible - and I very quickly learned a number of lessons that have changed my world view for good (as I said on Twitter, the whole experience was both ‘unsavoury and unproductive’). It was then I took an active decision never again to try pitching at briefings, which went down the same plug hole as writing follow-up emails that just happened to remark on the services we offered. Since then, I have broadened my understanding still further of what it means to be an analyst - as a result, I have thrown away all such preconceptions and learned behaviours, and decided for myself what role I want to occupy in this somewhat dubious corner of the IT industry. I’m not alone in this: Freeform Dynamics was set up on the premise that “there must be a better way” - and we have an internal mantra of “no pitching at briefings”; note however, if someone asks what services we offer, we will tell them.
Nobody’s perfect - neither do I think we should be striving for absolute perfection. However, it should go without saying that all industry analyst firms should operate ethically and transparently. Equally, should analysts be making public the times that vendors have asked them to promote a product, for cash? I’m not sure about this - because I think the whole industry has been on quite a sharp learning curve since 1999, for two reasons: first, that the dot-com bust derailed the gravy train in which such activities were (apparently) the norm, and second, that the collaborative nature of today’s Web makes it very difficult for dodgy behaviour to remain “under the table”. I’d suggest we give this interesting period a chance to play out, particularly as there remain some clear difficulties, even when individual analyst firms are operating to a high standard of ethics. Consider:
- a vendor asks an objective analyst whether the vendor can quote what the analyst has written on their blog, about the foolishness of organisations that have failed to implement a certain technological mechanism. The quote is independently made, but the vendor’s intentions are to sell the mechanism.
- a vendor’s PR company pulls a finding out of an analyst study which demonstrates why a specific technology or area would be of wide benefit.
- a body of analysts who happen to agree, individually, that a new technological area is of interest succeed, as a body, in hyping up the area - we can see this happening right now in Software as a Service.
- an analyst is asked to speak at a conference about a certain topic, and does so, objectively and independently. However the conference as a whole is sponsored by a vendor, and is geared around the markets that the vendor wishes to play in.
Phew - how do you steer through that kind of morass? Our answer, and a fundamental belief at Freeform Dynamics, is in identifying the win-win between vendor and user of technology, but that’s for another post. Meanwhile, let’s relate this back to briefings. At Freeform we recognise that in this day and age, everything we do and say is exceedingly public and rarely forgotten. So, in briefings for example, we would not give a piece of advice that contradicts what is said in one of our reports - the latter are freely available, so the advice could easily be checked. In fact, we do the opposite - as we learn from research, we strive to improve our knowledge, through reports, blogs, presentations and face to face meetings. This requires an underlying level of trust about our knowledge goals and aspirations, which would only be undermined if we took the view that briefings (for example) were a sales opportunity. Consider: “Great conversation about virtualisation we had - yeah, let’s keep talking - and I’ll be sure to keep you reminded about our sponsored seminar series!” - it just doesn’t work.
This transparency is at the heart of many of the new breed of analyst firms - ourselves, MWD, Redmonk, Disruptive Analysis to name but a few. It would be blooming difficult to be both transparent and unethical, and the good news is that we’re quite happy not putting in the additional effort it would require. There’s far too much else to be done.
03-10 – Just registered for Vollee
Just registered for Vollee
Apparently my phone passes the test, so a link is on its way. Second Life on mobile phones, in case you’re wondering.
April 2008
04-04 – Microsoft Bloat, Green and the Vista opportunity
Microsoft Bloat, Green and the Vista opportunity
Microsoft’s always going to have a hard time presenting a convincing green story for desktop computing. It’s not that the story itself is unsound: power-saving features are useful as far as they go, and Microsoft as a company is keen to be a good corporate citizen. The elephant in the room, however, may be summed up in a single, horrible word – bloat.
Microsoft’s story has been a fascinating one, one of the great success stories of the IT industry. There have been several key bets made along the way, which Messrs Gates and Ballmer have stuck to doggedly. This is not the place for a full précis of the Microsoft story, but it’s worth highlighting one of the bets: Moore’s Law, the principle (to paraphrase) that processor capabilities would continue to double ad infinitum.
In practice, this has been characterised by the long-standing truth well known by anyone who has spent the past couple of decades in the industry: that if you want to take advantage of the latest Microsoft software, you’ll have to upgrade your machine. The conversation has repeated with the same regularity as Moore’s Law itself – the bemoaning of how slow everything is running, and the wry nod from those who have seen it before.
Of course, this self-fulfilling prophecy has been of huge benefit to both Microsoft and its hardware partners – companies such as Intel. I very much doubt whether the Wintel alliance was deliberately stuffing software into the operating system just in order to shift more processor units, but one thing’s for sure – neither side was calling ‘stop’. We have also lived through the office bloatware wars, where Microsoft, Lotus and WordPerfect duked it out to see who could out-bloat the competition. (Microsoft won, as we all know)
The attitude throughout from Microsoft – and I know this very well, having asked them on various occasions – has been, “If you want to take advantage of the latest innovations, you’ll need to use the latest technology.” I remember a very public debate I had with Martin Taylor, Microsoft’s ill-fated “Get the Facts” General Manager where he told me that most desktop users wanted far more than just email and word processing. It wasn’t true then, and it isn’t true now.
And so, to Green. While Microsoft might not have been underhand in promoting the “new and improved” – it’s a technology company, after all – neither can the company claim to being particularly green. Fundamental to this is the fact that the power consumption of a device is only a small percentage of its overall carbon footprint. Bottom line: replacing or upgrading a machine undermines any benefits that can be had from ‘new’ power saving features.
What can Microsoft do about it? Well, perhaps the operating system that has been derided as the most bloated of the lot – Windows Vista – could hold the key. At the heart of Windows Vista lies a perfectly sound operating system. There are two issues however – the first is the disk space taken up by installed, never to be used apps; and the second is the memory requirements of unnecessary run-time services. It should not be beyond the ken of the bright sparks in Redmond to bring out their own tools to monitor what’s really necessary, and strip out anything that isn’t.
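Just to show how low the bar is for the first step, here is a minimal sketch - it assumes the third-party psutil package, and is emphatically not anything Microsoft ships - that simply inventories the Windows services currently running, the raw material for deciding what is really necessary:

```python
import psutil  # third-party package, Windows-only for service queries;
               # a stand-in for the kind of inventory tool Microsoft could ship itself

# Crude inventory of run-time services: list everything currently running,
# so a user (or a smarter tool) can question what is really necessary.
running = []
for svc in psutil.win_service_iter():
    info = svc.as_dict()
    if info["status"] == "running":
        running.append(info)

for info in sorted(running, key=lambda s: s["name"]):
    print(f'{info["name"]:30} {info["display_name"]}')
print(f"{len(running)} services currently running")
```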
Sounds simple, doesn’t it? Trouble is, it goes right to the heart of Microsoft’s core philosophy, and fear – that people might stop buying its software if there is insufficient “new and improved” about it. That’s a fair worry – but it’s happening anyway, as we see Microsoft having to extend support (yet again, with hastily invented acronyms no less) for Windows XP. The same principles could be applied to Microsoft Office – which has already seen a usability overhaul with 2007, now, how about a performance boost? What additional benefits can be achieved offloading tasks to Windows Live services? Etc, etc, the list goes on.
It’s a changing world we are in. While Moore’s Law may continue to apply, many organisations are finding they have more than enough processor power on their desktops to do their day to day work. If Microsoft is really serious about greening the desktop, it has an opportunity to use its position to drive some fundamental changes. The question is, does it have the strength of character to do so? The alternative may be business as usual for Microsoft, but it certainly won’t be green.
May 2008
05-08 – Farewell, Sean Body
Farewell, Sean Body
I have one of Sean’s books beside me, ‘Long Time Gone’, the autobiography of David Crosby. It’s funny - when he lent it to me it was to give me an idea of what a really good music biography could be like, one which stood out from the usual album-tour-album pack. As he was my first publisher, I thought they might all be like that - helping new authors on, spotting non-mainstream potential and working closely to ensure everything could be as good as possible.
Having been around the block a few times since, I know that Sean was pretty unique in his capacity as caring publisher/editor - a hark back to a different era, in some ways. Perhaps because his first driver wasn’t commercial (though he made an economic success of Helter Skelter Publishing, to be sure), he cared mostly about getting the good stories out there. These days, as I have all-too-often been informed, this heart has largely gone out of the publishing industry: too much of it is about achieving the quick peak of sales, getting the TV promotional slots, benefiting from the craze of celebrity that seems to pervade every aspect of modern life.
Most of all, Sean was prepared to give something a shot. Not least, me: he took a bet on whether I could write about Marillion, and it was him that convinced me to write about Rush (“What’s that - difficult second book syndrome?”). And of course, when Mike Oldfield called Helter Skelter and asked whether Sean knew anyone suitable to help him write his autobiography, wonderfully Sean put me in the frame - what a shame that, due partially to the onset of his leukemia, he never got to publish what he saw as a breakthrough opportunity.
Sean was a meticulous editor - it is only in hindsight that one can see his attention was already starting to waver, as we worked through the editorial process for Chemistry. Naturally gutted by his announced illness in December 2005, he spent the two and a half years that followed going in and out of hospital, all the while trying to get himself back to work. Perhaps he should have canned it all and looked after himself, but again, hindsight is a wonderfully convenient tool. Throughout the whole process I remained convinced that he’d pull through - he was a fighter and a triathlete, and not the sort of person not to get his way. But he didn’t, this time.
It was with rum pleasure that I saw Sean merited an obituary in the Guardian. He was one of those people who never asked for credit or fame, quietly looking to achieve his own goals. It was particularly sad in the last period that I found it difficult (though not impossible) to contact him because from the writer’s perspective I didn’t want to give him any extra hassle, though as a friend I was wanting to be around. But still, I see from others that he had some lovely people around him, which ultimately, is all any of us could ever hope for.
I remember a conversation with Sean, from when I would occasionally pop in to the book shop off Charing Cross Road, or when we’d go over to the Jazz Cafe at Foyles. We were talking about all that modern technology, and how it meant you could work anywhere in the world. “I quite fancy just taking off to an island,” he said, “I could run Helter Skelter from there, it’s just a case of being able to communicate and exchange documents and PDFs.” Sean, I see you sitting on a sun lounger sipping some gloriously colourful cocktail, overflowing with fruit and paraphernalia. And I raise my glass to you.
June 2008
06-02 – Ben's Big Madagascan Adventure
Ben’s Big Madagascan Adventure
Don’t they grow up fast - I don’t know whether to be more stunned by the fact that (son) Ben is off to Madagascar on a Scouts trip or the fact he has to be 16 to do it. Only slightly less stunning is the amount of money he needs to raise so he can go - it’s £2,500 all-in, and Ben has launched into fund raising with abandon. Car boot sales, quiz nights and sponsored walks abound.
If anyone’s interested in sponsoring Ben as he takes part in a 26 mile walk around Cheltenham, or indeed otherwise donating toward his costs please contact me directly, leave a comment or indeed, click the button.

Thanks!
06-02 – IIAR Analyst Survey and other musings
IIAR Analyst Survey and other musings
Delighted to see the results of the IIAR survey giving Freeform Dynamics such a big mention. Chuffed also to see Redmonk, MWD and David Mitchell chart so highly, good folks all. Unsurprised at Duncan’s immediate, ill-considered riposte (anyone would think he hadn’t been involved enough in the process…). Fascinated by Alan Pelz-Sharpe’s unconnected but sound piece on the IIAR site, very good. Determined to blog more, here and elsewhere. Feeling generally pretty good.
Onward and upward.
06-08 – IP Address Management - a latent need, not a market bandwagon
IP Address Management - a latent need, not a market bandwagon
It always seems quite ironic to me when I read how industry analysts are accused of ‘bigging up’ vendor offerings, when I and my peers seem to spend so much of our time resetting the expectations of over-optimistic marketeers. Indeed, without such a position, we would offer a far less useful service - on occasion I have been positively surprised that certain companies have wanted to work with us at all, given the utter trouncing we have given their products or how they are taking them, like Beanstalk Jack and his cow, ‘to market’. I should perhaps apologise (and I frequently do) for being so direct - we want people to get the best out of your technology, we really do, so we’d rather be straight with you.
As such, it can be quite a relief when something comes along that is so clearly, obviously useful to so many organisations. Like Internet Protocol (IP) address management, for example. I can’t confess to know the whole space in technical detail, but here’s the skinny from my perspective. It is a well-known fact that the number of devices that need an IP address to connect to the enterprise network, or indeed the Internet, has rapidly outstripped the original numbering standard of 32-bit addresses, which allows for a potential four thousand million addressable devices. Such things as Network Address Translation (where a local router/address server allocates IP addresses on an as-needed basis using a local subnet, and then translates between local addresses and a reduced subset of externally-visible addresses) have helped reduce the burden somewhat; as of course has the arrival of IPv6, which extends the number of addressable devices to 2^128 (a very big number).
However, a remaining issue is how to manage said pool of addresses. These days the number of devices requiring addresses has increased dramatically, notably with the arrival of Voice over IP (VoIP) handsets, which are replacing traditional, analogue telephones. From an address management perspective, the Dynamic Host Configuration Protocol (DHCP) handles the allocation of addresses within specific subnets, with the Domain Name Service (DNS) mapping names onto them – but some organisations are ending up with a large number of DNS and DHCP servers, which themselves have to be managed. The original protocols were never conceived to manage the address allocation, deallocation and reallocation process on such a scale - and they don’t facilitate the cataloguing of which address belongs to which department (Microsoft Excel is a much-used, but still inadequate, tool for this). Theoretically, organisations could of course allocate addresses statically, once and for all - but all it takes is an office move (requiring a number of devices to move from one subnet to another) and all hell breaks loose.
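To give a flavour of the bookkeeping such tools take on, here is a minimal sketch using Python’s standard ipaddress module; the subnets, owners and device names are invented for illustration, and a real product does far more (DNS and DHCP integration, auditing, delegation):

```python
import ipaddress

# Invented example subnets and their owning departments.
subnets = {
    ipaddress.ip_network("10.10.0.0/24"): "Finance",
    ipaddress.ip_network("10.10.1.0/24"): "VoIP handsets, 3rd floor",
}
allocations = {}  # address -> (department, device)

def allocate(subnet, device):
    """Hand out the next free address in a subnet and record who has it."""
    for addr in subnet.hosts():
        if addr not in allocations:
            allocations[addr] = (subnets[subnet], device)
            return addr
    raise RuntimeError(f"Subnet {subnet} is exhausted")

phone = allocate(ipaddress.ip_network("10.10.1.0/24"), "desk phone 3F-017")
print(phone, allocations[phone])
```

Trivial at this scale, of course - the point is that once there are tens of thousands of addresses, dozens of subnets and a rolling programme of office moves, this record-keeping is exactly what breaks down in a spreadsheet.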
So - IP addresses need managing, and existing mechanisms aren’t cutting the mustard. This is the breach into which are stepping organisations like BlueCat Networks (who I have spoken to), and Alcatel-Lucent, BT-DiamondIP and Crypton Computers (who I haven’t - but these chaps have) - essentially delivering management tools and distribution mechanisms that really can cope with such huge numbers of addresses and offer quite some respite to those managing the IP network. It is notable that, when I asked BlueCat whether I could speak to a customer, they jumped at the chance and before long I was speaking with Investor AB, a Swedish organisation.
On the call I learned little that was unexpected: yes, the problem existed and was real; yes, it was for the reasons I understood; and yes, the deployment of BlueCat’s address management solution had been a great help. What’s not to like, I said as we finished. And yet, I was left feeling a little puzzled at the end of the call - notably, wondering whether, by agreeing with the problem and solution, I was in some way implicated in yet another attempt to foist unnecessary technology on an unsuspecting public. Particularly in this case, where the solution itself resolves an indisputably technical problem.
But however we might like things to look, the problem does exist and so does the solution. Just as the invention of carpets required the subsequent creation of carpet cleaners, so can today’s overstretched networks benefit from address management. This won’t be a panacea for all ills - it never is, and it should go without saying that technology can never be more than a crutch for poor operational processes or bad managers. I could add a string of caveats at this point but I won’t - rather, I will acknowledge the fact that most network managers do have their heads screwed on pretty well, and defer to their ability to decide whether this would be an appropriate technology for them.
06-09 – Anyone know what this is about?
Anyone know what this is about?
You know, I’m flattered and all… but I presume this isn’t me :-)
06-09 – "That's not a product, that's a business strategy"
“That’s not a product, that’s a business strategy”
I can’t remember who said that to me a couple of weeks ago, but it’s one of my favourite phrases at the moment - it applies so well to so many things we’re dealing with right now: SOA, Identity Management, Information Management, BPM. Give it a go, and see where it sticks.
06-10 – Master Process Management? Now, there's a thought.
Master Process Management? Now, there’s a thought.
One of the most fun debates I have had in recent times was with a couple of execs from IBM and Cognos, and with a senior analyst from IDC, at last week’s Information On Demand event in The Hague. The question was innocuous enough - “what do you think is the addressable market for Business Process Management?” “That’s tricky,” I replied, “as I don’t believe that there exists a BPM market, as such.”
There followed a great deal of debate, at the end of which I remained unconvinced that there is, right now, such a thing as a BPM market. That’s not to say that BPM doesn’t exist - far from it, it is an essential facet of many technical capabilities. However it is exactly this factor that makes it very difficult to define BPM as a market.
Where can we see BPM? The ability to capture business activities and use them as a template for service delivery exists in many technologies, as indeed it has done for some time. We have for example:
- Within content management and collaboration tools, BPM is becoming an accepted term for that element of the software that manages the flow of content across different business roles.
- Within enterprise application integration, there is a clear need to understand how applications and dataflows map onto business activities. Initial releases of Microsoft Biztalk, for example, came unstuck until they built in this element.
- Within software development, a logical extension of modelling business processes as part of requirements capture is to use the models to support solution delivery.
- Enterprise applications such as SAP have long been delivered through customisable workflows, which require a level of management from a business process perspective.
- Also, BPM works hand in hand with IT and business service management from an operational perspective, such that service delivery can be monitored and supported appropriately post-deployment.
BPM is clearly a highly valued element of IT. But can it really be considered a market - and if so, how should it be defined? Should it consist of the superset of all of the above, or just the BPM element, if indeed it can be separated out in any useful way? Or should it consist only of the “pure play” BPM vendors, those which have a heritage in one of the areas above but are positioning themselves by leading with BPM?
My debating stance, which happens to still match my opinion, is, “none of the above.” We discussed the comparison with the transport industry - throwing a car, a tank and a plane into a (figurative) bucket just because they all require an engine does not mean we can define the “engine” market. So it is with BPM: there isn’t a coherent enough boundary to frame a market space.
The debate did not result in any shared epiphanies between the participants (though it’s always nice when analysts from different firms agree). For me, the thought processes didn’t stop there: a few nights’ sleep allowed things to ferment, alongside all those other things I picked up at IoD, not least a slightly flummoxed acknowledgement of the value of Master Data Management. Absolutely nothing wrong with MDM per se, but I still find it quite surprising that it took the industry so long to work out it would be useful to have a single, shared, defining structure for structured information assets.
To wit: following other conversations it occurred to me that the real pain with BPM remained how views of business activities could be shared across tools. IBM claim export/import capabilities between tools such as InfoSphere and Lotus, for example. But what is lacking is knowledge of which is the “master” view - an issue exacerbated when we consider how such information is distributed (and worse, locked in) to applications and software tools across the organisation.
Perhaps what we need, as with MDM, is Master Process Management - tools that enable representations of business activities to be catalogued independently of any application, and then translated between one application and another. The sign-off of a form in the content management system may also signal the acceptance of a new customer in the CRM system, but such information is stored in the heads of those using the tools, and delivered as a point-to-point linkage. How useful it would be (and I’m speaking from experience, having been involved in several such activities) to capture such information and relationships once, and use the “single view of business processes” to feed all of the above applications, if not more.
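To illustrate the idea (and only the idea - as noted below, no such product exists, and every name here is invented), a Master Process Management tool would hold one canonical definition of a process and translate each step into whatever event each application understands:

# Hypothetical sketch of a 'single view of business processes'
CANONICAL_PROCESSES = {
    'new_customer': ['form.signed_off', 'customer.accepted', 'account.provisioned'],
}

# Per-application adapters: canonical step -> application-specific action
ADAPTERS = {
    'content_mgmt': {'form.signed_off': 'route form to approver'},
    'crm':          {'customer.accepted': 'create customer record'},
    'billing':      {'account.provisioned': 'open billing account'},
}

def feed_applications(process_name):
    """Translate the master view of a process into per-tool actions."""
    for step in CANONICAL_PROCESSES[process_name]:
        for app, mapping in ADAPTERS.items():
            if step in mapping:
                print(f'{step} -> {app}: {mapping[step]}')

feed_applications('new_customer')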
But let’s be clear. Apart from a couple of obscure vendors (pipe up, if you’re out there), such capabilities do not currently exist. There’s no MPM Gartner Magic Quadrant or Forrester Wave, and even if there were, few if any vendors would appear at all. All the same, if we did have MPM tools, I have no doubt that plenty of end-user organisations out there would be much better off. And indeed, we would have an addressable market.
06-13 – IBM: SOA Far, SOA Good?
IBM: SOA Far, SOA Good?
It’s been a couple of months since I jumped on a plane back from IBM’s SOA Impact conference in Las Vegas. For those in the know, this conference is a re-named evolution of IBM’s WebSphere conference series, and for those less so, WebSphere is IBM’s brand associated with its enterprise-scale application server software and assorted gubbins to support transaction management, asynchronous messaging and so on – the “platform” elements of corporate applications. The 5,000 attendees comprised a fair whack of WebSphere developers and architects, and suitably impressive numbers of CIOs and other IT executives.
Why the conference is called “SOA Impact” and not anything to do with WebSphere is of course as much to do with marketing as anything. IBM has been quite forthright in pushing its SOA message, and even the very name of the conference offers one such opportunity to put SOA into the spotlight. The million dollar question is, is it right for IBM to do so? Before arriving at the answer (which, for all you skim-readers is yes, absolutely), it is probably worth a bit of background in SOA.
In this hype-ridden industry, the real trends can often be hidden underneath the layers of noise. One such trend has been the nature of software applications. For the 20-odd years I have been working in this business as a developer, quality manager, IT director, consultant and (only latterly) industry analyst, software has been moving inexorably towards a distributed model. In such a model, chunks of software (note the avoidance of buzzwords here) do what they do well, and communicate with other chunks of software as, when and however necessary.
This is a very familiar scenario to anybody that has been involved in IT over the past couple of decades. For years, organisations have been integrating their legacy systems, packaged applications, databases, rules engines, process control systems and so on using various forms of integration software. And it all works, after a fashion.
Not all of it is how we would like it to be, had we the opportunity to start afresh. And this is where the evolution comes in. All those years of deploying and integrating applications have resulted in a pretty good understanding of how things should best be done. The first, most important and blatantly obvious aspect is that things should be planned out in advance, as an architecture rather than piecemeal components. That’s not rocket science, it’s common sense.
The second aspect concerns how the software elements – the applications and packages, or other discrete units of functionality – should communicate. In this there is an element of rocket science, in that many different mechanisms have been tried over the years. The branch that has developed most strongly has its origins in “object oriented” software design techniques, which begat component-based development. It doesn’t matter so much what these things are; more important is the fact that they have evolved over many years before revealing the fundamental truth at their heart: that the best way to consider the interfaces between the software elements is in terms of the services offered by each element.
And so, we have an architecture which is service oriented: SOA. Do you see what I did there? At the risk of sounding like a broken record, this is not some invention of software vendors’ marketing departments, but a consequence of how things have happened in the past.
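As a purely illustrative sketch of that principle (the service and its operations are invented here, not taken from any particular product), the consuming code sees only the services an element offers, never how they are implemented:

from abc import ABC, abstractmethod

class CustomerService(ABC):
    """The service contract: what this software element offers to others."""
    @abstractmethod
    def lookup_customer(self, customer_id: str) -> dict: ...

    @abstractmethod
    def open_account(self, customer_id: str) -> str: ...

class LegacyCustomerSystem(CustomerService):
    """One possible provider - could equally be a packaged app or a web service."""
    def lookup_customer(self, customer_id: str) -> dict:
        return {'id': customer_id, 'name': 'A. Customer'}

    def open_account(self, customer_id: str) -> str:
        return f'account-for-{customer_id}'

def billing_process(customers: CustomerService) -> str:
    # The consumer depends only on the service interface, not the implementation
    customer = customers.lookup_customer('C042')
    return customers.open_account(customer['id'])

print(billing_process(LegacyCustomerSystem()))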
All of which has been seen as a bit of a conundrum in this fashion industry we call IT. One of the biggest issues IT vendors have had with SOA is that it is an approach, not a product: in other words, there’s nothing to sell. SOA has been confused with Web Services, and has suffered the indignity of being nothing more than a ploy to sell Enterprise Service Bus (ESB) software. But still, SOA is an inevitability – as sure as buildings need architecture, so does software.
Back at SOA Impact, then, IBM is indeed correct in promoting the architecture rather than any particular product line. However, IBM has possibly created a bit of a hurdle for itself. Were this any other industry, SOA would be taken as timeless wisdom and we would all be free to get on with more current things. But this is the industry of the new – over-hyped promises of something far better than what went before are what sell technology. Can IBM really afford to stick with SOA in the long term?
The answer is, I blooming hope so.
In marketing and PR departments and among some IT journalists, I have heard more than once the idea that SOA is in some way defunct and we need to move on (indeed, we saw this with SOA 2.0 a couple of years ago, which prompted my esteemed partner at MWD, Neil Ward-Dutton, to implore that we stop the madness). Such a standpoint is doomed however, as we are dealing with evolution not revolution: while the detail of SOA may not yet be ironed out, the concept is as sound as anything honed over the decades can be.
It does leave IBM with a bit of a conundrum, however. The company currently has its work cut out, and plenty of services upside, from promoting the SOA message. However, at its heart SOA is not particularly sexy and indeed, the better received it is, the less sexy it becomes as it ends up as part of the fabric. That doesn’t leave marketing much to play with.
All the same, the worst thing IBM could do is look for an alternative to SOA. Build on it by all means – with the caveat that it is early days, so it is important not to outpace what is still a fledgling audience. Perhaps one day SOA will become so well accepted that we shall stop talking about it. In the meantime however, of all the concepts that have been touted by this industry, SOA deserves to remain in the spotlight for some time yet.
06-25 – Presentations and events update
Presentations and events update
I was recently asked for some examples of events I have spoken at, so for the record this is what I’ve participated in so far this year:
Taking back control of IT, Webinar, 28 February 2008 (video stream - registration required)
Improving business productivity through effective content management, Webinar, 4 March 2008 (video stream - registration required)
Governance in virtual worlds, Pisa, Italy, 13-14 March 2008 (slides)
Which is more Important – Compliance, Security or Operability? (Panel Chair) - Infosec Europe, London, UK, 22-24 April 2008 (podcast)
Progressive IT, Sourcing and Architecture, Microsoft Architect Insight Conference - Windsor, UK, 28-29 April 2008 (slides/video stream - requires Windows Media Player)
How to sell virtualisation (Panel Chair), Channel Expo, Birmingham, UK, 22 May 2008
IBM Optim Internal Data Threat event, London, UK, 29 May 2008 (slides)
If you need any more information please do get in touch.
July 2008
07-17 – RSA Panel session confirmed
RSA Panel session confirmed
Just got an email through from those nice folks at RSA Conference Europe. Here’s the skinny:
Session Track: Business of Security
Session ID: BUS-207
Scheduled Date: Tuesday 28th October
Scheduled Time: 16:05 - 17:05 hrs
Session Title: Software and Security as a Service: the risks and the rewards
Session Classification: Strategic
Session Abstract: There is much buzz in the IT industry at present around Software as a Service (SaaS). As with any new trend in IT, there are a number of potential risks which need to be considered when looking at SaaS solutions – but things don’t stop there. At the same time, certain security services can also be delivered using the “as-a-service” model. This panel of security vendors and consultants considers both the risks and rewards of SaaS and security as a service, and delivers practical advice on what organizations should be thinking about today.
Moderator(s):
Jon Collins, Analyst, Freeform Dynamics
Panelist(s):
Gerhard Eschelbeck, CTO, Webroot
Eldar Tuvey, CEO, ScanSafe
David Stanley, MD EMEA, Proofpoint
07-17 – Three's a crowd, so what's four?
Three’s a crowd, so what’s four?
This must be desktop operating system geek heaven - but even as I say that I realise I’m missing out on a whole bunch of ’em. To the point, I have recently come into the possession of a MacBook Pro, which is running OSX 10.4. With that, I’ve got XP running in a (donated, thanks) VMware Fusion virtual machine - which runs like it’s native. Meanwhile, on my old Samsung laptop I’ve gone for a dual boot with Ubuntu Hardy Heron on one partition, and (also donated, thanks too) Windows Vista on the other. What of Solaris or indeed OS/2, I hear you cry.
It’s an interesting set-up. A key question is interoperability - which I define as, “Being able to do whatever I want on any platform, without seeing the joins.” I think that’s a bit different to the interoperability Microsoft keeps banging on about, which sometimes seems more about keeping the more evangelical chatterati at bay (incidentally, my suggestion was to ask the silent majority what they thought - I believe there’s far less anti-Microsoft sentiment out there than some bloggers might imply). But the world of Mac interoperability is questionable - iTunes will only recognise iPods, for example. Is it a problem? I honestly don’t know - the slickness that the fanboys love so much is a consequence of a tighter control over hardware, and no doubt software specs. Balancing such usability with interoperability is an issue we see in the large in corporate IT shops, and it is no coincidence that CIOs often talk in terms of “One throat to choke.” Thinking out loud: would ‘proprietary’ be such a bad thing, if it just worked?
But I digress. The one last thing to do is re-install LILO, then I’m done.
07-17 – Why I'm interested in Open Source
Why I’m interested in Open Source
Because it’s having a distorting effect on the rest of the industry. I’m afraid I don’t buy the argument (nor would I have to, but you get what I mean) that all software should be free, as Richard Stallman would so dearly like. Any more than I would agree that all music should be free, or indeed that my plumber should pop round tomorrow and fix the dripping bath tap. It’s a laudable goal of course, as is world peace and the nirvanic state where everyone just gets on. But it’s just not going to happen, because various elements of human nature - good and bad - won’t let it.
To me, and unlike what the “try the latest distro” Linux User cover disk would suggest, open source is far more about commoditisation than diversification. I find it hard to believe that there is a place for new operating systems which try to compete on features - as long as we build systems around the von Neumann architecture, the operating system constructs needed to support them have been around since 1969 (that’s Unix, folks), and indeed before. I don’t want to ignore z/OS on the mainframe - but let’s remember it’s precisely because the engineers of a few decades ago got so much right that they are so full of themselves now. Windows is also fine - it’s an OS which cuts the mustard, both on the desktop and on the server. But half the reason I believe that Vista tripped up was that it did not offer anything sufficiently compelling to the majority, even if its security and manageability features far surpassed those of Windows XP. Didn’t anybody tell Microsoft how hard it is to make a business case for security and manageability?
So, open source offers a commoditisation route: if something is algorithmically so straightforward now, and it’s a question of evolving it in line with the hardware, then open source offers the answer. No point in paying for something that is already done. There are several advantages: the source is openly readable, which makes it potentially more future-safe than anything proprietary. Development continues, in an evolutionary manner, and is funded and resourced across the community, which also provides a proactive support base. It’s a model which gives us the LAMP stack - that’s Linux, Apache, MySQL and whichever programming language you can think of that starts with P. And there is money to be made - but out of services, not so much the software licensing.
And here’s the kicker. When it was realised that the real money was to be made out of services, that’s what had the biggest impact on the rest of the industry. Red Hat started to rake it in due to the fact that corporations wanted to know they had the same levels of support as with their proprietary application base - a fact which triggered Microsoft’s ill-advised “Get the Facts” campaign. IBM started to recognise the role of F/OSS (free and open source software) as on-ramps onto what were at the time more enterprise-ready platforms - Linux to AIX, MySQL to DB2 and so on. And Oracle just started to buy everybody it could get away with, as it always does.
Meanwhile we have Sun, which came surprisingly late to the party. Sun’s going through an open source epiphany at the moment, which is just dandy - though I’ve been spending a lot of time thinking about just how successful they will be. Sun’s heritage with software has been dodgy to say the least - it had a good start with the Catalyst catalogue and a pretty healthy software channel back in the Eighties, but that was in the days when the hardware manufacturers called the shots. Things started going a bit ropey in the early Nineties, when a number of big software plays (developer tools and network management) started to wither on the vine. Java came and should have been Sun’s big success, but the Internet came next and took all the attention away. While Sun was being the dot in dot-com, it forgot to be anything else.
It could be argued, quite successfully I am sure (though I will not try to here), that Sun has turned to Open Source for two reasons. First, it had no other choice, as it was no longer seen as a credible player in the world of proprietary software and it had burned its bridges with the flat-rate licensing deals brought in earlier this millennium. Second, one place Sun does have a growing reputation among its own customers is in services. Today’s open source models are all about building a services revenue stream, and I wish Sun success in that.
In doing so, Sun, IBM and indeed Oracle have embraced open source and integrated it into their business models. There’s one last way of course that open source can be used, and that’s as a competitive weapon - the only major company which is yet to embrace open source in the same way is Microsoft, preferring still to approach open source from the point of view of interoperability, not as an integral part of its software platform. Personally I think this is a mistake, but - let’s be frank - what the bloody hell do I know. Microsoft’s ultimate responsibility is to maximise its shareholder value, just like the rest of the majors. I have no doubt that they have done the maths, just as the others will have done.
Which comes back to the first point. If all software should be free then that’s great, but I don’t see IBM, SAP, Oracle or HP open sourcing any of their core moneymaking platforms. With good reason, from their perspective - it’s not in their commercial interests to do so. However it is in the commercial interests of some players to knock the competition for being proprietary, even while being quite happy to retain a significant proportion of the proprietary software market for themselves. It’s a dangerous strategy - ask any of the bigger companies how they see the impact of open source on their own software base in a few years’ time, and they’d be hard pushed to give a straight answer. Fortunately for them, this industry has a very short memory; nobody will notice when they change their minds.
It’s all good fun, isn’t it? Perhaps that’s the biggest reason why I’m interested in open source: it’s not the software itself, though that appeals to my geeky side; nor particularly wanting to consider the community-driven development process, though that is a phenomenon in itself and worthy of attention. Nah - it’s watching the big guys duke it out in what is in fact a global game of paintball, with all the ducking and diving, short-lived alliances and backstabbings, and where the nature of the code matters little more than the colour of the paint.
07-19 – Totally non work related - running for charity
Totally non work related - running for charity
For anyone who might be interested (and for those who aren’t :-) ) I’m running a half marathon on October 12th. Its not my first - I did that a couple of months ago, baulking at the idea of sponsorship in case I didn’t finish. Which I did. So, this one is the Royal Parks Run in London. I’ve decided to run for UNICEF, what a fine bunch of people, and I quote, “working for children and their rights.” I would very much appreciate any sponsorship, as of course would they - I want to raise a thousand quid so the way I see it, that’s only 200 kindly souls donating a fiver. How hard can that be?
So, if you do feel like splashing out five pounds (but of course , don’t feel limited by that!), you can donate here. Thanks for all your support, and for reading!
07-24 – Running round San Jose
Running round San Jose
Getting ready for the Royal Parks Run… things went a bit dodgy at the end!
October 2008
10-20 – Royal Parks Run - done!
Royal Parks Run - done!
Well! The Royal Parks Half turned out to be more fun than expected - it was a beautiful day and a lovely opportunity to see bits of London in a whole new way (through sweat-scrunched eyes mainly :-) ). My time was a gentlemanly 2 hours 12 minutes, which roughly equates to 10-minute miles; I was chuffed to bits with that.
Thanks very very much to all those who sponsored me (and anyone who didn’t, there’s still time!) - I have raised just over £500 for Unicef, which is fantastic!
You can see (and even buy, though that would be wrong) pictures here. No sleep till Bath in March!
10-28 – Dome run
Dome run
Lovely - three miles and dome views with sunrise coulis…

10-28 – Testing out the podcast thang
Testing out the podcast thang
Here’s a podcast to go with this article. It’s a bit quiet. Spot the infinite loop.
10-30 – Social existentialism, and the Yourdon-Fry effect
Social existentialism, and the Yourdon-Fry effect
“I’m no stranger to celebrity myself,” said the man. And in a way it was true - he had spent many years working closely with such types. Who was he? It doesn’t really matter - he could have been a journalist, a PR guy, a lawyer, a waiter, a taxi driver or indeed, even a biographer. Each role provides no more than a context within which people can relate.
And then, something like social networking comes along and throws any such context out of the window. I confess - and now it’s my turn - that while I am (indeed) “no stranger to celebrity”, I still did get a certain buzz when I saw that Ed Yourdon was following me on the microblogging tool, Twitter. This guy is as close to a celebrity as a software developer can get - back in the Seventies he was among the luminaries of the time, talking about modularity, cohesion and coupling alongside Barry Boehm and Fred Moore, Tom De Marco and Larry Constantine. These fellows were way before my time - when I arrived on the scene in 1987 they had already entered into the collective consciousness, for all I knew having left the empty husks of their physical being behind them.
Which begged the question: when I ‘followed’ him, in Twitter parlance, was I up to something a little less salubrious than just wanting to ‘join the conversation’? I’m now sufficiently advanced in years to have moved up the stack a little, and I have on extremely rare occasions seen signs of what it might be like to have a following. But - when engaging with the great Mr Ed, was I incorrect to have felt a little rush that maybe such a great man (still great, I should add, despite having been wrong on Y2K) might have noticed me touching the hem of his virtual coat?
From my own music experiences and others’, I’m pretty comfortable with the general idea of being a ‘fan’. Despite it admittedly being short for ‘fanatic’, some of the best artists and producers in the world have confessed (if that’s the right word) to holding certain of their peers in awe - Alex Lifeson of Rush, for example, remarked he found it difficult to know what to say when he actually met his guitar hero, Jimmy Page. A little aspiration goes a long way in this short life we all have, and no doubt many a little league sportsman only got to the international stage through wanting to be like their idol. It’s a human trait, which we all deal with admirably, for the most part.
The downside is two-fold. Before pondering too carefully the grammatical accuracy of the last sentence (not to mention this one), let me spell it out: first, the nature of modern celebrity can create idolatry where none should exist, through a complete absence of merit. Rare is the human who is immune to participating in such a thing: we watch Jade Goody as she succeeds and fails, all the while commenting how she shouldn’t have been filmed in the first place. What hypocrites we are. Like watching a poorly concocted film which is designed to pull on the heartstrings but which still makes you cry, we are all victims of our own humanity, as malleable and ductile as a rare metal when it comes to being influenced by the press.
The second difficulty is caused by our penchant for hierarchy. “Why can’t we all just get along,” says the pacifist - but even if we could on the surface, the very nature of our aspirationally conditioned being lies just below. We could blame evolutionary drive and the survival of the fittest; or the hang-overs of feudal society; or indeed the dark side of our meritocratic society. The best rise to the top, that’s the theory, and we all aspire to be there as well. Sometimes people even do reach such heady heights through their financial acumen, their innate skill and artistry, or their abilities as an orator. Other times, they get there through sheer determination and hard work, while others get lucky, or simply know where to stick the knife in. It was ever thus.
Which all makes the world-is-flat, peer-to-peer nature of social networking somewhat confusing. Strip back the layers of course, and for many it is not in the least about being social: that loose category of “famous people” are not, in general, trying to join conversations or make new friends with the masses. No - online tools are a marketing tool, and a very good one to boot. Communities can indeed be built, and harnessed to great effect - as proven by a number of artists (including Marillion, which is well known to be at the vanguard).
What’s perhaps very interesting about it all is that, while there is an obvious imbalance between those forming communities and those participating in them, each side does have to give a bit more of themselves. For an artist, a broadcaster or an industry guru wanting to engage with a community for reasons altruistic or otherwise, this does require a certain level of two-way interaction. For some, this comes easier than others - without knowing the chap directly, I suspect from his previous form that this is the case for Ed Yourdon, who does appear to be engaging because he really wants to.
As too does one Stephen Fry - our very own, quintessential conundrum of an Englishman, himself checking all the boxes of what it means to work as a polymathic, and no doubt workaholic artist within this meritocracy. He’s also (if a polymath can be ‘also’ anything) a dyed-in-the-wool technofreak, which means he has adopted such social networking technologies as blogging, podcasting and Twitter with gusto. But where does this leave the assembled masses, who in the past may have been sated by a quick dose of Fry on a Tuesday evening? It’s a tough one - today, anyone that has heard of the man can not only link to him in some virtual way, but also send him a direct message in the knowledge that it may, not many instants later, turn up on his ever-present iPhone (and indeed, his Web page).
It’s sorely tempting to see this ‘opportunity to interact’ as a real opportunity to interact, and I confess to having succumbed on a couple of occasions. There’s the rub: I’m probably as human as the next guy, and indeed, perhaps more so. Even as I write this, I feel just a little of the uncertainty a certain, hypothetical cat may have felt, sitting in a certain, sealed box next to a certain, ill-placed vial of prussic acid. Thus spake Schroedinger: even the act of measuring, or in this case commenting, can have an impact on the situation itself. I rightly question my motives - am I just writing this very piece in the hope Messrs. Yourdon and Fry might read it, decide I’m an all-round good chap and immediately engage in correspondence? Do I think I might be raising my standing among those who agree with me, or am I indeed looking to chide those who are a little too blatant in their social activity? Is this really just a resentful backlash against those who have surfed the social waves to become famous in their own right, or am I deviously looking to achieve the same for myself?
Here’s the truth of the matter: if this ramble is to illustrate anything, it is the inherent imbalances that must exist in this new set of contexts, in which most of us will only ever be a bit player. As it is, the imbalances are legion. There’s not just the fact that those standing on top of their own meritocratic mountains will find it difficult to take in all the many messages they receive from those further down, or indeed elsewhere in the range. For some it has proved impossible to respond to everything - as both Ringo Starr and Neal Stephenson have pointed out. A second point however (also made by Mr Stephenson) is that some people are generally far too busy actually doing the things that make one popular, to reap all the rewards of said popularity. Success is perhaps most of all a combination of both talent and hard work: however much one has of the former, all can fail if there is an absence of the latter.
Perhaps in a few decades’ time, we shall look back in hindsight and make sense of all that is happening now. Realistically, what we are experiencing is only the tip of the iceberg - while some of us are already videoing and posting our every move, all but a few per cent of the world’s population is still going about its business without recourse to any such tools. Warholian fame still requires TV - but the highly integrated, digital age looks set to supersede such old-fashioned concepts, as our Bebo-centred youth culture is already illustrating. We are likely to see the online world in the same way our forebears saw metalled road surfaces - the elder generations may have used them to their advantage, but the youth have never known any different. In the short term, we can all be thoroughly indulgent, testing the waters and pushing the boundaries of social networking; no doubt as things evolve, so will a number of new contexts, within which we will all learn once again to relate.
November 2008
11-01 – A bit of a re-org
A bit of a re-org
I’ve reorganised my blogs, and this will be the last post you see on this blog. For day-job analysis I shall be posting on Freeform Comment. Meanwhile, you will find background reference to all things tech and the skinny on things analytic at IT Industry Outsider - I’ve migrated all posts and comments from here to there. Over at joncollins.net will be my non-technical alter-ego.
Hope that makes sense!
11-01 – Closing in
Closing in
Just testing… moBlog. Weather cold, fire burning.
11-01 – That's that done, then
That’s that done, then
I’ve reorganised my blogs. Here you will find background reference to all things tech and the skinny on things analytic. For day-job analysis I shall be posting on Freeform Comment. Over at joncollins.net will be my non-technical alter-ego. Hope that makes sense!
11-05 – Lack of problem solving at schools? I think not.
Lack of problem solving at schools? I think not.
Here’s the thing. On my travels I have spent a reasonable amount of time with academics and industry leaders, in the course of which, on various occasions, they have bemoaned certain faults in our education system. It has been worrying, to say the least, to consider that the kids coming out of schools today lack the skills they need to play a full part in our society in general, and to act in technological and scientific roles in particular. Being a parent as well, I have been feeling somewhat implicated - my kids are in their teens, which puts them in the centre of the debate.
Quite deliberately, then, I have been asking their teachers and other parties what is the truth of the matter. To work back from the answer: industry and academia may have had a point. But there was never a golden age of education either - as one person put it, “I don’t think the education system of the past was ever designed in any way other than to get people through exams.” It may be - and this is an area I haven’t yet fully investigated, so consider this uncorroborated - that it was the elitism of universities in the past that served to minimise the impact of what was an education system for the academic few. Who knows.
But to bring things up to date, you will notice I said, “may have had a point.” From my dealings with schools as a parent, and more recently as a governor, I have seen a very different picture. The teachers I speak to in general are using a language and teaching style which is totally at odds with the idea that school is exclusively about passing exams - they describe different methods of learning, the importance of investigation, looking for alternative solutions and so on. Furthermore, they do so across the curriculum - from English and Design, to Maths and Geography.
So, where’s the truth? When I put the question, bluntly, “Are we failing our children, and potentially our society,” I have been informed that such practices are really, quite recent. Young teachers - those only two years into a career for example - confess that the way they teach now is very different to how things were even when they were at school. All the same however, the way they go about their business does seem to be infused with what can only be described as teaching problem-solving skills.
There are still challenges. Yes - and there remains unanimous agreement on this - our schools are still struggling when it comes to serving up budding scientists. Elsewhere, the side effect of all this educational positivity seems to be a veritable flood of jargon - perhaps a side-effect of quite rapid, and probably workshop-driven, change is that for the outsider it can be difficult to engage without first knowing what is being talked about. This may seem a trivial point but it is important - our school system may be undermining its own credibility with the average parent, or indeed industrialist, by cloaking itself in terminology.
All the same, there is positive news to be had. Let nobody be unconvinced that there is a struggle taking place, to improve the educational lot of our kids. However it is one that the new educational approaches do look in danger of winning. And indeed, the recent demise of KS3 SATs (there’s some jargon for you) may have unfortunate side-effects in the short term, but may well act as a further catalyst to progress. We may be yet to see the benefits of what’s already been achieved, but the future looks bright.
11-06 – Today, I shall mostly be...
Today, I shall mostly be…
… At Zycko’s partner event, (In)Spire. It’s an interesting conference for an analyst to be at, because it represents where the rubber hits the road for many technologies - in the channel. Bottom line: it’s all very well to have a great product or a world-beating position, but often it’s down to a wide variety of other companies to deliver on the promise. I’m currently sitting in a session from Neil M’s old company Zeus, with an audience made up of value-added resellers working with organisations of various sizes.
Bottom line: it’s important to give people something they can really work with, and not just a good story. Sounds obvious but often forgotten.
And don’t ask me why the (In) is (in) brackets.
11-19 – Leaving Las Vegas... and not for the last time
Leaving Las Vegas… and not for the last time
I’m sure there is a good bit of Las Vegas. As I sit here at Newark Airport, half way home, I am racking my brains to think what it might be. The past couple of days I’ve had the delight to attend what turned out to be a rather enjoyable conference (really), and I’ve spoken with some great people and had a pretty good time. If only I could say that Las Vegas had anything to do with the more pleasurable parts of the trip, but I just can’t.
It took me a while to realise what was wrong, then one morning, when I was out for a half hour jog, it dawned on me. I ran past a bunch of twenty-somethings standing outside a casino, and one of them laughed, heartily and out loud. I suddenly thought what a rare phenomenon that was - there’s beaming smiles up on the hoardings, and lots of faux-bonhomie around the tables but genuine, friendly laughter is a rarity.
What is it about that place, that man-made rats’ nest of gaudy and overblown structures, that saps the soul? I genuinely don’t know. But there is something unnatural about the whole place. One only has to traverse the length of the canal system in the Venetian, a place where it’s never night and never day, to get an idea of this. When I first went, I decided that the canal system would give a pretty fair impression of what US hell would look like - a nothingness that somehow resembles familiar places and ideas, and which never, ever changes. I have since been told that the inside of the Excalibur would be the UK hell, but I haven’t yet had that pleasure.
Oh boy, I can’t wait to go back.
11-19 – On Jacuzzis and Peer Programming
On Jacuzzis and Peer Programming
“Wow, when I was a kid we had to fart in the bath,” said Eddie Murphy’s character Billy Ray Valentine, when told he would have the use of a jacuzzi. They don’t make films like that anymore, do they? Well, actually they do but a little nostalgia never hurt anyone.
So it might be with rose tinted spectacles that I remember my first programming job. Straight off the best green screens and teletypes that University could offer, I was presented with all the complexity and delight of programming on an Apollo workstation, running a weird hybrid of UNIX and Apollo Aegis. Or at least, we were - given the expense of such devices, I had to share the workstation with another rookie, Mike.
Mike and I used to work together on various things - we were trained together, set tasks together, solved problems together, wrote and debugged code together. One slight issue was that Mike and I were just a little competitive. We played off each other, pushing ourselves just a little in the process. I would say we knew we were doing it but we didn’t - after all, we’d never had any other work experience than this.
It was only after a couple of months that we started to get what we were doing. I don’t want to over-blow this, we were no supermen but we certainly picked up a reputation of getting stuff done. I couldn’t even say that we were better than anyone - it just so happened that we were working in the kind of place that Tom De Marco might cite as an ideal working environment for developers. So we certainly didn’t stand out from the crowd; on the contrary we fitted right in.
Which brings me, in a rather round about way, to the shared experience of jacuzzis and Billy Ray’s nostalgia. These days we refer to peer programming as a somewhat new phenomenon - at least to the uninitiated. There are two comments I can make: first that it is not a new phenomenon, and second that speaking from experience, it works.
There are a lot of threads to tie up here. One, as already mentioned, is the danger of being over-nostalgic. But like many agile practices, peer programming is founded on common sense. While it will not be appropriate for every situation, neither should it be rejected just because it is being pimped as something new.
December 2008
12-11 – Thinking Outside the Bowl - Storage Expo 2005
Thinking Outside the Bowl - Storage Expo 2005
Hello and welcome to the second day of Storage Expo. Was anyone here today that was also here yesterday? Yes? As you know, industry analysts need to categorise things, and that put you in the category of people who need to get out more. Come on, this is storage!
But seriously (OK, I was being serious…), I’m not sure it’s particularly my place to stand up here and preach. The industry analyst’s job is to understand what is going on in technology, and to help end user companies make informed decisions about how to use such technologies. I’ll be absolutely frank with you – I don’t believe that we’ve done that good a job of this. We used to – when it was down to things like databases or office automation packages or hard disk arrays, and it was quite straightforward to compare and contrast the features, or determine who was buying what.
Even as technology started to get more complicated, we analysts continued to provide a useful service – to cut through the complexity and explain things as they really were. A few years ago however we started getting a little too wowed by the marketing - everyone should have enterprise applications, we said, and all the enterprises bought them. Wow, that was clever, we thought, so we did it all over again with other things, like SANs. OK, it wasn’t just us saying it; then the Internet arrived and all hell broke loose. “E-business or no business,” that was the mantra - it sounded great, but the trouble was it was just plain wrong. Sorry. There was a technological storm, where wave after wave of new concepts, gadgets, types of software, appliances and so on struck the shores of end user organisations, who were trying to take it all on at the same time as coping with the frustrations that were caused.
Rather than trying to help people cope, or attempting to slow things down in any way, the industry analyst positively encouraged this situation. Marketing models were developed for IT vendors to make the most of the situation – nothing wrong with that in principle, but let’s face it, the crossing-the-chasm model was designed specifically to help IT vendor companies sell more stuff. End users didn’t help – I’ll never forget a CIO showing me a copy of “Management Today”, open on a one-page article about the latest and greatest technology, with a circle of highlighter pen and the words “When can we get one of these?” I think it was CRM, but I’m not too sure.
The crossing the chasm model – does everyone know what it is? Starts here with the early adopters, before dropping into the “chasm” and emerging as a mainstream technology. Over here … we have the laggards.
Trouble is – and I hate to break it to you guys – this curve is open to abuse. I know, I know, that’s terrible – but it’s true. Marketing departments the world over have not only been trying to second guess the curve but… would you credit this… they’ve been hyping up technologies to force them into the mainstream!
Impossible to believe, I know. Until suddenly, quite suddenly, a couple of years ago the end user community stopped listening. I think the last technology trying to get through the door as it slammed was location-based services, and weren’t they going to be great? The people I feel the most sorry for are the pizza companies – we were going to be able to find the nearest pizza place, anywhere in the world, their sales would have gone through the roof…
Since then we’ve had the bubble bursting, the downturn (never call it a recession) in our industry, the rapid deflation of technology share prices and frantic scrambling for some companies to keep in business. Those enterprise applications haven’t turned mediocre businesses into global success stories, we’ve discovered what we knew in our hearts all along – technology alone can’t solve the problem. While technology can be part of the solution, there’s no such thing as a “technology solution”.
With all of this in mind, where are we now, now that the dust has settled? While there’s certainly a lot more positivity in the industry, the dynamic is different. We’re seeing a lot of consolidation work, many companies are re-organising their server and storage infrastructures and using them as a common basis for their applications. We’re seeing companies looking to extend their existing applications, for example to add collaboration facilities or better integrate them together. We’re seeing certain types of organisation in Europe building compliance features into their architectures – notably in the financial sector, or companies with strong linkages to the US. Indeed, the majority of today’s activity in larger companies involves improving what is already there, either by replacing it or extending it.
What does all this mean? We can see that the bulk of current IT activity is architectural in nature. Companies largely have the applications and services they need, and now they just want them to work better: surely, this is not too much to ask! Trouble is, current buying behaviour does not match any particular chasm model – as companies are trying to improve their lot rather than attempt some new way of doing things, it could be said that the chasm is already crossed for both infrastructure and for major applications, which, let’s face it, together make up the majority of most organisations’ IT. We’re 80% there – now we’re just trying to work out the 20% to get things working together as well as possible.
Indeed, there are plenty of technologies that have well and truly crossed the chasm. ASPs, for example – that’s application service providers – were another casualty of the bubble bursting. They’ve been quietly getting on with the job however, and are now well and truly mainstream. Consider this picture for example – can anyone tell me what it is? Now, if anyone can tell me where it is, then that deserves a prize! This is World of Warcraft, a massively multiplayer online role playing game, or MMORPG. You may think that showing this is no more than a thinly veiled attempt by me to turn a computer game into a taxable expense, and you’d be… no, I wouldn’t be so shallow. (nod) (shake) (nod) etc
World of Warcraft is also an ASP. Not only is it hugely popular, but it’s also virtually impossible to pirate, as it follows a subscription based model. And it works – extremely well. Many of the reasons cited for ASP failure, which were largely around service quality, have had to be solved for this to become a reality. Not only is it a growing phenomenon, it is proof if we need it that the chasm has already been crossed for infrastructure. The role playing genre may well be inside a tornado of its own, but that’s another story! Did you know there are now currency markets for virtual gold?
But we digress. While we’re on the subject of virtual things, this is possibly an appropriate moment to mention storage virtualisation. What do we mean by storage virtualisation? Essentially, virtualisation offers a layer, through software or through a hardware appliance, so that we can manage and provision our storage resources as though they were one, big, “virtual” pool. Virtualisation can exist in a number of places in the infrastructure, and I’m not going to go into the technical detail here. There is a hall full of people out there that have been girding their loins to talk to you about exactly that for a number of weeks now. What I will tell you – and this will probably spoil their pitch – is that virtualisation is not a product, not as such. There may be vendor companies that sell it, but no end user company will ever use it in isolation.
Of course, I don’t really need to tell you this, as you already know. However, someone does need to tell some of the vendors. Virtualisation cannot cross the chasm because it is an architectural construct. We already employ virtualisation techniques in a variety of ways on computers and around the network - we have virtual memory, virtual LANs and so on. The ability to virtualise storage is no more or less than the storage vendors catching up, and implementing open mechanisms so that their storage hardware can be accessed transparently. Rather than greeting virtualisation with delight, many end user companies are reacting with relief – “at last, you’re giving us what we needed all along.”
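To show the idea in miniature (a toy sketch, not any vendor’s product - the class and array names are invented), the point of the virtualisation layer is that whoever asks for capacity neither knows nor cares which physical array ends up providing it:

class PhysicalArray:
    """A single storage device with a fixed amount of free capacity, in GB."""
    def __init__(self, name, free_gb):
        self.name = name
        self.free_gb = free_gb

class VirtualPool:
    """The virtualisation layer: presents many arrays as one big pool."""
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def provision(self, size_gb):
        # Place the volume on whichever array has room; the caller never knows which
        for array in self.arrays:
            if array.free_gb >= size_gb:
                array.free_gb -= size_gb
                return f'{size_gb}GB volume provisioned (backed by {array.name})'
        raise RuntimeError('pool exhausted - time to buy more disk')

pool = VirtualPool([PhysicalArray('array-a', 500), PhysicalArray('array-b', 2000)])
print(pool.total_free_gb())   # 2500 - one big, virtual pool
print(pool.provision(800))    # lands on array-b, without the caller caring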
These architectural mechanisms cannot cross chasms; instead, they are steps along the way towards a better managed, more efficient infrastructure. I call this the “Arathi” model, if anyone’s interested – you heard it here first – as you can see we are on the way up the mountain, but there is no single peak. If we were honest with ourselves, we would see technologies such as virtualisation as way points on our journey towards better storage, but instead we insist on hyping them up, treating them as goals in themselves. As a result, they appear as a series of false summits. Virtualisation is one of these – it may have certain benefits, but nothing is more exhausting or debilitating to a climber than a false summit.
OK, that’s virtualisation. What of ILM in all of this? Information Lifecycle Management, ILM - another term that is bandied about. What’s that all about then? Let’s get one thing straight: you can’t buy ILM. It’s absolutely not a product - you’ll never see it in a catalogue or printed on a CD-ROM. You’ll never install it, or patch it, or debug it. So what exactly is it?
Let me put it this way. If virtualisation is a false summit, then for storage, ILM is the mountain. It’s not just about hierarchical storage management and tiered storage, or efficient archiving, or full-index search mechanisms - each of which is just a facet of a highly efficient, well-managed storage architecture. You can’t have one without the other – the high efficiency and the good management.
While from an end user perspective we might be looking at mountains, chasms and all that, from an end user company perspective I believe the model should look more like waves. This is a series of waves hitting the Great Barrier Reef, so you know.
There are essentially three distinct phases that we need to go through to reach the top - the goals of ILM - none of which can happen in isolation, and they need to be followed in order. Or rather, they will be followed in order, whatever we would like to think or our IT suppliers would like to think, for the simple reason that the dependencies between them go in one direction only.
What are these phases? Here we go. We have infrastructure consolidation, followed by resource management, and then service management. Infrastructure consolidation comes first, because you need something to manage after all. Of course it could be argued, one could get on and manage the existing, convoluted legacy environments, but you seem to have made the decision despite what us “experts” might say – you want to consolidate first, thank you very much.
Second comes resource management, or making the most of what you’ve got. Virtualisation fits here – it’s software designed to make hardware use more efficient – but hang on, haven’t we a name for that already? We have – it’s called “the operating system”. That’s all we’re doing here – reinventing the wheel. Sure, it’s a much bigger, globally distributed wheel, but it’s a wheel nonetheless. At the moment, all this clever technology for virtualisation, for hardware orchestration, for intelligent archiving and so on exists as isolated packages, but they’re currently being integrated together. In five years’ time, it’s not just that you won’t recognise the packages – you won’t even see them!
Thirdly we have service management, which depends on the ability to manage resources. This is about delivering a service to the business – in other words, when real end-users need something, they get it, and they are held accountable for it. This is about resource allocation, SLA management, all these terms, essentially it is outward facing towards the business applications and their users.
Another way of looking at this is shown here. Did you see what I did there? From the business perspective, first, we need to get the basic hardware platform in place; then we need to deliver on how we operate these platforms, for our own benefit as IT people running an efficient operation – we may be able to automate certain aspects of these processes, using tools such as virtualisation software. Only then are we really going to be able to deliver an optimal service to our end users – the business – the people. In storage terms, achievement of all three equates to achievement of something resembling ILM, by any definition of the term. Should it include content management? Of course! Indeed, ILM without content management isn’t really worth having, as it lacks the linkages into business information flows, which, let’s face it, are pretty fundamental. You can’t know what to do with your data unless you know what it’s for.
OK so far? Right – what I wanted to do next was pick up on some proof points for what I’m saying, from our research. There is a stack of studies I could have drawn on; I would stress that if you have any questions at all, or if you want to debate anything, please do get in touch – my email is at the end of this presentation. I should also mention that you can sign up for our reports for free, no obligation – just mail me or check the Quocirca web site if you’re interested.
Let’s put some meat on the bones, then. First, infrastructure consolidation: as we mentioned before, this is well and truly underway, as shown here by some research we conducted over the summer for EMC. Here it is – you can pick up a copy from the EMC stand. We conducted another survey for Oracle recently, on grid and virtualisation, and we had similar results. That’s about 90% that plan to consolidate, and two thirds that have projects underway – fair to say there’s a trend there.
When we look at software technologies to make the most of such consolidated infrastructures, we don’t get the impression that things are up and running just yet. A third of companies see virtualisation as majorly important, and most of the rest give it some credit, but it’s hardly a glowing endorsement, is it? Other research we’ve done backs this up – 60% of companies are seeing virtualisation as an option, and 40% are doing something with it, but these numbers are growing. It looks like a second wave trend to me.
As we said before though, all this clever stuff is only really worth doing if we have the right processes in place to take advantage of it. We haven’t – given the fact that a third of companies aren’t even doing backups correctly, if at all, what chance do we stand with higher level processes such as managing a virtualised storage environment? To go back to the mountaineering analogy, it’s like putting on crampons without wearing any shoes. The only results will be sore feet and blisters.
This is an important point. I was a bit harsh on vendors earlier perhaps, suggesting that they are just opportunist sales folks, taking advantage of an unsuspecting public. Surely not… but the truth is, they wouldn’t do so if we didn’t let ‘em. Let’s face it, we’d all love to believe that the marketing was true, even today, that our problems really would all be solved. We also all know that if we put our own houses in order, we’d be better able to serve our own companies. There are always reasons a-plenty – conflicting priorities, shortage of time and so on, all of which are valid, but the point remains – nobody should expect to be able to run a super-efficient storage infrastructure without putting the operational basics in place. It stands to reason.
To move on to the third circle, that of service management, I have a silly question for you: should we want to do this stuff anyway? As a question, it’s a trifle unfair – which CIO is going to say he or she wants to run an inefficient shop, totally ignoring business need while doing just what they want? Sure enough, when we ask this kind of question, we get a unanimous, positive response. So, we don’t ask it very often. What we do ask is how important certain types of technology are to the business – email is the rising star, as a study we performed for Dell in July demonstrated.
Email was very important, sure, as you can see – 90% of respondents thought so. When we asked how many sales actually were conducted via email, the response came back as 25% – that’s a heck of a lot. Indeed, it’s a bit scary – I’m not sure Exchange or Notes were designed as the ultra-resilient platform for 25% of the world’s sales transactions! And that’s without considering whether the underlying servers and storage are up to the job.
It’s just one indication of how important it is to get IT right for the business. Another example, based on our more recent research, was when we asked questions about what people would like to see improved in their existing IT. All of the questions were answered yes, as you might expect, but most interestingly, “Finding information” is top of the list of all issues. If that’s not a business issue, I don’t know what is – and we know this, we know it through the time we waste ourselves, hunting for that email or file, matching up one customer record with another, rifling through our pockets for a business card and so on. This is not a problem to be solved with some clever technology; rather, we need to be better organised about what we are storing, where and why. In short, this is a people problem.
So, we cannot succeed through technology alone. This is the bit where I start wrapping up – three more slides and we’re done. The way we see this is as the law of diminishing returns – it gets harder as we move through the phases. It is no wonder that we concentrate on buying new technology, because that’s something we know how to do – I used to run a network, I was that soldier, I know. Far harder is to understand how to improve how we do things, and to build better relationships with our internal corporate customers, particularly if there is no impetus from board level. This guy – Thomas Malthus – was the first to understand that there are only so many resources to go around, and that the more you dig into the resources, the harder it becomes to support the problem space as it grows, be it people, complexity, relationships, data quantities or whatever.
Given all of this, we’ve had a fair crack at trying to establish where exactly we are along the road to achieving these goals. Or should I say, how high up the beach? We asked a whole set of questions to cover each of these phases, and used the responses to generate some kind of index. Fascinatingly, as you can see, while the individual questions and answers were totally different for each of these sub-indices, the results have come out awfully similar – overall then, in the UK and Ireland, we can say that similar levels of progress have been made across all three phases. These are marks out of ten by the way – looks like we’re just over half way, but let’s not forget the law of diminishing returns!
Of course the index values by themselves mean very little; however, they do give us a starting point for comparisons. Here’s one example, by industry – the only point I want to draw out is that “other public sector” (by which we mean non-healthcare: government departments, councils and so on) is definitely falling behind, whereas utility companies are edging ahead. We can drill down and see why this is – I’m not suggesting we do that now, but if you want further information you can pick up a summary of the report from the EMC stand.
So, what was all that about “thinking outside the bowl”? We’ve had chasms and bridges, mountains and false summits, journeys and waypoints. The point is, we’re not going to get there by thinking about technology alone. The IT departments of many companies realise this already, and the ones that use their business requirements to drive how they implement their IT, and who implement comprehensive operational processes to boot, will stand more chance of “getting there” than those that rely on technology alone. We do not need vendors, or anybody else for that matter, to tell us what technologies we should be considering, be they dressed up in terms such as virtualisation, ILM or whatever. What we do need are companies that can work with us to understand the needs of our own businesses, and help us to define and deploy technologies that work for us, not for them.
With that in mind, it remains for me to thank you for listening, and to say I hope you enjoy the rest of the conference.
12-29 – That was painless-ish
That was painless-ish
Just upgraded to WordPress 2.7, following an “issue” with the site which turned out to be me needing to top up my data transfer quota. Thanks Fraser :)
2009
Posts from 2009.
January 2009
01-26 – Crossing the Agile Development experts
Crossing the Agile Development experts
Youch! Just before Christmas I posted an article on Agile Development on CIO Online. There’s nothing like constructive feedback, they say, and this was indeed nothing like constructive feedback… so anyway. I collated a set of responses and found I had written too much for the CIO Online comments field, so I have posted my responses here.
Here we go.
My goal in this article was to distil the content of a research report in a way that would be usable at CIO level. For your information, I was trained in DSDM and worked as a software development consultant for a number of years before becoming an analyst. Indeed, given the amount of time I have spent trying to convince more traditional developers of the merits of non-structured approaches, I am very interested to experience what happens when one dares to suggest that Agile might not be the ultimate answer. I hope that helps set the context.
Thank you for your comments – let’s work through them. As a first point however, I hope nobody has an issue with the main thrust of the article, as stated in the conclusion: that while there is a place for Agile approaches, they should not be attempted without due care. To try to suggest to CIOs (or anyone else) that things are otherwise would be irresponsible in the extreme. Do feel free to try to catch me out on the rhetoric, but please let’s not obscure this most important point.
@Mike – I am delighted for you. There is no doubt that Agile can add a great deal of value if things are done right. I would be interested to know what level of consulting, mentoring etc was required, and how the 11 years panned out - was it forming, storming, norming, performing for you? Apologies for the sprints/scrums slip of the pen. But I don’t “fail to see” anything – rather, we have been advised by people who have had difficulties on Agile projects, that the shorter timescales can make things harder, as documented in the report.
@Agile Dude – I’m using Agile as a word for the same reason this does. I have spent many years watching in-fighting between methodologists, indeed, I would think it would be fair to say that the U in UML, once derived, meant that people needed something new to fight about. I mention some experience above which, while not as comprehensive as many people I know, and plenty I don’t, puts me in a better position than many of my analyst peers. Your mileage may differ but I suggest that you talk to me first before drawing any conclusions about competence or otherwise from a single article.
Finally, and sadly to your point, we don’t offer any services in this space. I’m not worried about a backlash as much as I am worried about “experts” trying to suggest that Agile is an easy ride.
@Ilja Preuss – good point. But it requires strong managers, and not the types who think rebaselining is a project management technique :). As stated in the article, the key factor is to impose a suitable level of structure – whether agile or otherwise, this will win over (let’s call them) anarchic approaches. With regard to communications, indeed, this will be a factor but the continuous development/integration involved in Agile requires this communication more actively than does the gating/reviewing involved in (say) waterfall. It’s not me saying this but the audience we researched.
@Dave Rooney – It’s a fair cop, isn’t it! Indeed, when I was running the software development environment for an out-of-control software project (which led me to write the article Craft or Science? Software Engineering Evolution back in 1994), I was led to think that if only I could have taken a dozen of the bright sparks involved in the project and locked them in a side room somewhere, they might have been able to deliver something faster than the hundreds involved in the ‘main’ project. Your point is “divide and conquer”, which I think is totally valid.
Less valid is the remark “you would have done your research”, unfortunately. I could answer, “If you had done yours you would have discovered bla bla bla…” but I will resist the cheap shot :-) Seriously, I’m very happy that there are good experiences of Agile out there. I don’t say Agile isn’t suitable for large projects, but the people we researched did say it gets harder to get right for larger projects. This might be obvious to you, but that’s another way of saying it’s fundamental, and it won’t be obvious to everyone.
@Mark Levison – did you read the report itself, or take a look at the materials from which it was derived? If so I’m not sure I understand your use of the term “back yard”.
@Alistair Cockburn – shame on you for suggesting I twisted the results of my own research report, particularly given that the citations you draw from the report are indeed summarised in the article. I suggest you re-read the article and the report, and tell the CIO Online audience where exactly the article denies that Agile approaches are beneficial, and where exactly the research report disagrees that Agile should be treated with due care. I am embarrassed by your myopia, but equally, I do understand that it can be difficult to see beyond something you have been espousing so long, and generally so well. Frankly, if you had spoken directly to me we could have had a quiet discussion and most likely avoided this faux pas.
@Anil Oberai – Thanks for your considered response. I think key to what you are saying is that it is important to choose project stages and documents which will work best for your organisation. I have no problem with hybrid approaches, nor with the recognition that wholesale adoption is not always the wisest approach off the bat.
@George Dinwiddie – yeah, I’ll cop that one :-) but all the same, in my experience this one bears out – that people want to stick with the monolithic, or reach out towards a ‘brave new world’. There doesn’t seem to be much middle ground – but I believe there should be!
@Agile/XP Coach – God forbid I should ever take advice from you. No wonder you keep anonymous.
@Grant (PG) Rule – There was some good information garnered in the research about what metrics are seen as appropriate – and you are absolutely right that few saw resource consumption as a valid metric. I’m a bit wary of metrics for a number of reasons – not least that very few places I have seen have tended to implement them in a way that would satisfy their advocates. Which leaves me with two questions – are such metrics a valid pre-requisite to project success, and if so, what is the relationship between having the right metrics implemented and the delivery of project value? When I was a real developer, the metrics that seemed to make the most sense were those that were outcome-based, for example the number of problems fixed; however, artefact-based metrics (number of use cases, test scripts etc) have not always been quite so successful. This is not an area we have researched significantly, so I would be very interested in any steers you might have.
February 2009
02-06 – His mother's genes
His mother’s genes
If your son says, “I’m going to make a snowman like in Calvin and Hobbes,” be afraid.
02-07 – Snow Day 2: Igloo crazy!
Snow Day 2: Igloo crazy!
The snow here has been absolutely fantastic - real winter wonderland, memory making stuff. There have been village snowball fights, sledging and a phenomenal team effort to build an igloo. Take a butchers at this. More due tomorrow apparently.

02-24 – Shorts: the Zune as a cautionary tale for Microsoft interoperability
Shorts: the Zune as a cautionary tale for Microsoft interoperability
The Microsoft Zune.
Nice hardware, well thought through.
Works with Xbox.
However.
Requires own sync software.
Doesn’t work with Windows Media Player.
Can’t interact with Windows Mobile.
Two indexing mechanisms, on top of Windows indexing.
Online social tools still vapourware.
No integration with Mesh.
And, for the record, no non-Windows client. Ouch.
March 2009
03-07 – Cloud computing - myth or reality?
Cloud computing - myth or reality?
If you’re interested in cloud computing and whether there is any substance behind it, you may wish to take a look at this presentation. It was compiled following discussions across the Freeform Dynamics team, and is undoubtedly a work in progress - we shall update our understanding as this area of IT evolves and solidifies.
[slideshare id=1108269&doc=cloudcomputing0-4a-090305165524-phpapp02]
P.S. This is as much a test of embedding a Slideshare presentation into a blog post as it is a link to the content! Turns out it’s all very straightforward.
03-09 – Pizza Burn
Pizza Burn
Burn, baby, burn burn burn. Also known as: a teenager’s guide to cookery. Step One: watch the oven.

03-31 – Anyone know what this is?
Anyone know what this is?
We found this fist-sized blobby thing in the pond over the weekend. If anyone has any idea what it is, please do share!
A bigger version is on Flickr.
April 2009
04-16 – Reading The Big Switch #1 - the economist's world view
Reading The Big Switch #1 - the economist’s world view
Recently I’ve got into the habit of picking up (largely from the Library) a number of books of a certain genre, and ploughing my way through them - in general, it is the ones that are most compelling that will drive me past the first few pages. And so, I currently have, on the bedside table, amongst others:
- The World Is Flat by Thomas Friedman
- The Long Tail by Chris Anderson
- The Big Switch by Nicholas Carr
As well as providing the selection, an advantage is that such books can be seen as a set, as well as individually. Fantasy books for example often seem to start a short period after a massively destructive event, following the perpetrator when he/she is still small, as they pick their way through calamity; meanwhile, popular tech-business-economics books like to mingle anecdote with theorising (to the extent I have likened them to guys in a bar in the past).
To the point - which is the book that has reached the top of the pile - Nicholas Carr’s Big Switch. Highly eloquent, but I can’t help myself wanting to pick holes in it - not through pedantry or resentment I should add, more because in places it just appears plain wrong. “Computing is turning into a utility,” it says on the back cover. “No, it isn’t,” I say. “Cloud computing is a revolution,” it says. “Nope,” I find myself responding - in the knowledge that in the many debates I have with my colleagues Dale, Tony and Martin on the subject, these are points on which we unanimously agree.
Don’t get me wrong, I’m all for provocative titles that get people thinking, yadda yadda. But it’s not just the conclusions I’m finding flawed, it’s the justifications as well. Which wouldn’t be such a problem if the chap wasn’t so widely read - kudos of course, but what if people actually acted on what is proposed without thinking things through first? Given that by the time I have finished the book I know my flighty mind will already be moving onto even fresher fields, I know I will have missed the opportunity to compose a suitably well argued riposte (if that is what is required). So instead, here is the first in a series of posts about The Big Switch, which offer thoughts as I go along. I promise less introduction, more meat next time.
A. Technology shapes economics shapes society
This appears to be a bit of a theme in the early sections of the book, stated explicitly on page 22. Perhaps it’s because I’m not an economist (so what do I know) but I don’t believe the relationship is either linear or in the right order. “Technology shapes society shapes economics” would be more accurate, though society does drive technology (particularly in terms of military demand). Economics, from the layman’s perspective, is the mathematics of society - the former may be able to model the latter, and even enable certain predictions to be made, but economics is the mirror, not the thing.
The reason I bring this up is not to illustrate my knock-kneed incompetence when it comes to economics (and to be sure, I would no doubt be left for dead should I meet a bunch of economists in a dark alley and attempt to argue my way out), but because it illustrates the framing of the debate when it comes to The Big Switch. Mr Carr is fundamentally one of a clan who believes economics offers the route to explaining most things societal (and equally clearly, I am not) - with the result that arguments will have an undoubted economic slant. Unfortunately there are many perspectives that need to be taken, of which economics is only one in this case. IT architecture is another (given which, I believe a better parallel for IT might be the evolution of the transport system rather than the harnessing of electricity), and business readiness is a third.
A point I will come back to no doubt (as it is the central premise here) is that while economic drivers for full-fat utility or cloud computing may be compelling, there are a whole bunch of other reasons that make it impossible today. Or indeed tomorrow, or in 10 years’ time. We are currently struggling with the data privacy issues that are caused by single companies or government departments building increasingly complex information stores. Such challenges exist to be ironed out, it could be argued - or equally perhaps they are a demonstration of why we shall never fully consign our personal and corporate lives to the cloud. It’s a similar debate about whether there should be a global super-government - of course it would be more efficient if it could ever work (which is moot), but would it be desirable?
While not knowin’ much about economics, I do understand it isn’t all about money - which is a good job. But to see economics at the centre of the debate - the fulcrum if you will - is as unfortunate as seeing architecture as the centre, or business processes or whatever (I remember having a conversation with someone at a certain large software company, who said everything was about ‘management’ because there was management everywhere). As Anaïs Nin was reputed to say, “We don’t see things as they are, we see them as we are.” Or indeed, as our vested interests drive us - it was Mr Carr who pointed out McKinsey’s vested interests, but in his own debating stance he has been very clear that he has decided a certain stance to be true, and his own interest will inevitably be to defend his position.
From here I shall be moving onto more solid ground. The next blog in this series will cover the book’s misunderstanding of client-server, or more importantly, what is really the cause of data centre inefficiency today. Stay tuned.
04-16 – Virtualisation - The State of Play
Virtualisation - The State of Play
I presented this at an IBM-hosted event a couple of weeks ago. Enjoy.
[slideshare id=1225834&doc=virtualisation090-5a-090331021532-phpapp01]
04-16 – Why Sun Microsystems makes me angry
Why Sun Microsystems makes me angry
It’s not often in this job that I feel genuinely cross about an industry situation, but I find that’s the case with Sun Microsystems. Before I start I should declare my hand - I used to run a Sun environment of a few tens of servers and a few hundred workstations. When I say “I used to run”, to be fair I had a team of people doing most of the work, including Oracle DBAs, UNIX administrators, software tools people and the like, all of whom were pretty good at their jobs - but I did still get my hands dirty, mostly on the sysadmin side.
Sun was one of the very first companies I ever knew the tagline for - “the network is the computer” - something I first learned when I visited their offices to run some benchmarks on a cross-compiler available from the Catalyst catalogue. To be frank, twenty years on I’m still not sure what the tagline means if I think about it too hard - but it still sounds good. Perhaps Cisco has finally cracked it with its, “no, no, we’re not a server company,” Unified Computing fabric, but time will tell on that one! Something for another post perhaps, I’m here to talk about Sun.
Another declaration I should make is that I haven’t got a monopoly on the facts. But I have, in one way or another, been following Sun’s activities for the past couple of decades. For better or worse: while the hardware has frequently been impressive, and while the level of innovation has been fantastic at points, equally, I have suffered the consequences of when ‘open systems’ meant ‘anybody can get in unless you know how to fill the holes’, the battles between BSD and System V, the dangers of having the wrong support contract, and so on and so forth. These are no rose-tinted spectacles, I assure you.
But today we see the company failing to agree a price for its own demise, having failed to convince the world that open source is ‘the one way’, and having failed to capitalise on its clear advantages in development software, and yes, it makes me angry. Not because of these failures in particular, but because even while Sun has been under-performing in each of these areas, they all ignore what has always been the company’s core strength - that of building world-class data centres and surrounding infrastructures. Sun’s reputation and brand are built around IT architecture with all its ‘ilities’ attached - scalability, availability and the like. In this, Sun has not so much been giving away its crown jewels, more that it has left them to crumble into dust.
When and how did this happen? It’s hard to argue with the view that the wind went out of Sun’s hardware-centric sails five years ago, when the company became “the dot in dot-bomb”. Too much inventory of second-hand stock, e-customers literally vanishing left, right and centre, massive commoditisation of the market (cf HP/Compaq), proprietary vs ‘industry standard’ - all played their part. Scott McNealy may not have retired at that point, but a little part of the spark died, and the company was late to many parties after that (the decision to adopt AMD chips, for example).
I could talk about Java, and my agreement with a Sun exec a few years ago that it wasn’t so much that Sun was in the wrong ball park, it didn’t even get that there was a game on. But software has always been peripheral to Sun, stuff that runs on the boxes. This isn’t what makes me angry however. What does is that a few people at the top of a company can be in denial about what great things it could be doing for its customers, and can set a strategy which not only ignores what people want from Sun Microsystems, it also ignores what the majority of customer organizations want from their IT. Here’s a clue: it’s not wall-to-wall open source, as the open source model is a means, not an end. Ultimately, as an ex-customer, I feel let down as the values I thought I shared with the company have been eroded to a point of irrelevance.
What should, or indeed can be done at this stage? If I knew I could be very rich of course, but that would also assume anyone at Sun Microsystems would listen - the company hasn’t been famous for this in the past. I would start squarely in the mindset of IT architecture, and building efficient platforms that can deliver appropriate service levels to enterprise and mid-market customers. Isn’t that what Sun does? I only wish I knew - it certainly isn’t what it seems to spend its time talking about. I would drop the argument about open source vs proprietary - it’s laughable, particularly given Sun and its own proprietary history. Sadly a Damascene conversion to open source and a few (arguably sound) acquisitions doesn’t make an open source company in practice. Its customers don’t want it from Sun, and Sun can’t deliver it.
Not everything that Sun is doing is necessarily bad - the cloud-for-developers approach also seems sound for example. But any goodness is being lost in the noise of delusion. I genuinely hope that Sun Microsystems, once proud, returns to its former $200 billion status, for the right reasons - notably that it is delivering what customers want. And on this latter point then, finally, if I were Sun I would stop talking about how great everything is (when the whole of the rest of the world knows it isn’t), knuckle down and start delivering.
How hard can it be.
04-23 – Dear Mr/Mme Car Park Manager
Dear Mr/Mme Car Park Manager
Thank you very much for the parking fine I found on my car as I departed from Kemble Station this Monday evening. I would like to explain the circumstances around this fine, in the hope that you might reconsider.
As is quite common for me, I was planning to catch the 09:19 train to London Paddington. I left home with plenty of time, so much in fact that I arrived at the station shortly after Nine O’Clock. As I saw I had time on my hands, I chose to buy my parking ticket at the same time as my train ticket - in this way, I thought, I would save having to file two expense forms.
So, I drew up outside the station and popped inside to queue. The line was quite short, to the extent that I was not in the slightest bit worried. At first anyway - until I realised that the person at the front of the queue was buying an old person’s railcard.
Starting to fret slightly, I sighed with relief as that person was dealt with - and then the next, who was also very slow. My nerves were further agitated at the sight of a car park attendant out of the window.
The next person in line was about to be served when I heard the announcement for the 09:07 to Cheltenham. To my horror, at this point the ticket officer gave his apologies and left the booth to deal with the train. The car park attendant was still outside, and I did start to go and speak with him, but before long (though it seemed an age) the ticket officer returned.
I calmed down somewhat as the next person was dealt with and the attendant (having changed into a summer jacket) drove away. There were then one or two people in the queue, who were dealt with before me - but by now the time was approaching 09:15 and the speakers were announcing the arrival of the London train.
Arriving at the counter I asked for a return ticket to London but I was so flustered I completely overlooked asking for car parking as well. Having got my ticket I ran out, got in my car, drove to an appropriate parking space (fortunately there was one nearby) and then ran back to catch the train. Admittedly by this point the parking charge was the last thing on my mind.
I hope you understand that it was certainly not in my interests to park without a ticket, nor would I ever intend to. However I also hope you will take these mitigating circumstances into consideration.
All the best, Jon Collins
May 2009
05-08 – Quick take: Will Borland find a good home with Microfocus?
Quick take: Will Borland find a good home with Microfocus?
Having written quite recently on Borland, I felt obliged to comment on its planned acquisition by Microfocus. While the press release talks about ‘market opportunity’ and the like, it’s also important to think in terms of what it means for software tools customers. A couple of starters-for-10:
1. Borland, having sold off its developer tools division to Embarcadero and kept its application lifecycle tools, has just been picked up by Microfocus
2. Microfocus, having moved from being ‘the COBOL company’ to being ‘the application modernisation company’, has also just bought some testing tools from Compuware
Putting both points together, Microfocus is starting to have quite a comprehensive portfolio when it comes to software development management. But there is a deeper question here. Borland, for all its problems, was doing pretty well in terms of reaching out to the agile development community. Meanwhile Microfocus’ core audience remains organisations that have a larger pool of legacy applications, COBOL or otherwise. Undoubtedly, there is an overlap between the two communities - but it may not be that great: research we have done in agile development draws out one audience, while traditional development and maintenance continues on a somewhat different track.
Where’s the hinge? I believe it lies in SOA - or at least, in the drivers towards an IT environment in which the principles of SOA make sense. This convoluted statement is somewhat deliberate given our consistent finding that many organisations may be doing SOA-like activities, they just don’t use (or even dislike) the term.
A story. A few years ago I did some work with a large, traditional organisation - my task was to train up staff on modern development techniques. Trouble was, I arrived to be told that the CIO had blocked any new application development - any ‘developer spend’ needed to be focused on enhancing existing systems to deliver against new business requirements. While the CIO later relented, it certainly focused how I was to approach what I had been tasked to do.
After a series of workshops, the result was not awfully different from what might be considered a legacy modernisation exercise - understanding the new requirements, mapping them against services provided by the legacy applications, and (through gap analysis) prioritising development activities. Essentially, the first part of the exercise enabled us to identify what needed to be done, at which point ‘development’ could take place in a way that made the most sense.
And what was that way? Certainly, waterfall approaches would have been a poor fit given that there was already a great deal of functionality available - while some (re-)development was necessary, as much focus needed to be placed on a more integration-led approach to software delivery - something which today we might call an enterprise mash-up. Offering a far better fit at the time were timebox-led development approaches such as DSDM - such approaches have of course led to what we call agile approaches today.
From a Microfocus/Borland perspective, then, there is the potential for a good fit between traditional and modern, legacy and agile. Taking into account questions of scalability, availability, security and the like, application modernisation can lead to a platform of functionality to be used across the organisation, which can then be built upon using agile techniques to respond to specific business requirements. I’ve not talked much about the (Compuware) testing piece here - though this is clearly an important element of any integration-led development story. But this is a good start. If Microfocus/Borland looks to help its customers in such a way as to balance old with new, then the acquisition will have my vote.
June 2009
06-01 – Around the lake
Around the lake
Continuing in the ‘exotic places I have jogged’ series, here’s three laps round the lake at Disney World in Florida. Glad I went out at 5.45AM as it’s warming up already (or was that just me!) 21'49", roughly 8.6-minute miles.
06-01 – Virtualisation and security - the two-edged sword
Virtualisation and security - the two-edged sword
All new innovations in IT are a double-edged sword - with the benefits come challenges and unintended consequences. Not least server virtualisation, which does have a number of security advantages over running software directly on servers. While it’s worth considering these, it’s also worth weighing them up against the challenges, particularly given the relative immaturity of the technology.
To be fair, virtualisation has been around ever since the dawn of computing - what is an electronic computer other than a virtual environment? I did get into trouble a few years back for crying foul when Microsoft claimed, “We’ve been doing virtualisation for many years,” but to an extent they were right - as soon as there is layering or abstraction in a computer system, we have something that could be termed ‘virtualisation’. So, we have virtual memory, virtual disks, and indeed virtual machines.
It’s this latter version of virtualisation that’s garnering most interest currently, and to be more specific still, virtualisation when applied to X86 (i.e. commodity) servers. Until this side of the millennium, server computers didn’t really have the horsepower to run multiple, virtual machines (mainframes did of course, but were still a bit pricey - a factor which is notably changing). Now, with multi-core processors that build in virtualisation hooks (essentially, enabling instructions to be run by the virtual machines in a fashion which makes them pretty much as fast as running on physical machines), server virtualisation has crossed into the mainstream.
From a security perspective, virtualisation has a number of advantages. The first, almost a by-product, is how virtualisation adds to the fundamental security principle of ‘defence in depth’. The virtualisation layer provides an additional level of abstraction which needs to be cracked if the core application is to be reached. In this way it’s a bit like Network Address Translation (NAT) in that it keeps core applications one step further away from the bad guys.
Virtualisation also offers what’s referred to as a ‘separation of concerns’. That is, different workloads (i.e. applications) can be run within their own virtual machines, such that if there is a problem with one, then others should not be affected. Building on top of both of these concepts, security features can be built into the virtualisation layer - in principle (see below).
However, virtualisation does have its security downsides. I’ve already mentioned the additional virtualisation layer - this can either exist as a hypervisor (for example that from VMware or Microsoft) or as an extension to an operating system kernel (for example using KVM in Linux). For the additional layer to be effective, it needs to be secure - in some ways more secure than the operating systems and applications it hosts, given that if it gets hacked or goes down, they all go down.
Without dwelling too long on specific vulnerabilities (there’s a handy summary of some here), suffice it to say that the presence of an additional layer adds to the security burden rather than reducing it. Not only is it necessary to secure the hypervisor, but also the management tools that go with it (which may be, for example, susceptible to brute force attacks to attempt a login). There are a number of ways of mitigating these risks, both in terms of patching against specific vulnerabilities, and in building security into the virtual architecture with appropriate use of firewalls and other protective measures. Baking such capabilities into the virtualisation layer is still, admittedly, a work in progress, as illustrated by recent announcements such as VMsafe.
There are some additional risks resulting from the increased flexibility that virtualisation brings. For example, a virtual machine may be moved from one, highly protected server to another, far less protected server without it being absolutely clear that anything untoward has happened. This scenario becomes even more likely if there are insufficient controls over the provisioning and/or management of virtual machines. A virtual machine could even be moved off-site, onto a third party server (at a hosting site or ‘in the cloud’, to coin a phrase).
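To make that placement risk a little more concrete, here is a minimal sketch in Python of the kind of policy check that could flag such a move. To be clear, this is purely illustrative - the protection tiers, host names and the audit_placements function are all invented for the example, not taken from any real virtualisation product or API.

```python
# Purely illustrative: a toy placement audit, not any vendor's API.
# All names (ProtectionTier, Host, VirtualMachine, audit_placements) are hypothetical.
from dataclasses import dataclass
from enum import IntEnum


class ProtectionTier(IntEnum):
    UNTRUSTED = 0   # e.g. a third-party or 'cloud' host
    STANDARD = 1    # patched, but no dedicated controls
    HARDENED = 2    # firewalled, monitored, restricted management access


@dataclass
class Host:
    name: str
    tier: ProtectionTier


@dataclass
class VirtualMachine:
    name: str
    required_tier: ProtectionTier
    host: Host


def audit_placements(vms):
    """Return the virtual machines sitting on hosts below their required tier."""
    return [vm for vm in vms if vm.host.tier < vm.required_tier]


if __name__ == "__main__":
    hardened = Host("dc1-vmhost01", ProtectionTier.HARDENED)
    hosted = Host("provider-node7", ProtectionTier.UNTRUSTED)
    vms = [
        VirtualMachine("payroll-db", ProtectionTier.HARDENED, hardened),
        VirtualMachine("payroll-db-copy", ProtectionTier.HARDENED, hosted),  # quietly moved off-site
    ]
    for vm in audit_placements(vms):
        print(f"WARNING: {vm.name} needs {vm.required_tier.name} "
              f"but is running on {vm.host.name} ({vm.host.tier.name})")
```

The point is less the code than the discipline: unless something, or someone, is checking where virtual machines actually end up against where they are allowed to be, the flexibility works against you.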
Perhaps one of the biggest security risks at the moment is that organisations are deploying virtualisation without always considering the security implications. At a panel I hosted at Infosecurity Europe a few weeks ago, one security pro in the audience explained that in his organisation virtualisation was being brought in primarily for cost reasons (nothing wrong there), but also that the rush towards savings was made without taking security into account (e.g. by costing it into the business case). Security comes at a cost, and like fault tolerance and other risk management approaches, it never works quite so well when it is retrofitted; the knock-on effects of rushing towards virtualisation may also include the aforementioned proliferation of virtual machines, resulting in a more complex (and therefore riskier) environment.
This factor is borne out when we consider recent Freeform Dynamics research suggesting that less than a quarter of organisations feel they are operating at ‘expert level’ when it comes to virtualisation - the impact is that knowledge of security best practice for virtualisation will still be lacking for many.
In conclusion, then, it is important to remember that these are still early days. Virtualisation undoubtedly has its benefits, not least from a security perspective. However organisations adopting virtualisation today would do well to ensure they do not increase the level of security risk they face. A simple risk assessment at the start of any virtualisation deployment, together with an appropriate level of vendor and product due diligence from a security perspective, could be the stitches in time that save a lot of heartache later.
September 2009
09-09 – European analyst of the year
European analyst of the year
Well I certainly wasn’t expecting that! A couple of weeks ago, Freeform Dynamics and yours truly were announced as award winners in the annual IIAR survey of analyst relations professionals. Here are the stickers:
From an EMEA perspective, Freeform Dynamics was ranked as number 2 after Gartner. I was selected as EMEA analyst of the year, and number 2 (after the illustrious Ray Wang) globally.
I know there’s all sorts of ways of measuring these things - and I also share a number of views with those who say there’s a wider picture of influence (though Vinnie, there’s more to technology decisions than procurement decisions!). That being said, the results are not to be sniffed at - and it’s a distinct pleasure to be viewed in such esteem, so thank you everyone who voted!
October 2009
10-06 – The Bathtub Curve and other over-simplistic ideas
The Bathtub Curve and other over-simplistic ideas
Every now and then, a certain model or other seems to have direct relevance to a whole series of challenges. It’s funny how it happens - at one stage (for me anyway) it was Eli Goldratt’s Critical Chain theories for project management (and the whole world started to look like projects), at another it was Rich Dad, Poor Dad by Robert Kiyosaki (and the whole world started to look like a series of investments), and elsewhere it has been Charles Handy’s Sigmoid Curve (which makes the whole world look like a series of false starts). Right now, as companies tussle with balancing capital expenditure (capex) with operational expenditure (opex), it’s the bathtub curve.
The bathtub curve? Yes, the bathtub curve. It does seem worth an explanation, since I’ve had to explain it every time I’ve mentioned it. “You know,” I say, “When you first implement something there are lots of faults - like snagging in a new house - and over time things settle down… but then after a while the number of faults starts to rise again?” And of course, everyone agrees, because it’s a very familiar picture for anyone working in IT.
I was first introduced to the bathtub curve when an old colleague of mine was working on the railways (or at least for the railway companies as a consultant) to try to help them save money. His findings weren’t pleasant reading. As UK railway companies (and before that British Rail) had tried to sweat the assets - rolling stock, track and the like - to the maximum, things had inevitably ended up right up the wrong end of the bathtub. Maintenance costs were huge, downtimes and delays frequent and so on, as indeed they still are. Trouble was, investment was only ever in one area - “We’ve funded another thousand miles of new track,” someone would say. But that would be without fixing the rolling stock, which would wear out the track more quickly, and so the cycle would continue.

There’s been plenty more written about the bathtub curve, so I won’t dwell - but I did want to relate the challenge with maintenance in general, to IT in particular: that is, when to make investments and upgrades? There will never be an absolute answer to this: while some recent data suggests that a three-year rolling cycle may be appropriate for desktop PCs for example, for a mainframe that may be the amount of time required between reboots. Ultimately, the bathtub curve gives us, for any ‘closed system’ (in IT terms, set of infrastructure assets that need to operate in harmony) a relationship between capex and opex. Capital expenditure will be required on a discrete basis, to replace older parts, upgrade software, and so on, whereas opex covers the costs of more general servicing, support calls, diagnostics and so on.
Why is this important right now? We understand from our conversations with all but a few IT decision makers that “attention is turning to opex,” which roughly translates as “we haven’t got any cash for capital investments, but we do need to keep the engine running.” Which is fine, of course - but only if an organisation’s IT is still running along the bottom of the bathtub, as measured by SLA criteria and the costs of service delivery. Perhaps the most important question to be able to answer is, “How long have we got before things start getting expensive?” A simplistic question perhaps, but inevitable if things are not treated before time runs out for them. As another adage goes, “A crisis is a problem with no time left to solve it.”
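For what it’s worth, here is a toy sketch of the idea in Python - the rates and thresholds are numbers I have made up purely for illustration, not a real reliability model - showing how a bathtub-shaped fault rate can be used to put a rough number on the “how long have we got?” question.

```python
# A rough sketch of the bathtub idea, not a real reliability model.
# All parameters below are invented for illustration only.
import math


def bathtub_failure_rate(age_years,
                         infant_rate=2.0, infant_decay=2.5,
                         base_rate=0.3,
                         wearout_rate=0.05, wearout_growth=1.2):
    """Expected faults per year for an asset of the given age.

    Early-life faults decay exponentially, a constant 'useful life' rate sits
    underneath, and wear-out faults grow exponentially with age.
    """
    infant = infant_rate * math.exp(-infant_decay * age_years)
    wearout = wearout_rate * math.exp(wearout_growth * age_years)
    return infant + base_rate + wearout


def years_until_expensive(threshold_faults_per_year=1.0, horizon=15):
    """Answer the 'how long have we got?' question for these toy parameters."""
    for year in range(1, horizon + 1):
        if bathtub_failure_rate(year) > threshold_faults_per_year:
            return year
    return None


if __name__ == "__main__":
    for year in range(0, 8):
        print(f"year {year}: ~{bathtub_failure_rate(year):.2f} faults/year")
    print("gets expensive around year", years_until_expensive())
```

With these invented parameters the answer comes out at around year three - coincidentally close to the desktop refresh cycle mentioned above - but the point is the shape of the curve and the question it forces, not the specific numbers.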
10-12 – Chemistry in paperback
Chemistry in paperback
After a long delay, due to factors beyond anyone’s control, a paperback version of Rush-Chemistry is in the works. Please note that this is not a full update - it still ends when it ends (at the end of the R30 tour in October 2004). All reported errors and typos should now be fixed, as listed here: RC Addendum v0.3b.doc
Update: this is now available for pre-order at Amazon UK.
November 2009
11-11 – Towards dynamic systems management methodologies
Towards dynamic systems management methodologies
I do take my hat off to the people who first put together the IT Infrastructure Library, which (like the end of the cold war) celebrates its twenty-year anniversary this year. It’s one thing to learn, both on the job and with little support, the ‘golden rules’ and best practice principles of any discipline. It’s quite another to have both the gumption and skill to document them in a way that makes them usable by others. And I should know – the time I spent working in the methodology group at a large corporate was a fair illustration of how tough this can be.
So, when the books that made up what we now refer to as ITIL were first released, they must really have hit the nail on the head. First adopted by public organisations in the UK, they have since become one of the de facto standards for large-scale systems management. Their authors can feel rightly proud, as indeed can the authors of other best practice frameworks that have, through force of adoption, been proven to hit the spot.
However, there could be a fly in the ointment, and its name is dynamic IT – the latest term being applied to more automated approaches for managing the resources offered by our data centres. I know, I know, this is one of those things that people have been banging on about for years – indeed, for at least as long as ITIL has been around, if not longer. So, what’s different this time around?
There are a number of answers, the first of which is virtualisation. While it is early days for this technology area (particularly around storage, desktops and non-x86 server environments), it does look set to become rather pervasive. As much as anything, the ‘game changer’ is the principle of virtualisation – the general idea of an abstraction layer between physical and logical IT does indeed open the door to be more flexible about how IT is delivered, as many of our recent studies have illustrated.
The second answer has to be the delivery of software functionality using a hosted model (‘software-as-a-service’, or SaaS for short). No, we don’t believe that everything is going to move into the cloud. However, it is clear that for certain workloads, an organisation can get up and running with hosted applications faster than it could have done if it had built them from scratch.
I’m not going to make any predictions, but if we are to believe at least some of the rhetoric about where technology is going right now, as well as looking at some early adopter experiences, the suggestion is that such things as virtualisation and SaaS might indeed give us the basis for more flexible allocation, delivery and management of IT. We are told how overheads will be slashed, allocation times will be reduced to a fraction, and the amount of wasted resource will tend to zero.
We all know that reality is often a long way from the hype. If it is even partly true however, the result could be that the way we constitute and deliver IT service becomes much slicker. IT could therefore become more responsive to change – that is, deal with more requests within the time available. In these cash-strapped times, this has to be seen as something worth batting for.
But according to the adage, the blessing might also be a curse, which brings us back to the best practice frameworks such as ITIL and what is seen as its main competitor, COBIT. In the ‘old world’, systems development and deployment used to take years (and in some cases, still does) – and it is against this background that such frameworks were devised.
My concern is how well they will cope should the rate of change increase beyond a certain point. Let’s be honest: few organisations today can claim to have mastered best practice and arrived at an optimal level of maturity when it comes to systems management. Repeatedly when we ask, we find that ‘knowing what’s out there’ remains a huge challenge, as do disciplines around configuration management, fault management and the like. But in general, things function well enough - IT delivery is not broken.
The issue however is that as the rate of change goes up, our ability to stick to the standards will go down. Change management – i.e. everything that ITIL, COBIT and so on help us with – has an overhead associated with each change. As the time taken to change decreases, if the overhead stays the same, it will become more of a burden – or worse, it might be less likely to happen, increasing the risks to service delivery.
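To put some entirely made-up numbers on that argument, here is a back-of-envelope sketch in Python. The two-day process overhead is a hypothetical figure, chosen only to illustrate the arithmetic: a fixed cost per change that is barely noticeable on a quarterly cycle becomes dominant – or downright impossible – once changes arrive daily.

```python
# Back-of-envelope illustration of the overhead argument; all figures are made up.

def overhead_fraction(change_cycle_days, process_overhead_days):
    """Share of each change cycle consumed by fixed change-management process.

    A value over 100% means the process takes longer than the cycle itself.
    """
    return process_overhead_days / change_cycle_days


if __name__ == "__main__":
    process_overhead_days = 2  # hypothetical: review, documentation, sign-off per change
    for cycle in (90, 30, 5, 1):
        frac = overhead_fraction(cycle, process_overhead_days)
        print(f"{cycle:>3}-day change cycle: {frac:.0%} of the cycle is process overhead")
```

The numbers are invented, but the shape of the problem is not: unless the per-change overhead shrinks along with the cycle time, something has to give.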
To be fair, methodologies aren’t standing still either – indeed, ITIL V3 now builds on the principle of the service management lifecycle. But my concern about the level of overhead remains the same: ITIL for example remains a monolithic set of practices (and yes, I know, nobody should be trying to implement all of them at once!). There’s part of the framework called ITIL Lite, designed for smaller organisations, but to be clear, the ‘gap’ is for an “ITIL Dynamic” for all sizes of company. In methodological terms, the difference would be similar to DSDM and its offspring, compared to SSADM, in the software development world - fundamentally it’s the difference between top-down centralisation and bottom-up enablement.
Perhaps the pundits will be proved wrong, and we’ve still got a good decade or so before we really start getting good at IT service delivery. But if not, the question I have is: how exactly should we be re-thinking systems management to deal with the impending dynamism? We could always wait for the inevitable crises that would result, should the dynamic IT evangelists be proved right this time around. But perhaps it’s time for the best practice experts to once again put quills to a clean sheet of paper, and document how IT resources should be managed in the face of reducing service lifetimes. If you know of any efforts in this area, I’d love to hear about them.
11-18 – Infrastructure convergence - the two sides of the coin
Infrastructure convergence - the two sides of the coin
Let’s be fair - IT isn’t the only industry fraught with jargon, but it can certainly hold its head up high among the leaders of the field in terms of gobbledygook. The minefield of acronyms we all have to suffer is worsened by the astonishingly bad practice of overloading individual, sometimes quite innocuous words and combining them with new ones, which in turn are subjected to unnecessary and distracting debate. And so we are subjected to hearing such things as, “That’s not a business process,” or “Adaptive virtualisation through best of breed solutions.” Members of the Plain English Campaign must be constantly shaking their heads in desperation.
Realistically however, it’s nobody’s fault. I put it down to the fact that we’re working in such a new sphere of human development that existing language just isn’t sufficient to support the dialogues we need to do our jobs. It doesn’t help either that the industry is stuffed full of geeks (I’m one of these) and armchair philosophers (and these) in equal proportion, but that too is a symptom of the times. Take away the people that are inventing all the convoluted phraseology, and you’d take away the innovation as well.
And so to convergence. There’s a word. It may have existed before the IT revolution - “The massed forces of Napoleon’s armies converged on the plain,” for example - but we’ve taken it and made it our own. Convergence means different things to different people, and given that it looks like it is becoming a very important word indeed, it is worth exploring a couple of these meanings.
IT is all about convergence. Convergence pressure comes from the top down, as a counter to complexity. My dubious understanding of evolutionary theory tells me that evolution is as much about diversification as survival of the fittest; in IT, innovation is another word for the relentless drive by vendors to release new products, and providers to release new services, in the hope that some of them will become as popular as Windows, Google or the iPhone. Deep in the infrastructure as well, plenty of new-and-improved technologies deliver all kinds of clever benefits, but only add to the complexity of the infrastructure. Understandably, then, IT environments start to hit issues of fragmentation, of complexity management, of interoperability.
We’re seeing it right now with virtualisation, for example - lots of benefits, cost savings and so on, but we’re only starting to see some of the issues (e.g. virtual server sprawl, back-end bottlenecks) that ensue when virtualisation moves out of the pilot and into production.
Meanwhile, convergence also comes from the bottom up. New technological advances tend to get subsumed into the infrastructure or application architecture - which is why we see waves of merger and acquisition activity throughout the history of IT. But it’s not just about making different things work together - it’s also recognition that certain technologies, which may start independently to solve separate problems, eventually need to come together in some way. And so in the telecoms world we have that wonderfully obscure acronym, FMC, which stands for fixed-mobile convergence - bringing together traditional telephone infrastructures with mobile infrastructures. We’re also seeing the convergence phenomenon in the data centre - or more importantly, in how the different devices in the data centre communicate with each other, that is, storage, servers and communications devices. IT has always been about processing information and moving it around - and historically, the three types of device have evolved along their own, discrete-yet-interoperable paths. But right now the industry is coming to terms with the fact that there can be only one data movement standard that all devices share - without getting into the fuzzy words too much, this is called 10 gigabit Ethernet. The timing for the convergence of data centre technologies couldn’t be better, given what we’re seeing with virtualisation. Note that it’s not just about everyone saying, “let’s all use Ethernet”; rather, the 10GBASE-T standard has had to be defined to support a wide variety of requirements imposed by the data communications, application latency and storage throughput needs of modern IT environments.
In other words, the data centre convergence we’re seeing is not only an inevitable step given the evolution of the underlying technologies, but it is responding to a real need caused by the fragmentation of today’s IT. It’s important to see both together - as there have been all kinds of technology convergence that have come at the wrong time - i.e. not responding to a significant enough need - and that have fallen by the wayside. Examples include policy-based management of security, and perhaps even FMC, which will remain a slow burn until it becomes a necessity. But for data centre convergence, the time could well be right.
[Written at Fujitsu VISIT 09 conference, during a keynote by Dan Warmenhoven, Chairman of NetApp - who famously said “Never bet against Ethernet!”]
December 2009
12-21 – Looking back... and forward
Looking back… and forward
This year has not been easy for many. By all accounts it has been a polarised recession: some of the largest companies have fallen, while for others, business has been booming. The stock markets plummeted only to rebound; organisations have offered themselves up for sale and then decided, actually, they could go it alone; redundancy packages have been announced and then withdrawn; in the UK, house prices are once again higher than they no doubt should be - with the net effect of leaving individuals and companies alike with that best-just-get-on-with-it feeling.
In IT, perhaps, we have seen it all before. This is my tenth year as an analyst - and I seemed to spend most of the ‘middle years’ introducing every article and paper with the words, “as we emerge from the downturn…”. Now here we are again: budgets are being squeezed, Opex is generally the focus over Capex, and vendors and end-user organisations are having to demonstrate value in everything they propose. But ROI is no longer being punted as yet another “next big thing” - it is merely something that needs to be taken into account because it would be foolish not to.
Speaking personally, one thing is for sure: this certainly was an interesting (in the Chinese sense of “May you live in interesting times”) year to be taking charge of a company. Freeform Dynamics may be small but - just like any other company - we still have a P&L to consider, bills to pay, mouths to feed. Looking back, I believe 2009 will be the year that we grew up as a business - across the team we learned to be more disciplined about what we do and how we do it. We remembered what it was we were set up to achieve, and I can now explain to family members what it is I do without them looking politely puzzled, or simply glazing over.
It’s been a hard year, but equally, it has been a good year for Freeform Dynamics. As we’ve grown we’ve learned more about how to engage with the audiences that matter to us most, that is, the community of mainstream IT decision makers and professionals. At the same time our writing skills have been developing, both through sheer force of having to do lots of it and with the help and support of David Tebbutt, who brought a much-needed level of editorial discipline to the company in the two years he was with us. Another high point was undoubtedly being voted second analyst firm in Europe by those people who manage analyst relationships. Strip everything else away and all it meant was, “The people we like to talk to.” Which, of course, means a great deal.
What of next year? When it comes to predictions, I’m afraid I can’t share some of our competitors’ opinions that Cloud Computing will be “The big thing” to happen in 2010. It occurred to me quite recently where the confusion lies - that in this wonderfully innovative and yet charmingly backward industry, changes that happen to IT vendors are often misinterpreted (transference, maybe?) as changes to how organisations do business. If we turn the telescope around, yes, sure, I have no doubt that Cloud Computing is already having quite an impact on the vendor space and will continue to do so. But while changes in the engine might be profound, we are a long way off achieving what end-users might consider to be a smoother drive. Back in 2002 I called the journey towards the Universal Service Provider “the revolution that never was” - and while I remain confident of the destination, it will take more than a few articles in Management Today, a hastily-organised conference and a Twitter hashtag to get us there.
My money is on virtualisation - an area of IT where the reality and the vendor hype do appear to be achieving some level of parity. Most organisations are still, in general terms, only trialling virtualisation. We can say both that the best is yet to come, and the fan has yet to meet the fecal matter. This will be a big year for infrastructure - getting the hardware layer working as a layer, moving data and execution capability around, building in the qualities required to underpin acceptable levels of service delivery, from data protection to risk mitigation. Very boring, very necessary, and undoubtedly where much of the action will be.
I’d also spare some thought for Green IT. We shall continue to see the debate broaden beyond power and cooling, and come April of course, “Green” will become a compliance issue. On the radio the pundits are talking about the failure of Copenhagen, but at the same time, individual countries have now moved a lot closer to deciding their own strategies. IT undoubtedly has a major part to play, not just in sorting itself out but particularly in how innovative use of technology can help reduce the overall footprint, and we shall be continuing to watch this area with interest.
Meanwhile, service provision in general is, as implied by Cloud, going to continue in its rapid evolution (that’s one for the punctuated equilibrium fans). The machinations of vendors are quite fascinating - companies with their heritage(s) in hosting, telecommunications, software, hardware, internet search and so on are all deciding, rightly or wrongly, that they are running after the same ball. It is like watching a flotilla of tankers trying to turn around and head in the same direction, towards a goal that remains, thus far, a mirage - some parts of the model are proven (managed services, outsourcing, certain contractual vehicles and so on) but others (Platform as a Service, anyone?) remain highly speculative.
Then what is this goal? IT is not an end in itself, it’s a set of mechanisms that help individuals collaborate better, and companies deliver services to their customers. Much depends on integration - at both a hardware and software level. It is all too easy to work out potential answers, but far harder to establish what will really make a difference given the mish-mash of technological slurry that already exists. At Freeform Dynamics we shall continue to peel back the layers of understanding, as seen through the optic of what’s really going to make a difference to mainstream IT - the centre of the bell curve. Our core business remains research, from which we shall continue to deliver down-to-earth, practical guidance aimed at real people, not theorists. It’s a simple enough vision: if you can’t see the wood for the trees, who would you rather meet - an elf with a magic map, or a woodsman?
From our perspective, we’ll be starting the new year with a team at full strength - we’re all delighted to welcome Dale back, now he has finally been able to say goodbye to what at times was a truly debilitating illness. We have some announcements up our sleeves, and we shall continue to grow our team of analysts, both to strengthen our core areas of coverage and grow beyond them. My heartfelt thanks go out to everyone in the team, for sharing the passion about what we are trying to achieve and working so hard to deliver on it.
A huge thank you also to everyone we have interacted with. It’s a complex web, isn’t it - the touch points between technology producers, consumers, commentators and advisors will no doubt continue to blur. 2010 will be “interesting times” for many, and we have learned enough over the past few years to not take anything for granted. But there remains plenty to look forward to.
12-30 – Adobe Bridge CS3 not responding?
Adobe Bridge CS3 not responding?
I was having an issue with Bridge CS3 on Mac OS X Snow Leopard - it was using half the CPU (i.e. 100% of one core) but didn’t appear to be doing much else - before eventually crashing. Having browsed around a bit last night and this morning to little avail, I thought I’d share the solution I eventually stumbled upon - deleting the cache. The official link is here, which includes information for both Windows and OS X users.
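For anyone who prefers to script the fix, here’s a minimal sketch in Python. The cache location is my best recollection for OS X and should be treated as hypothetical - verify it against Adobe’s support article before deleting anything.

```python
import shutil
from pathlib import Path

# Hypothetical cache location for Bridge CS3 on OS X - check Adobe's
# official article for your own setup before running this.
cache = Path.home() / "Library" / "Caches" / "Adobe" / "Bridge CS3" / "Cache"

if cache.exists():
    shutil.rmtree(cache)  # Bridge rebuilds the cache on next launch
    print(f"Cleared {cache}")
else:
    print(f"No cache found at {cache}")
```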
Thanks also to Planet Neil for the tip-off.
2010
Posts from 2010.
January 2010
01-21 – Quick take: top ten worst passwords
Quick take: top ten worst passwords
This makes interesting reading - it’s a report, sponsored by those good people at Imperva, about password worst practices. It’s got some sage advice too, for anyone who wants to know how to get a bit better at setting passwords.
Here’s the top 10 - personally I’m surprised that swear words don’t figure, but that’s small consolation.
1. 123456
2. 12345
3. 123456789
4. Password
5. iloveyou
6. princess
7. rockyou
8. 1234567
9. 12345678
10. abc123
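As an illustration of the report’s advice (the sketch below is mine, not Imperva’s), the most basic countermeasure is simply to refuse anything that appears on a known worst-passwords list:

```python
# Reject any password found on a worst-passwords blocklist - the ten
# above, plus whatever larger list you care to load.
WORST_PASSWORDS = {
    "123456", "12345", "123456789", "password", "iloveyou",
    "princess", "rockyou", "1234567", "12345678", "abc123",
}

def is_blocklisted(candidate: str) -> bool:
    return candidate.lower() in WORST_PASSWORDS

assert is_blocklisted("Password")          # changing the case shouldn't save you
assert not is_blocklisted("tr0ub4dor&3")   # passes this particular check, at least
```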
Take a look at the full report here.
P.S. There may be a longer take on whether Imperva *should* have conducted the analysis, but perhaps that one’s for history to decide!
01-28 – Brighton Marathon
Brighton Marathon
Well I never. In a moment of spring madness, I agreed to join Marillion’s Mark Kelly to run the Brighton Marathon in April. The chosen charity is Water Aid, which has a pretty straightforward remit - access to clean, safe water should be a right, not a privilege. If you feel like sponsoring us pop over to our Justgiving page.
It’s also worth highlighting a Marillion song whose last line is “A tap with clean water”… Full lyrics here (selection below).
And here’s the song.
Common cold. Dirty water. HIV. Common apathy. Common crime. Perfect nonsense to the next generation
Have we caught up yet? Is it time? Well I say it is. I say it is. Deaf and dumbed-down Enough is enough
Gimme a smile. Hold out your hand. I don’t want your money I don’t want your land I want you to wake up and do something strange I want you to listen I want you to feel someone else’s pain
A tap with clean water
01-28 – Thinking aloud about Fair Trade chocolate and Kraft
Thinking aloud about Fair Trade chocolate and Kraft
Hurrah for Green & Black’s announcing that all of its chocolate will be fair trade by the end of the year, I thought to myself as I munched a bar of Maya Gold and read the Guardian article, learning that Maya Gold was the first chocolate bar to be certified Fair Trade. Perhaps it was the cocoa, but I couldn’t help wondering about the timing - G&B’s is owned by Cadbury’s, which has announced a similar move (Dairy Milk already, others to follow) - and the latter has of course just been betrothed to Kraft Foods.
Now, I’m no politician but I can almost imagine the board room conversation where either or both brands decided it was now or never to go fair trade, given that the big nasty conglomerate would be unlikely to take such a bold step. It would also be difficult to unravel without adverse PR and loss of sales. So, we have the public announcement, whether Kraft likes it or not.
A little bit of browsing later and it seems that the nasty conglomerate has already made a start. Suchard hot chocolate and Kenco coffee already boast rainforest certification. Now, all we need is for Toblerone, Terry’s, Cote D’Or and Daim to do the same, and we’d really be cooking.
February 2010
02-11 – What personality is your IT department?
What personality is your IT department?
Over the years I’ve spent quite a lot of time thinking about the idea of IT maturity models - which are great in principle, but, to be frank, a bit of a blunt instrument when it comes to gauging reality. Many IT organisations are quite comfortable where they are in terms of maturity for example - providing ‘good enough’ service levels without necessarily striving to be best-in-class. Meanwhile, the theory that a large proportion of IT depts remain unable to crawl out of the primeval swamp of IT maturity is no more than that - a theory.
Are there better ways to consider how IT departments are structured and how they operate? To be sure, different options exist. A couple of years ago I was working with the ideas used in personality testing, to see if I could come up with a similar model for IT. One hypothesis, which I believe to be true, is that most IT departments will remain much the way they are unless they are subject to some external change - replacement of the CIO for example, or merger with another department. Until such times, it is more important to understand what an IT department is, than what it might become.
The personality of an IT department is multi-dimensional. Based on research we gathered with readers of the Register web site (report), coupled with some telephone study work, we were able to derive a number of characteristics of IT organisations, namely:
- Organisation – whether the IT function considers itself to be a technical or service department (TECHNICAL vs SERVICE)
- Process – the level of formalisation/repeatability of IT development and operations processes (INFORMAL vs FORMALISED)
- Approach – whether projects and deployment generally take place individually, or mostly with an eye on a broader strategy (DISCRETE vs PROACTIVE)
- Dialogue – how interactive the communications between IT and the business are (REPORTING vs ENGAGING)
None of the above is necessarily wrong. For example, an IT organisation may need to be Technical if the business is highly technology-driven; equally, some companies may benefit from a Discrete approach if lead times are the driving characteristic. Smaller organisations or branches may have different needs from larger ones, and so on.
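For the programmatically minded, here’s a minimal sketch (my own illustration, not part of the research) of how the four axes compose into the sixteen four-letter codes referred to below:

```python
from itertools import product

# One letter per pole of each axis, in the order listed above.
AXES = [
    ("T", "S"),  # Organisation: TECHNICAL vs SERVICE
    ("I", "F"),  # Process: INFORMAL vs FORMALISED
    ("D", "P"),  # Approach: DISCRETE vs PROACTIVE
    ("R", "E"),  # Dialogue: REPORTING vs ENGAGING
]

# Two poles across four axes gives 2**4 = 16 personality codes.
codes = ["".join(combo) for combo in product(*AXES)]
assert len(codes) == 16
assert "TIDR" in codes and "SIDE" in codes
```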
Such characteristics offer 16 potential combinations, but not all of these will be generally applicable. Combinations we believe will be more likely are as follows:
TIDR - The classic “traditional IT organisation” - IT exists for its own purpose and there is little or no relationship established with the business. Unlikely that IT has a particularly good reputation, and fire fighting will be common.
TIPE - This is “don’t watch the mouth, watch the feet” as the IT organisation appears to be doing everything right in terms of business engagement, but it still doesn’t appear to be moving forward in terms of maturity. This is due to an unwillingness to change from old practices and organisational models, though IT may be highly competent technically.
TFPR - The IT department feels it can do no wrong, and sure enough it is running a tight ship. However there is a feeling from the business side that something is lacking, as IT wants to engage on its own terms and sometimes gives the impression that it knows what is best for the business. IT wants a place on the board, but the business doesn’t feel comfortable with this.
TFPE - While just about everything is in place, IT still thinks like a bunch of techies. This is fine for businesses that are themselves technical, but it may mean that the cost of technology is not fully understood. In other cases it is likely to show itself in a lack of responsiveness as IT works to solve the biggest technical issues rather than understanding what’s the most pressing business need.
SFDR - While the IT function is doing what it can in terms of external processes, it lacks the necessary communications between development and operations. This may be for political or geographical reasons; in any case, the result is that ops falls back onto simple reporting, rather than being part of the dialogue itself.
SFPR - While everything is going generally right, IT may well be disappointing when it comes to engaging with the business, resorting to only sparse reporting on progress and performance. This may well be because the business itself does not understand why it should have to engage.
SIDE - There is a high level of engagement with the business, which is admirable. However, perhaps due to a traditionally fragmented approach to IT implementation, lots of legacy, or other factors (e.g. mergers), the environment itself is not in the best state, nor is how it is managed – so backups may be done inconsistently, or there may be security issues, for example.
Such a model offers a useful starting point for any IT organisation, in terms of what is going to work - level of process, technology adoption and so on - as well as injecting a level of realism into proceedings. We all have aspirations, but we’re not going to get any better unless we understand ourselves first - that’s the principle anyway. Similarly, for vendors, the opportunity exists to create offerings based on how things actually work, rather than any over-aspirational view, or one-dimensional perspective on IT maturity. And meanwhile, models such as this may offer more realistic paths towards best practice, if and when we move into a more dynamic IT world.
More detail is available on this if it would be interesting and useful to anyone, including broader definitions of all 16 personalities. For now, I’d be very interested in feedback on the principle of the model itself - and whether or not it would work for you. So, if you have any thoughts on this, please do let me know.
March 2010
03-24 – A bit of whimsy - Beyond the Seaweed Farm
A bit of whimsy - Beyond the Seaweed Farm
You can only imagine my surprise when… OK, that’s a lie. Here’s a little something I was writing last summer, based on the few snippets suggested by the liner notes to what were to become Porcupine Tree’s earliest releases, Tarquin’s Seaweed Farm and The Nostalgia Factory.
It’s also given me the opportunity to have a first foray into e-books. This one is in ePub format.
If you missed the link, here’s Beyond the Seaweed Farm.
The full URL is http://www.joncollins.net/wordpress/wp-content/BTSF.epub
And you can also read it online here.
Enjoy!
The cover is William Hogarth’s The Cockpit. It seemed appropriate.
April 2010
04-18 – Brighton marathon - oh my goodness!
Brighton marathon - oh my goodness!
Well, well. Today I ran a marathon. Who would have thought it - and like so many things in my life, once again I have discovered just how much is possible if you set your mind to it. I’d love to say that it was a breeze, that it was tough but fair, that it was anything other than what it was - possibly the most physical thing I’ve ever done in my life. But that’s the reality. If childbirth is worse, I’m surprised the human race has survived as long as it has.
So many memories…
- the marvellous sticky toffee pudding the night before - thanks Bola and Chris!
- the hack into Hove to pick up Mark at 7AM
- the serendipitous arrival at the train station, to find a taxi sitting there like it had been specially laid on
- the chance meeting of one of Mark’s mates Jake, a total gent who gave me my own personal tour of the sights for the first 15 miles
- the genuine pleasure of running on a beautiful day until about mile 17
- the delight at seeing Liz and Mum along the way
- the uneasy feeling that all those small pains were joining into one big pain
- the absolute agony of the final 8 miles, and my gratefulness to all those who called out the name I had printed on my shirt
- the complete inability to climb the steps back up to the front, after the finish
- the taxi, shower, tea and cake
As for times, Mark came in at 4.19, and me at 4.45. It’ll do - while I was hoping for a better time, I had underestimated how much pleasure I would get from just finishing.
Above all, the whole thing raised £3,000 for Water Aid, which is just fantastic. Thanks so much to all who contributed, friends, work colleagues, Marillion fans, you kept me going on more than one occasion. Thanks (really) to Mark for the whole idea, and the biggest thanks to Liz for all her help and support, not to mention driving home two folks who probably should have known better!

May 2010
05-07 – IT looking forward
IT looking forward
Video recorded at Microsoft tech days, 16 April 2010, following (and referencing) David Bishop at Microsoft’s “Vision of the Future” presentation.
Enjoy!
05-12 – Looking back: Software is Art?
Looking back: Software is Art?
The traditional craftsman uses handed-down principles and rules of thumb as the basis of his work: experience gained over many years of apprenticeship. The engineer proves his designs at every stage with scientific laws and mathematics. Craft and engineering differ in their techniques, but their common goal remains the same: to provide efficient, useful artefacts.
In Computer Programming as an Art, Donald Knuth summarises this by saying, “Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.”
The knowledge base that we have built up over thirty years is agreed to be incomplete. Reliance is often placed on software craftsmen, commonly known as ‘gurus’ or ‘hackers’ (in the programming world, a hacker is seen as a programmer of high esteem, whereas it is the cracker who breaks into computer systems) – the reputation of some gives them almost hero status. Robert Baber suggests that when software development progresses to be a true engineering discipline, such characters will disappear, to be replaced by ‘members of professional bodies’. If this is going to happen, it is a long way off yet.
And what of software itself? Is software an art form? Knuth refers to ‘beautiful programs’, and it is true that an elegantly structured section of code may be a pleasure to look at, at least to another programmer! This point should not be taken too far: a motorway intersection may well be an object of beauty to a construction engineer, but it is probably not to the rest of us. Artistic qualities do however have practical value for software: an elegant program is reasonably likely to be a well-written, maintainable one. It would be worth considering the addition of a little ‘art’ at all stages of the software life cycle. For example:
1. In the specification phase, a common language document is used to show the producer’s understanding of the clients’ needs. A well-written specification will increase the chances that such requirements have been noted; a readable specification may well ensure that both parties read it at all.
2. Software architectures can be elegant or impractical. An elegant architecture is one which ensures that its subsystems work together smoothly and efficiently; an inefficient architecture will be the source of an unnecessary performance overhead.
3. Algorithms can be fluid, smooth in motion or clunky and slow. A well-written algorithm will be more efficient than a badly written one.
4. The code itself can read like a good book or a child’s first essay. If the code is not readable it will quickly become unmaintainable, especially if its author is no longer available to explain it. Code should be self-documenting. If it is not clear exactly what a given code section is doing then it should be rewritten. In the maintenance phase, the code should also explain how it has been modified, by whom and when.
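To illustrate the fourth point, here’s a deliberately small example of my own (not from the original essay): the same function written twice, once opaquely and once self-documenting.

```python
# Opaque: correct, but the reader must reverse-engineer the intent.
def f(x):
    return sum(x) / len(x) if x else 0

# Self-documenting: identical behaviour, with the intent on the surface.
def mean_or_zero(values):
    """Return the arithmetic mean of values, or 0 for an empty sequence."""
    if not values:
        return 0
    return sum(values) / len(values)
```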
The quality of the product is dependent on the experience of the software developers – and their management – at their craft. Such craftsmanship must be learned, and as we have already seen, the time and resources to provide such training are not always available. Peopleware makes the point that developers should be considered as experts rather than resources, and should be given enough space and facilities to permit them to excel. In DeMarco and Lister’s experience, it is this which ensures the optimal productivity of development groups.
Today’s software developer is somewhere between a craftsman and an engineer, using both science and common sense in his work. Tomorrow’s developer may well be the perfectly trained engineer proving everything as he goes. But he is not there yet. At least for the moment, gurus are a necessity and ‘software beauty’ keeps our minds on the goal.
[[And the punchline: this was written in 1994, as part of a bigger piece entitled Craft or Science - Software Engineering Evolution]]
05-13 – Dear Colin McCaffrey, I hope you enjoy the dime
Dear Colin McCaffrey, I hope you enjoy the dime
Dearest Colin McCaffrey,
I don’t suppose you’ve heard of me
Nor had I heard of you before today
I found your tunes on Spotify
And listened to them, by and by
So thank you for the pleasure that they gave
But Spotify, I hear them say
Ain’t a business model to really pay
Much money to the artists that it streams
I sure do hope the ads give you
Something approaching revenue
But frankly, what I ‘paid’ won’t fuel no dreams
I wonder how things are for you
In Vermont, when the check comes through
Do you tear it open, in case it’s your big chance?
Or is it to your bank account
Where an electronic small amount
Registers without a backward glance?
So Col, here’s serendipity
I mis-spelt someone’s name, you see
And I might not have done, another time
But if I never “listen again”,
I’ll remember I once did, back then…
I’ll think back to the words you wrote,
The sugar blues and guitar notes…
And whether you did benefit
From all that clever Internet…
So with all my sincerity
I wish you luck and finally…
I hope that you enjoy the dime
I hope you enjoy the dime.
05-20 – How much "innovation" is just keeping up appearances?
How much “innovation” is just keeping up appearances?
Focus on business challenges, not technology innovations
The IT industry really does bring out the best and worst traits of human nature. Were we always quite so excitable about the latest big thing? It is difficult to tell, as historical records don’t tend to preserve the glee reserved for all things new and improved, whether or not they have any long-term advantage.
This is particularly relevant given that we are perhaps in one of the most inventive periods since the dawn of humanity - at least, that is how things look from the inside. While the jury is out as to the usefulness and ultimate value of many of our creations, the speed of “innovation” has been accelerating consistently over the past 500 years, such that we have now reached a point where it is impossible to keep up with everything that is going on.
It is a very human thing, however, to want to keep up appearances. In the arts it is important to have an opinion on the latest show, film, or book, and our high streets are full of the latest must-have items. You can see where I’m going with this, can’t you - yes indeed, our cuckoo tendencies to accumulate shiny things also spill over into how we view, and indeed select and procure, IT systems.
We all know this, but many go with it anyway. Marketing departments in IT vendor companies spend their time working out how to make even the most humdrum of technologies look like the best thing since, well, the last best thing. Phrases like “paradigm shift” and “game changer” are used over and again, even though both speaker and listener know that if the paradigm had shifted as often as predicted, we would have run out of games to change long ago.
Business leaders are subject to the same pressures - after all, in the words of the Matrix agent, they are “only human”. It was only a matter of time before a CIO would say to me that his boss had asked him when he could get some of that cloud computing. The fact that the question doesn’t make any sense is neither here nor there: businesses want to demonstrate they are up with the corporate Joneses, just as CIOs themselves want to have a few leading edge projects on their CVs. Analyst firms can be no better - after all, if things weren’t quite as exciting as everyone was making out, would we really need analysts to make sense of it all?
Don’t get me wrong, there are lots of new and exciting things made possible through the use of IT. It has brought the world closer together, opening up whole new ways of communicating and collaborating, and so on and so on. The danger, however, is that we are so busy looking beyond where we are for the next big thing, we don’t give ourselves the time to make the most of what we already have. Organisations don’t always need the latest and greatest technology to thrive, and there is a big difference between being flexible as a business and simply changing because that is what everyone else is doing. Far more important is that their requirements are clearly understood, and that the right tool is selected for the job in hand.
As my old boss Steve, a seasoned programme manager, used to say, “What’s the problem we’re trying to solve here?” Okay, his language was a bit more colourful than that, but the point still stands. As we look at the waves of so-called innovation and try to decide whether they have any relevance to our businesses, let’s first and foremost focus on the challenges we face, and how best to deal with them. In a couple of years’ time, when the dust has settled on the latest hyped-up bandwagon, if the challenges still remain then we won’t have been doing our jobs, even if the agenda item of “keeping up with innovation” has been achieved.
June 2010
06-07 – Conference keynotes, and the gulf between simple and complex
Conference keynotes, and the gulf between simple and complex
Another day, another keynote. This time it’s Microsoft TechEd in New Orleans, and Bob Muglia is on stage being all thrilled about all those new and improved aspects of IT. Three weeks ago it was Ajei Gopal at CA World. Today, elsewhere in the US, IBM is kicking off its Rational conference, and I believe from my Twitter feed that SAS has a gig, somewhere.
It’s important to remember that most normal people don’t spend their lives attending IT conference keynotes. The real attendees at such conferences will have paid through the nose to come, and likely there may have been a little lottery between teammates about who should go. I know that back in my days as a punter rather than an analyst, attending conferences was very much a one-off, not just for cost reasons but also because I would have been too busy actually running stuff. (As an aside, it’s one of the reasons analysts exist.)
So, most people in the rather humid hall I am sitting in right now will not have the luxury of comparison between multiple keynotes - and indeed (based on the “time” point above) may not have had the luxury to stop and think about some of the things that are being presented. But, for better or worse, I do - and so it is that I inevitably start to think about how this keynote compares with those in the current batch, which appear like waypoints stretching back through the history of conference time.
Now I wouldn’t be so trite as to give marks out of ten for individual keynote presenters - though I am reminded of some of those dodgy talent competitions in the Seventies. Actually, even better than that, I wonder what Simon Cowell et al would make of Messrs Muglia and Gopal. The hand would slice, the brow would furrow and the head would shake, before the flabbergastingly obvious, albeit reasonably accurate commentary. “Haven’t we heard this before?” he would ask. Well, perhaps, but Simon never did understand the idea of paradigm shifts.
Strip away the repetition, the pseudo-excitement and exec cameos, and there is generally some good, interesting stuff in the average keynote. This one is no exception - new product announcements, capabilities that are now probable (whereas in previous keynotes they were just possible), yadda yadda. Right now for example, data visualisation is having its time in the sun - and to be fair Microsoft has a good story to tell around its SQL Server tools.
However, one area that’s pretty fundamental to IT never seems to get a mention at keynotes. All these new-and-improved capabilities are based on a core premise that they can exist in some kind of isolation from whatever else is in the IT environment. The irony is, of course, that they never can - even small companies have existing technology investments, and anything new will have to work with all that old, and seemingly inferior, stuff. This means both an integration impact to get new working with old, and a migration impact as the lucky elements of IT get rejuvenated and/or replaced. (Hat tip to Roy Illsley for mentioning licensing as well, just before I posted this.)
Even if it were possible to adopt a green-field approach, the sheer complexity of IT very quickly comes back to bite anyone (particularly in marketing) who underestimates it. I was in a conversation a few weeks ago with a systems engineer at the Symantec Vision conference (yes, there was a keynote there as well, from Enrique Salem). The engineer went through a quick summary of the complications of failing over a production environment, in terms of servers, networking and storage, the level of hard coding still required, and the potential for error if any part of the systems were even slightly different. It was a welcome reminder of just how complex things really are, in most, if not all IT environments today.
I won’t go on as the keynote is due to finish soon. But as well as “star quality” or whatever else we feel should be on the keynote scorecard, let’s have a row for “ability to deliver in existing environments”. Perhaps it is the absence of this as a metric which has led to so many great ideas, presented at keynotes past, turn into dust - or worse, be rolled out again a few years later under a new terminological banner. IT is complex, and will remain so however hard people try to present it as something that is becoming simpler. And the sooner industry figureheads can take this on board and talk about it accordingly, the better.
September 2010
09-06 – The great cloud joke
The great cloud joke
Once upon a time, in the cloud kingdom of Cloudalon, there lived a cloud king. One cloudy day this cloud King, who was cloudily named Cloud Cloud the fifth, called his cloud son, the cloud prince Cloud Cloud the sixth, over to his cloud side.
“My cloud son,” the cloud king said to cloud prince Cloud Cloud the sixth, “in another cloud kingdom a very short cloud distance away there lives another cloud king. This cloud king has a cloud princess that I think you should marry. Here she is, the cloud Princess Cloudina Cloud of Cloudonia.” Cloud prince Cloud Cloud the sixth, upon seeing the cloud princess Cloudina Cloud of Cloudonia, agreed to marry her. And so, on the next cloudy day, in the cloud garden, Prince Cloud Cloud the sixth stood by the cloud altar and watched his cloud bride-to-be, the cloud princess Cloudina Cloud of Cloudonia, march down the cloud aisle wearing a cloud wedding dress and carrying a bouquet of cloud flowers. Just as the cloud princess Cloudina Cloud of Cloudonia reached the cloud altar, however, an evil cloud magician appeared and cast a cloud spell on the cloud princess. In a cloudy moment, the cloud princess Cloudina Cloud of Cloudonia had vanished.
“What have you done?” cried the cloud prince Cloud Cloud the sixth.
“I have sent the cloud princess Cloudina Cloud of Cloudonia to a cloud cave in the cloud mountain Mount Cloudtop. There, in her cloud cave, she is guarded by the cloud dragon Cloudfang. The cloud princess Cloudina Cloud of Cloudonia is cloudily safe there, but the cloud dragon Cloudfang, will not let her rejoin the cloud kingdoms of Cloudalon and Cloudonia.”
“You are cloudily insane,” the cloud prince Cloud Cloud the sixth said to the Cloud magician, but the cloud magician had vanished.
“What are you going to do, my cloud son?” the cloud king Cloud Cloud the fifth of Cloudalon asked his son, the cloud prince Cloud Cloud the sixth.
“I am going to take my cloud horse, Cloud Lightning, and my cloud sword, Cloud Death, and go slay the cloud dragon Cloudfang and rescue the fair cloud maiden the cloud princess Cloudina Cloud of Cloudonia.”
“May the cloud God speed you well on your cloud journey,” the cloud king Cloud Cloud the fifth of Cloudalon cloudily blessed his cloud son, the cloud prince Cloud Cloud the sixth. With that, the cloud prince Cloud Cloud the sixth got his cloud sword, Cloud Death, and his cloud horse, Cloud Lightning, and rode off to the cloud mountain of Mount Cloudtop and the cloud cave thereon, in which lived the cloud dragon Cloudfang and his cloud prisoner the cloud princess Cloudina Cloud of Cloudonia.
The cloud hero of this cloud story, the cloud prince Cloud Cloud the sixth of Cloudalon, rode his cloud horse Cloud Lightning over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the cloud mountain of Mount Cloudtop and the cloud cave thereon, in which lived the cloud dragon Cloudfang and his cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Eventually, the cloud horse Cloud Lightning got tired, so the cloud prince Cloud Cloud the sixth carried his cloud horse Cloud Lightning over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the cloud mountain of Mount Cloudtop and the cloud cave thereon, in which lived the cloud dragon Cloudfang and his cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Finally, the cloud prince Cloud Cloud the sixth reached the cloud mountain Mount Cloudtop. There, in a cloud cave on top of the cloud mountain, Prince Cloud Cloud the sixth of Cloudalon could see the cloud smoke from the cloud dragon Cloudfang who lived in the cloud cave in which the cloud princess Cloudina Cloud of Cloudonia was a cloud prisoner. Our cloud hero, the cloud prince Cloud Cloud the sixth of Cloudalon, climbed the cloud mountain Mount Cloudtop and slew the cloud dragon Cloudfang as the cloud beast slept cloudily. The cloud prince Cloud Cloud the sixth of Cloudalon rescued the cloud princess Cloudina Cloud of Cloudonia. But their cloud adventures had not yet come to their cloud close. They still had to get home cloud and sound.
So…
The cloud hero of this cloud story, the cloud prince Cloud Cloud the sixth of Cloudalon, and the newly rescued cloud heroine, the cloud princess Cloudina Cloud of Cloudonia, rode the cloud horse Cloud Lightning over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the same cloud mountain of Mount Cloudtop which in the cloud cave thereon the cloud prince Cloud Cloud the sixth slew the cloud dragon Cloudfang and rescued the cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Eventually, the cloud horse Cloud Lightning got tired, so the cloud prince Cloud Cloud the sixth carried his cloud horse Cloud Lightning and the newly rescued cloud heroine, the cloud princess Cloudina Cloud of Cloudonia, over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the same cloud mountain of Mount Cloudtop which in the cloud cave thereon the cloud prince Cloud Cloud the sixth slew the cloud dragon Cloudfang and rescued the cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Eventually, The cloud hero of this cloud story, the cloud prince Cloud Cloud the sixth of Cloudalon, got tired, so the newly rescued cloud heroine, the cloud princess Cloudina Cloud of Cloudonia, carried the cloud horse Cloud Lightning and the cloud prince Cloud Cloud the sixth of Cloudalon over many cloudy miles along many cloud roads and through many cloud fields. She crossed many cloud streams and many cloud mountains, though none of them were the same cloud mountain of Mount Cloudtop which in the cloud cave thereon the cloud prince Cloud Cloud the sixth slew the cloud dragon Cloudfang and rescued the cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Cloud alases and cloud alaks, though, for it seems our cloud heroes, the cloud prince Cloud Cloud the sixth of Cloudalon and the cloud princess Cloudina Cloud of Cloudonia got lost on their way home, for they wandered into the cloud kingdom of an evil cloud king, the evil cloud king Cloud Cloudonovov of Cloudovia. This evil cloud man had the cloud heroes, the cloud prince Cloud Cloud the sixth of Cloudalon and the cloud princess Cloudina Cloud of Cloudonia, arrested and taken to be driven away and thrown into the cloud dungeon. Just before the evil cloud king Cloud Cloudonovov of Cloudovia put them on the cloudy transport, however, he said….
“Cumulon, im bus!”
With many thanks to http://www.sccs.swarthmore.edu/org/swil/JoelPage/purplejoke.html
October 2010
10-08 – Onward and upward
Onward and upward
As you may have seen by now, I have taken the decision to take a step back from working at Freeform Dynamics.
When I first became an analyst some ten years ago now, little did I know what the journey would have in store. As far as I was concerned with my background as an IT manager, here was an opportunity to share the lessons and experiences I had learned with a much wider audience. Joining Freeform Dynamics at first glance appeared to be more of the same – of course I would get the opportunity to continue working with Dale and Helen, but being an analyst was being an analyst, right?
Freeform’s model was simple – to offer firms custom-designed research at a more accessible price than previously had been available, distributing it using media relationships and the still-nascent social Web. The ‘trouble’ came as a deeply shared goal emerged from our research programme – to focus on how organisations were really using IT, rather than how anybody might want them to. From within the company, this seemed a most natural thing to do, though over time it became apparent just how out of kilter this sometimes was with the broader landscape of IT marketing.
All the same, we plugged on, using the funding from vendor-sponsored studies to broaden our own understanding of how IT is done. We learned, we debated, we discovered, we tested hypotheses and opinions and we fed it all back into the machine. The partnership with online news site The Register gave us unprecedented access to those at the coal face of IT – from senior decision makers to programmers, operations staff and deeply technical specialists. Working with The Register has been a fantastic litmus test, as its audience does not hold back if what we are saying does not resonate!
Being small also made us nimble and we have been able to flex with the times, riding out both the downturn and the arrival of social media to our advantage. Early on, we had taken the active decision not to write white papers – these are vendor-sponsored short documents that actively promote a specific product or approach – and to this day I cannot get my head around how they are anything other than paid-for advertising.
However, we did spot a gap in the market for orientation guidance, that is, simple-to-understand, balanced explanations of a technology area – what it was, where it was useful, and how to start thinking about it. As the market for hard-core research dried up during the downturn, we identified alternative revenue streams initially founded on basic primers, which led to working with Wiley on the ‘For Dummies’ imprint. This in turn gave us the idea of setting up our own ‘Smart Guides’, which have been very successful, and much-coveted.
Freeform Dynamics has grown from success to success, even though it has gone through the same tough times as everyone else, and despite internal challenges such as dealing with Dale’s illness. When he offered me the CEO role two years ago I knew I had no choice – it was an offer I couldn’t refuse, not just because of the opportunity, but also because I couldn’t see any clear future for the company if I said ‘no’. And I haven’t regretted a single moment; it was one of the best decisions I have ever made.
Internally however, we all knew it was not going to be forever. I would love to be the kind of person who could take a role and stick with it for twenty years, but I’m not – on the upside (this is what I like to tell my wife), it’s the breadth of experience that makes me what I am. I initially told Dale and Helen I would join for two years, and then we would see what happened; while I in no way feel I have outstayed my welcome, I have had to ignore or otherwise put off a number of other opportunities and activities. Even as part of such a sterling management team, running Freeform Dynamics has been a full-time job in every sense.
In practical terms, the timing of my departure was never going to be ideal. At this point however, we have defined a structure, approach and organisation which enables the company to lead with its own agenda, rather than being buffeted by the agendas of the industry; we have a more pro-active approach to growing client relationships; we have a renewed focus on research as budgets start to free up, and finally, a reputation that is second to none as a ‘voice of reason’ for the industry. The mainstay of the exit plan was to leave things in good shape, and I believe that is exactly what we have achieved.
I shall be spending the next couple of months dealing with some immediate side projects – yes of course, there is a book in the works, and I’ll be working with Dale and Andy on some research we have been doing into more consumer-related aspects of technology. I’ll also be taking the opportunity to stand back, do some research and have a proper think about it all, something that just hasn’t been possible until now. I’m delighted to remain an active affiliate of the Freeform team, and I am not ruling out anything for the future.
Above all, I am immensely proud of everything we have achieved as an organisation. The Freeform team is undoubtedly one of the best I have ever worked with, not just in terms of competence, which equals any analyst firm out there, but also philosophy. Rather than forming an ivory tower so common among analyst firms large and small, Freeform Dynamics occupies the same space as the people it ultimately serves – those who define, procure and deploy IT systems and services for their organisations. To my very core and whatever happens in the future, I will always be proud to be a card-carrying Freeformer and I look forward to whatever opportunities may arise to work with Dale, Helen and the team again.
Onward and upward indeed!
10-24 – Rush-Chemistry now available in paperback
Rush-Chemistry now available in paperback
As mentioned previously, I’m delighted to announce that Rush-Chemistry is now available in paperback. This book first came out in hardback in 2005, but is only just becoming available as a soft-cover due to the sad death of my friend and mentor Sean Body, who ran the publishing company Helter Skelter. The book has been re-proofed and all of the typos, inaccuracies etc. (as listed in the addendum) have been fixed.
UPDATE: If anyone wants to buy directly from me (signed or otherwise), I have ordered a number of copies from the publisher, so let me know by comment below or by emailing/Paypal to jon-at-joncollins-dot-net. I will send the books out as soon as they arrive!
Prices including P&P are as follows:
- UK - £11
- Western Europe - £15
- US/Rest-of-World - £18
Rush – Chemistry is the complete history of the world’s favourite Canadian rock band. The book follows Geddy Lee, Alex Lifeson and Neil Peart from their schoolboy days right up to the global success of their thirty-year anniversary tour.
Here’s the official text:
Against a background of disinterest from the media and a refusal to compromise their music, Rush’s success was by no means guaranteed. Since the beginning, only the determined efforts and downright stamina of the band members and those around them were sufficient to counter the wall of silence. Sharing a single-minded determination to take on the system and win, Geddy, Alex and Neil have never rested on their laurels. Pushing themselves to achieve technical excellence, never avoiding the challenge of taking on new musical influences, through huge changes of fashion and major personal tragedy, the entity we know as Rush has endured. Thirty years on, the band is still creating new music and packing arenas and stadiums around the globe.
Meticulously researched over three years, Chemistry draws on over 50 new interviews with those closest to the band. As the most detailed biography of Rush ever written, this book pulls together the threads and investigates the reasons that have enabled this band to succeed against the odds.
November 2010
11-01 – Running the eBook gauntlet
Running the eBook gauntlet
As I start editing a new book edition, I find myself forced to think about eBooks once again. After all, the printed page is so last year - isn’t it? Whether or not print will ever go out of fashion (no doubt a topic for another post), it’s equally clear that electronic books are here to stay.
As an author, then, what’s the quickest way to getting into electronic print? The first caveat is that while things are undoubtedly simpler than they were a few years ago, eBooks should be seen as a work in progress. Before Amazon and Sony’s eBook readers first came to market, there was no agreed standard for the eBook file format. Adobe PDF was the most commonly cited but it didn’t offer what is now seen as fundamental for offline e-readers - the ability to change text size, with pagination adjusting accordingly.
Today’s eBook readers have standardised around a limited set of formats. EPUB is the ‘most’ standard, as adopted by a number of e-readers including Apple’s iPad. The Amazon Kindle uses a derivative of the ‘legacy’ Open eBook format, known as AZW. Excuse the acronymitis but I hope it provides a point of reference if you see the terms elsewhere.
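As an aside for the technically curious: an EPUB is, under the covers, just a ZIP archive with a prescribed layout. Here’s a minimal sketch in Python of roughly what all those conversion tools are producing - the function name, identifier and metadata are my own illustration, and a real book would also want a table of contents, omitted here for brevity.

```python
import zipfile

# An EPUB is essentially a ZIP archive with a fixed layout:
# - a 'mimetype' entry, stored uncompressed and placed first
# - META-INF/container.xml, which points at the OPF package file
# - the OPF manifest/spine, plus the (X)HTML content itself
# Strict EPUB 2 validation also wants an NCX table of contents (omitted).

def build_minimal_epub(path, title, xhtml):
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>{title}</dc:title>
    <dc:identifier id="bookid">example-0001</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="text" href="text.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="text"/>
  </spine>
</package>"""
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container, zipfile.ZIP_DEFLATED)
        z.writestr("content.opf", opf, zipfile.ZIP_DEFLATED)
        z.writestr("text.xhtml", xhtml, zipfile.ZIP_DEFLATED)
```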
Writers want to write, and any moment they spend faffing around with technology is at best a distraction, and at worst a blocker. In traditional models, publishers have handled most of the gubbins, but have themselves acted as the main blocker to publication - while this has had the benefit of a level of quality control, it hasn’t been the sharpest of instruments. Surely, if an author has a (good) short story that was never likely to be published anyway (or if it was, only to be buried in an anthology), isn’t a better route providing a simple workflow with minimal intervention from anyone?
Googling on “how to create an epub ebook” yields long-winded lists of tools such as the one here (from the producers of Stanza - but it hasn’t been updated in over a year) or here. JediSaber has a tutorial on how to make an EPUB eBook, but it is not for the faint-hearted! While many tools are free, there’s also the option of shelling out for a tool such as Adobe InDesign - but unless CS5 is suddenly fantastically simple, that won’t be for the faint-hearted either.
This isn’t helping so far is it? OK, here’s the rub. I’m an author, I want to write a book using a ‘standard’ word processor such as Microsoft Word. Then I want, without sweat, tears or hacking XML tags, to see said book appear on my e-reader device. For the love of God, isn’t there a simple way of doing this right now?
Funnily enough, the simplest route today appears to be to lean on that old stalwart of electronic publishing, the PDF, and then use a PDF-to-EPUB converter. There appear to be a number of options, here, here and here. And here for Mac. Which may just offer the answer.
For authors who also want to make money from their writing, it may also be possible to miss out the middle man and let the service provider (such as Lulu or SmashWords) do the hard work. I’m just creating an account on SmashWords, which according to Joe offers the most straightforward way of creating a downloadable ebook. More news when I have it.
11-05 – Epilogue to Suburbia eBook and Audiobook
Epilogue to Suburbia eBook and Audiobook
Well that worked! Experiments and a bit of homework with various electronic book mechanisms led me to Smashwords. As an experiment, I’ve uploaded the text from a short story I wrote a few years ago, and it’s been converted into a variety of formats which is pretty cool. While I’ve made it a paid item, you can read the whole thing here.
I’ve also recorded the audiobook version, for future experimentation - notably with Podiocast.com. Watch this space - and click here to listen!
11-06 – Non, merci!
Non, merci!
And what would you have me do?
Seek out a powerful protector, take a patron,
And like the dark ivy that winds around a trunk
And makes it a prop by licking at its bark,
Climb by cunning instead of rising by my own strength?
No, thank you. Dedicate, as they all do,
Verses to financiers? Turn buffoon
In the base hope of seeing, on a minister’s lips,
A smile at last that is not sinister?
No, thank you. Breakfast, every day, on a toad?
Have a belly worn out with crawling? A skin
That gets dirty all the quicker at the knees?
Perform feats of dorsal suppleness?…
No, thank you. With one hand stroke the goat’s neck
While, with the other, watering the cabbage,
And, giving senna in the hope of rhubarb,
Keep one’s censer forever swinging under somebody’s beard?
No, thank you! Push oneself from lap to lap,
Become a little great man in a little circle,
And navigate with madrigals for oars
And the sighs of old ladies in one’s sails?
No, thank you! Pay the good publisher Sercy
To print one’s verses? No, thank you!
Go and get oneself named pope by the councils
That imbeciles hold in taverns?
No, thank you! Labour to build a name
On one sonnet, instead of writing others? No,
Thank you! Discover talent only in dabblers?
Be terrorised by vague gazettes,
And say to oneself without cease: “Oh, if only I might appear
In the good graces of the Mercure François”?…
No, thank you! Calculate, be afraid, turn pale,
Prefer paying a visit to making a poem,
Draft petitions, get oneself introduced?
No, thank you! No, thank you! No, thank you!
But… to sing, to dream, to laugh, to pass by, to be alone, to be free,
To have an eye that sees clearly, a voice that rings,
To cock one’s hat askew whenever one pleases,
To fight, for a yes, for a no - or to make a verse!
To work with no thought of glory or of fortune,
On that journey to the moon one keeps in mind!
Never to write a thing that did not come from oneself,
And, modest besides, to say: my boy,
Be satisfied with flowers, with fruit, even with leaves,
So long as you gather them in your own garden!
Then, if by chance some small triumph should come,
To be obliged to render none of it to Caesar,
And keep the merit of it for oneself,
In short, disdaining to be the parasitic ivy,
Even if one is not the oak or the lime,
Not to climb very high, perhaps, but to climb alone!
December 2010
12-08 – Reading up on VR Spheres
Reading up on VR Spheres
No US road trip would be complete without a visit to Las Vegas, and so it was this summer we found ourselves in the deliciously unsalubrious Excalibur Hotel. Serene, sophisticated, refined, the Excalibur was none of these, a mish mash of worn-carpet kitsch and tired tourists negotiating wheely-bin sized suitcases around the garish ranks of slot machines.
Standing innocuously near the entrance were two black nylon spheres, a small braided rope separating them from the public. “Back later” said the sign, and we did indeed come back ten days later, following a round trip of the Grand Canyon. With a couple of hours to kill before our flight we found ourselves inexorably drawn back to the spheres and the virtual reality game they controlled.
As Ben played, I got chatting to the cashier-cum-founder of VirtuSphere. Turns out the principle has been around for a while, but technology is only just catching up, what with bandwidth requirements for 3D wireless headsets and all. Put simply, each sphere operates like a giant mouse ball, only in this case a human is being the mouse, encaged within the ball. To paraphrase Morpheus, the future is not without a sense of irony.
A bit of research tells me that VirtuSphere is not the only kid on the block. Here in the UK, the University of Warwick has also been developing a ‘cybersphere’ for the past decade. No idea who patented it first (I’m sure we’ll find out) but it does beg the question of whether the virtu-cyber-sphere thing could have more mainstream uses, once the cost of entry drops below a certain point. I’m sure I’m not the first to consider how a transparent sphere might be used alongside an Xbox Kinect, for example.
And of course, the Inter Orbis connection with the two spheres did not go unnoticed…
12-13 – Print on demand an unqualified success
Print on demand an unqualified success
Of course. There are hundreds of thousands of people who know this already. But it doesn’t appear to be common knowledge in my peer group that print-on-demand is a perfectly viable way of creating something that looks, and feels, to all intents and purposes, like a book.
When I tested out lulu.com a few weeks ago, I had the advantage of owning a pre-formatted set of PDFs - the print-ready files for Separated Out. Lulu is a US-based service, so I uploaded the files, hit the big print button in the sky, and waited a month before the Amazon-like package dropped through the letterbox. Lulu’s version was printed on cheaper paper (which I had selected), and the cover was very slightly out of kilter. However it was a book, to all intents and purposes.
I don’t underestimate the skill required in layout, font selection and so on, as well as editing - this is not the end of the publishing industry. Given such services however, quite why we still have the term ‘out of print’ is beyond me. As well as, “Not available in this country.”
Final note: I asked friends what UK-based services might exist and I was recommended to two: Blurb.com and AuthorsOnline. I also found PrintOnDemand Worldwide.
12-13 – Virtual worlds redux
Virtual worlds redux
(post prompted by looking at PlaneShift, and wondering how things had progressed since I last played a MMORPG).
There seem to be three universal truths: the first that some things change faster than expected; the second that others change slower than expected; and the third, that we usually get them the wrong way round. As the esteemed Richard Holway was reputed to have said, “I’m great at predictions; I’m also p***-poor at timing.” As I dug through one or two old blog posts about virtual worlds, then, I was struck not just by how long ago I wrote them, but also by how little has changed since I did. Which isn’t to invalidate any of the principles, more to illustrate just how hard it is to gauge just what is round the next corner. It’s worth remembering, for example, that Facebook was only just being launched beyond educational establishments at that point.
Here’s a quite interesting presentation from 2008 which would be worth revisiting from today’s context.
Update: Further reading up about Habbo led me to this post about Anonymous. It’s a small virtual world.
2011
Posts from 2011.
January 2011
01-26 – Found in a field - a piece of pot
Found in a field - a piece of pot
Just handed over a piece of pottery to the Corinium museum - not quite Roman treasure but it still brightened up a dog walk!
According to Alison, the curator, “It appears to be a post-medieval rim sherd from a large vessel. It could even be a fragment of Ashton Keynes ware dating to the 18th century. A later 17th-century assemblage of Ashton Keynes ware from Somerford Keynes, Gloucestershire.”
Alison also provided the following background from Ed McSloy:
“Archaeological work ahead of housing development at Somerford Keynes, Gloucestershire found a pit containing an assemblage of late 17th-century pottery, almost exclusively products of the Ashton Keynes kilns. The site lies in the adjoining parish to Ashton Keynes, Wiltshire, well-known as a producer of glazed earthenware pottery, and particularly important in the supply of utilitarian vessels to Cirencester and Gloucester between the 16th and 18th centuries. The composition of the group mirrors closely that of the urban markets and, with the addition of a ‘chicken feeder’ form is a likely representative of the kiln repertoire from/in this period. The dominance of Ashton Keynes products in this group, which includes a number of seconds, suggests that local domestic requirements for ceramic could be met almost entirely by the nearby kilns.”
February 2011
02-04 – Jack Crymes RIP
Jack Crymes RIP
As part of a recent re-organisation I was running through my addresses, to find that John M. Crymes Jr. died in May 2009. Jack provided technical assistance to Rush on the Canadian leg of their ‘Exit Stage Left’ tour. Here’s an updated obituary from his university alumni site:
“John M. Crymes, Jr. (EE ’68) of Marin County, Calif., died in May 2009. He studied electrical engineering at the University of Virginia, qualifying in 1968. A pioneer of professional remote audio recordings, he designed and built the world’s first mobile audio recording truck in 1974. He worked with, among others, Bob Dylan, Bruce Springsteen, Eric Clapton, Paul McCartney, Natalie Cole and the Grammy Awards.”
One day I’d love to write the book about all the engineers who quietly went about their own jobs, supporting the superstars without need or expectation of any part of that fame. So many stories.
02-21 – Quick music industry (on the take) take
Quick music industry (on the take) take
I was alerted to the article “The death of the music industry” by @featuredartists on Twitter earlier today.
My reply:
“@FeaturedArtists Interesting, upcurve 89-95 looks incongrous as well. Returning to realistic level vs failure to monetise? NB shipments only”
It was only a few hours later that a better answer revealed itself – oh the serendipity, as a friend on Facebook highlighted an article on ThisIsLondon. To quote:
“So dry your tears at the thought of all those struggling record companies. The truth is that when a previous new technology called compact discs came along, they drove up margins to milk fans as they repackaged back catalogues. This made them so rich and complacent that they turned down the opportunity presented by Napster – which they could have bought for £10 million to dominate the digital market – and allowed Apple to steal their industry.”
Nuff said. For now.
02-21 – The day that Apple died
The day that Apple died
But February made me shiver, with every paper I delivered…
I have never believed that any but a tiny-but-psychopathic subset of IT executives actually set out to achieve world domination. The majority, whatever their aspirations, are destined to do reasonably well, to achieve good things, to shine for a while before one of the corporate monoliths buys them out, gives them an SVP and a pat on the back. They stay for a while then head down to San Diego, spending the next few years scrubbing the deck on their own yacht before they get bored and try to do it all again.
Every now and then, however, someone gets lucky. The seemingly alchemic combination of timing, functionality, design, cash flow and damn good marketing creates a perfect storm, and the wannabe global corporation suddenly finds itself in the position it had dreamed about, hoped for, even planned for. Cue the champagne, but don’t reach for a second glass as you discover that the even bigger challenge compared to “getting there” is “staying there”.
There’s another phenomenon at play. All those mid-sized IT companies sail as close to the wind as they possibly can when it comes to getting one over on the competition. Sales tactics, marketing games and architectural shackles are perfectly valid tools - if you want to win, you have to do what you can to ensure the competition loses. It’s all very well until, well, until you actually do win. And then, almost before you take your first sip of the Piper-Heidsieck, you are exhibiting monopolistic behaviour.
At which point, you have a choice. Do you ride it out as long as possible, making hay while the sun still shines, building a cash pile in the knowledge that when the party is over you can, duly chagrined, rebuild your reputation and keep going at the level you will have achieved? Or do you… oh never mind. There is no option B of course - as you quickly discover, with the markets baying for ever-increasing levels of shareholder value. With no other choice you press on, and over time you even start to believe that you’re doing nothing wrong.
We’ve seen it several times before. Microsoft played some fantastically underhand games in the 1990s, destroying any chances that competitors in the browser, office automation or music spaces would get a look in. They’re still paying for it now - a reputation, once tarnished, is very difficult to scrub clean. It was with some sense of irony that we watched the court cases reach a conclusion, years after the ‘crimes’ were committed. The world had already moved on, and new competitors had changed the landscape to such an extent that the judgements no longer had any relevance.
We could talk about Oracle, which has managed to stay just beneath the radar of corporate scrutiny, despite having systematically erased all but one of its competitors (The only explanation is that the company is just too boring to garner significant media attention, and thus judicial interest). Or Sun Microsystems, which played all the games and did very well for a while, but then bet its core hardware business on the dot-com boom, and was irretrievably brought to its knees when the whole thing imploded. Rough justice again - the company was ten years too early for the cloud “revolution”, and would no doubt have been a major supplier to the acreages of data centres springing up today.
Today however, no company is more in the spotlight than Apple, that darling of the media, the upstart that held onto its vision, with its maverick CEO and oh-so-secretive-but-thrilling image. There is a point in every company’s history that can be seen as “defining” - a point of no return, however well business is doing financially. In Apple’s case it was last week’s news (I fail to find an exact date, but I think it was Thursday… UPDATE: The Readability blog marks it as Friday 18th February) - that the company was implementing a subscription model which took 30% of media companies’ revenues should subscriptions be purchased from within an app.
The outcry has been deserved, and genuine. They f*cked us over, said one exec. Various reports confirm that the powers that be are looking to investigate the legality of the move - and perhaps in 15 years’ time, with intense (and expensive) lobbying from all sides, they might reach a definitive conclusion. But that’s not the point. For all its sexy products, its developer ecosystem or its carefully thought-out approach to interface design, Apple has - like the guy in Run Fat Boy Run (in the green) who trips his hapless friend - shown its true face.
It’s not the first time. The arrogance of the response to the iPhone 4 design flaw. The raising of the portcullis against Adobe Flash, rather than a dialogue on how to make it better. The constraints on apps in the App Store, and the lock-in experienced by each and every iTunes/iPod/iPad consumer, taking things to levels way beyond what Microsoft dreamed of, but could never reach.
No doubt people will stick with Apple, at least for a while - from a subscriber perspective, little has changed, and the company does make rather nice devices. Be in no doubt however, about the stranglehold the company now has on a whole segment of the market, not to mention on media reporting in general. The shame, perhaps, is that in the future, the name and cheeky logo will be linked to a company which went out of its way to f*ck its suppliers. Given the growing levels of bad feeling that pervade a company still (financially) on its way up, it’s unlikely there will be too many people rushing in to help as its trajectory starts to tip.
Leaving the last word to Don McLean.
I met a girl who sang the blues
And I asked her for some happy news
But she just smiled and turned away
I went down to the sacred store
Where I’d heard the music years before
But the man there said the music wouldn’t play…
March 2011
03-25 – It all began with the bright lights…
It all began with the bright lights…
It doesn’t seem at all wrong that as I write this, I should be on my way to a music convention in the name of Marillion. Going back a few years now, as the millennium approached I remember buying the album ‘Marillion.com’. I was bowled over by the inside cover – hundreds and hundreds of passport photos, faces of the fans on whom the band had become entirely reliant for its income. Here was a symptom of something far deeper than selling recordings to casual listeners. Whatever was going on, I knew, I wanted to be a part of it.
Meanwhile some big changes were taking place in my working life, as I stepped away from hands-on IT and became an industry analyst. These were exciting days for technology: the ‘.com’ had arrived, and what a splash it caused. The good times were not to last of course, as for a thousand reasons, the over-inflated bubble of expectations collapsed and many who had gambled on the promise of e-commerce lost their shirts.
The visions of online nirvana weren’t necessarily wrong, just architecturally incomplete, technically premature and economically unsustainable. Despite the dot-bomb, many of the principles have continued to be developed – and while some new brands such as Amazon and eBay survived to become household names, many other, older organisations have reaped the rewards of their evolving online presence. This evolution continues.
As I once again start out on a new journey, things don’t seem all that different to how they looked 12 years ago. Today’s commentators talk about cloud computing like it was in some way new, and social networking like it is an end game. Neither is true – both are just way-markers, brightly coloured handkerchiefs tied to sticks along the road. Look back and they are clearly visible, still fluttering. Look forward, and they are harder to discern particularly as both marketers and pundits try to re-direct traffic towards their own, agenda-led targets.
Technology is shaping the future for sure, but I don’t believe that humanity will fundamentally change – either to become something it is not, or some augmented version of itself, homo steroidiens. Rather, IT will have succeeded when it fades quietly into the background and enables people to be more what they are – bringing up their families, watching and playing sport, and indeed, listening to music and sharing the experience. Such things are fundamental to being human.
Technology can help, but all too often it still hinders. One challenge is to put the emphasis in the right place, on the goal, rather than the toolkit. That’s why I chose to step away from looking at technology per se, and why the next stage of my own journey will involve more of a focus on how technology enables communities to exist, in all walks of life. Best practices learned by musicians and artists will be just as valid to governments, aid agencies and private companies – and vice versa of course.
Of course I will still have an interest in all things IT – just as a craftsman needs to know which chisel to use (though these are new crafts, and nobody can claim to be more than an apprentice). At the same time, the best thing about communities is that they encourage participation, and I am thoroughly looking forward to rolling my sleeves up, building, facilitating and above all joining in. More very soon, but for now I will close the computer and go enjoy a weekend with wonderful people and great music, which is ultimately what it is all about.
April 2011
04-09 – Unusual Suspects
Unusual Suspects
To coincide with our Summer Garden Party bash, we’re having some mugs made with Dazz Newitt’s homage to Messrs Wilkinson and Glover, as appeared on the cover of Separated Out. It would be good to get an idea of quantity before I put the order in - so, if you are interested please let me know by emailing jon at joncollins dot net or as a comment/message, then I’ll know how many to order. All I need are names, quantities and emails for now!
Pricing shouldn’t be more than £10, including UK shipping for those who can’t make the event. Additional costs will apply for international shipping. Any profits will be donated to Nordoff Robbins, a music therapy charity, so it’s all in a very good cause as well!
Click on the picture for a bigger version - and apologies for my dodgy photoshop skills ;)
May 2011
05-24 – Gary McKinnon and the next 24 hours
Gary McKinnon and the next 24 hours
Before I start, I should say I’m all for justice. Find the bad guys, get the evidence, have a trial before a jury, and if they’re found guilty, bang them to rights. Justice isn’t always such a simple, binary thing however – and our ancient legal system has evolved over the centuries to take into account that not everything can be as clear-cut as people would like.
And so to the case of Gary McKinnon, the erstwhile hacker who has spent the past ten years on extradition row since he dared to break into a number of US military and NASA computers in 2001 and 2002. A few elements of Gary’s case are pretty clear cut: he openly admits that he accessed US military systems and had a good look around.
Where things are less clear are whether he undertook “the biggest military hack of all time” as the US authorities would have it. The whole area is subject to debate – which is why we have courts of law, evidence, juries and all that very important stuff.
So, why not just get Gary on a plane and in front of Judge Judy at the first opportunity? Things aren’t quite as simple in this context either. For a start, Gary has been diagnosed with Asperger’s Syndrome, a very real condition which could explain both why he didn’t fully appreciate the impact of his actions, and why he would not come through the court process psychologically unscathed.
There’s also the very real potential for mistreatment. Guantanamo and Bradley Manning’s current conditions both illustrate how US incarceration can stoop way below the level that our own government and people would consider humane. Our historical record may have dark spots as well, but that doesn’t mean we should just go along with others.
The point is not whether or not he did access US computer systems – it’s whether someone with a medical condition should be handed over to a foreign authority with such a reputation. I would say no, and a number of far older, wiser and more legally astute people have said the same – such as Justice Mitting, who granted a Judicial Review into the lawfulness of Gary’s extradition.
Gary should be tried for his actions, but he deserves a fair trial in the UK that takes into account the complexities of computer hacking, and how thinking has evolved over the past ten years since he was spotted. When Gary accessed US systems, he did so by running a simple script that looked for blank passwords – and he found plenty. It is unlikely that such weaknesses would still exist today.
Meanwhile, plenty of examples of far more malicious hacking, for financial or other gain, have emerged which put Gary’s own actions into perspective. From TK Maxx to the PlayStation Network, these are cases which deserve weighty sentences should the perpetrators be caught. The fact that Gary has spent the past decade with a dark shadow hanging over his head should also be taken into account.
I met Gary in London in 2006, when we both took part in the Infosec Hackers’ Panel. Now as then, I thought to myself, he’s just a bloke who happened to be caught at a time when computer defences were weak and his curiosity got the better of him. Five years later, with Barack Obama in town we have an opportunity to put a full stop on the extradition and end this lengthy saga.
Along with the large numbers of people who have already voiced their support, I appeal to anybody who has ever opened a drawer to see what was inside to flag Gary’s case over the next 24 hours. It might make all the difference.
05-24 – Should we trust the (social) network?
Should we trust the (social) network?
In the wake of the Playstation Network debacle, what should we be thinking about trust? Humans are inherently tribal, and the roots run deep. Benjamin Disraeli might have decided to be “on the side of the angels” rather than apes when faced with Darwin’s (r)evolutionary theories, but you don’t need to wade through the Origin of Species to see examples of humanity functioning at a more primitive level.
Adults, like babies, can become bad-tempered when they are hungry or tired; people parade themselves like birds for the continuation of the species; stressful circumstances cause the secretion of hormones prompting fight-or-flight behaviours that can be difficult to over-ride. Indeed, recent research such as that quoted in the New Scientist suggests that what we believe are conscious actions are registered in the brain shortly after they are enacted, undermining our very sense of free will.
The relevance to modern technology and how we use it for community building is that, however thickly we spread layers of social networking, collaboration and mobile sharing tools upon our daily lives, we are beholden to the same drives and needs as we have always been.
To take one popular tool, the question is not, “What would primitive societies make of Facebook,” but, “What would Facebook achieve for primitive societies?” To some this may imply an interesting exercise handing out iPads to Amazonian tribes, but primitive societies exist far closer to home - for all our plastics and silicon, we’re living in them.
Discussions around community building, collaboration and social networking tools frequently arrive at the topic of trust, like it is something that can be created. While it has long been recognised as a facet of well-formed societies, trust is not something that can be just willed into existence. Confucius is reputed to have said that rulers need three resources - weapons, food and trust - which implies the need for a fourth characteristic - that of a ruler.
In today’s theoretically flat-structured networks of relationships, there isn’t always going to be somebody in charge, so trust needs to develop in other ways. People can be grouped around a common theme - storage managers, say, or fans of Gregorian chants - who may be prepared to forsake a little uncertainty about their peers. Once a group exists, its very existence can imply trustworthiness - rightly or wrongly. In addition, trust begets trust, which is why peer referrals are so important. “Let me introduce you” assumes a level of pre-vetting on the part of the introducer.
Trust may be difficult to gain, but it can be quite straightforward to lose. Sony may well ride the storm following the hacking of the PlayStation Network, which was restored this week (which illustrates another trust factor, “the devil you know”). Other organisations such as HMRC remain dubiously comfortable despite their governance failings, as UK taxpayers have nobody else to work with. Outside of the public sector, smaller firms with less-well-established brands - and here I’m thinking about the large numbers of wannabe providers of community and social tools - may not be so lucky. Once bitten, twice shy.
05-28 – Mugs arrive Tuesday - get your orders in now!
Mugs arrive Tuesday - get your orders in now!
For all those who expressed an interest in procuring a mug bearing the “Unusual Suspects” design as featured on Separated Out, the good news is a couple of boxes of the blighters will be turning up on Tuesday. So get your order in now and you should be sipping your tea courtesy of your favourite characters by Friday! Combined profits from these and the Summer Garden Party in two weeks’ time are going to Nordoff Robbins’ music therapy charity, so it’s all in a good cause as well!
Mugs will be available at the SGP, or for those not able to come there’s a Paypal link below the - ahem - mug shot. If you want any option not in the drop-down list please email me and I will let you know how to order.
EDIT: Just sold the last one!
June 2011
06-08 – Apple iTunes Match and the law of unintended consequences
Apple iTunes Match and the law of unintended consequences
I can’t imagine I was the only one who furrowed both brows on Monday, when Steve Jobs announced “one other thing” in the form of the iTunes Match service as part of Apple’s iCloud announcements. The best explanation I’ve found of it is here, with all its uncertainties - but in a nutshell, for $25 per year, you can upload any music from your hard drive, whatever its provenance, and access it from any iTunes-enabled device.
This is not some fanciful idea invented without consultation. A couple of weeks ago, reported Kevin Parrish at Tom’s Guide, Warner and EMI had already been signed but Universal and Sony were still in negotiations. By Monday all four labels were on board, and the RIAA released a statement saying, “When a service comes along that respects creators’ rights and ignites fans’ appetites for their music collections, it’s a win for everybody.” There’s no statement yet from the UK BPI, but it would be surprising if they didn’t follow suit.
What does it mean in practice? According to Greg Sandoval at CNet News, “Details about the agreements are few, but here’s how the revenue from iCloud’s music service will be split, according to the sources: the labels will get 58 percent and publishers will receive 12 percent. Apple will take 30 percent.” Given iTunes’ 225 million subscribers, that could be a lot of money flowing back into the hands of labels and publishers if people buy into the model.
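To put rough numbers on that (my arithmetic, and it assumes the whole $25 fee is divided along those lines, which the reports don’t actually confirm):

```python
# Rough split of the $25 annual iTunes Match fee per subscriber,
# assuming the reported 58/12/30 percentages apply to the full fee.
fee = 25.00
split = {"labels": 0.58, "publishers": 0.12, "apple": 0.30}
for who, share in split.items():
    print(who, round(fee * share, 2))   # labels 14.5, publishers 3.0, apple 7.5
```

Small beer per subscriber, but multiplied across even a fraction of that user base it adds up quickly.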
Of course, those that want to spend on iTunes Match don’t have to be music pirates that have acquired the majority of their music on BitTorrent. Most, if not all, will have ripped music from CDs - a model which is still, theoretically, illegal in many countries. Still others will have accepted an album on a USB stick, or will have been burnt a copy on a CD. If anyone has obtained their digital music collection entirely through online purchase, speak now. No, I didn’t think so.
Effectively, then, Apple’s model legitimises the reality of how things are done at the moment, whatever may be seen as law-abiding according to the morass of legislation around copyright, IP, licensing and so on. It remains to be seen whether the T’s and C’s of the service offer an effective carte blanche - “By subscribing to this service you will no longer be considered a threat to the well-being of the recording industry,” or similar words.
Apple’s model is not unique in its - ahem - uniqueness when seen next to music streaming services such as Spotify and Last.fm, as well as newcomer Amazon. Each offers online services of various forms to multiple devices, for tens of dollars a year.
So, what could be the unintended consequences? Much attention is being paid to the legitimisation of piracy, but there’s little to indicate this will lead to more illegal sharing. It seems unlikely in the extreme that the labels have allowed Apple’s T’s and C’s to close the door to future litigation against the Pirate Bays and indeed, single mums who undertake illegal filesharing - particularly given that the iTunes Match model is yet to be proven.
Indeed, even if it did, would consummate pirates trust “the man” to honour the terms of the deal? As Music Week asks, “Will some form of fingerprinting technology be used to filter [pirated] tracks out?” Perhaps the question they should ask is, “Would such technology be used to identify recipients of illegally shared files, and incorporate mechanisms to scan the hard drive not just for music, but also for evidence of such activity?” As we all know, record labels have form in this area.
Frankly, I think any consummate pirate who took Steve Jobs’ offer on without being 100% sure they weren’t going to be hoodwinked would be completely insane. Like it’s going to happen anyway. “Yeah, sure, just hand over the weapon, I won’t shoot you.” Bang. “But I thought you said you wouldn’t shoot?” And as for it being a license to pirate - well, come on, really. “Great, I’ll now acquire thousands of new tracks and upload them, and you’ll watch me do it!”
Far more likely is that iTunes Match is a step on the way to more comprehensive media delivery services offered over the Web. In this regard iTunes Match looks like a poor cousin of Spotify - with which, for ten quid a month, you can listen to what you like, where you like, online or offline. OK, you could go for the cheaper option offered by Apple, and listen only to the stuff you happened to have on your hard drive at some point in the past. Want anything else and you’ll pay by the track.
In other words, if you’ve already bought into the idea of a cloud-based music service, then what’s to stop you going large? No doubt Apple knows this - it’s unlikely that it will stop here. The trouble with streaming models, as with radio, is that they appear to offer a much lower return to the artists than record sales - appear, that is, because they are not subject to the same level of transparency as sales and performances.
Meanwhile, we are yet to see anti-brands of the form of Napster, Kazaa, Pirate Bay, MP3.com and so on appear, offering low-cost or no-cost streaming services from countries in which it is harder to enforce legislation. Even if any such services were ‘clean’ (i.e. they weren’t also dens of malware and identity theft), they would offer no support to artists whatsoever. For the avoidance of doubt, I would recommend giving any such services a wide berth.
The bottom line is that, while we are undoubtedly lovers of the tactile, Apple’s iTunes Match service is one more nail in the coffin of buying music as a physical product. It may be great for the music industry and telecommunications companies, but I fear for artists tied into long-term contracts that majored on record sales; or worse (and I’m speculating here, but only a bit), contracts that have been drawn up more recently to the advantage of labels, in the knowledge that revenue will come from subscriptions to the channel, not payment for individual content. If it is indeed a transitional model, both legislation and contract terms need to catch up fast to ensure iTunes Match can be the “win for everybody” that the RIAA claims it is.
06-14 – Reflections on Still Canal Waters - Summer Garden Party 2011
Reflections on Still Canal Waters - Summer Garden Party 2011
The only thing we lacked was the numbers. There, I’ve said it. Thirty-forty people came, that’s a few over the number of tickets we sold; plus some local friends popped along for the afternoon to have a beer and listen in to the acoustic sets.
But, despite the turn-out, the Summer Garden Party was fabulous. Magnificent. Stunning. Those who know me, know I rarely get effusive, but this was one of those occasions. It isn’t just me – here’s a selection of reactions:
“Twas excellent, will have to bring a tent next time!”
“That was the most awesome day/night I’ve had for a very long time. I don’t use the word lightly…”
“Please, can we do it all again!”
“Absolutely fantastic! One weekend, two brilliant - but very different - GPs …”
And that was just the artists! So, what happened to make the 2011 Summer Garden Party such a great event?
The afternoon kicked off in the garden with Fergy, one man with an acoustic guitar and a bucketful of gentle charisma. To me, he epitomises everything music should be about. “But I only know three chords,” he says, somewhat embarrassed. Yes indeed, Ferg, but you have that indefinable quality known as ‘soul’.
Howard Sinclair was up next. I must get some of the set lists as specific song titles elude me – I know The Beatles were in there, and a number of other hits as well as some of Howard’s own compositions such as Nine Tenths. Howard’s a talented guy, and always worth listening to.
All of this in a beer garden at The Tunnel House, one of the most beautiful settings you could have for an outdoor performance. The beer was Potwalloper, locally produced over the border in Wiltshire – or if that didn’t tickle your fancy there were three or four other brews on draft. The garden itself was full at lunchtime, then emptied and filled again as the evening tide of local folk came for a pint.
For the final afternoon set, Rich Harding and Simon Rogers performed covers of Radiohead, Pink Floyd and others, as well as some Also Eden tracks, finishing with a rousing(ish – it was acoustic!) rendition of Fish’s The Company. “Thank you very much,” said Rich, “I’m now going for a lie-down.”
Of course, the fact Rich was even performing beggars belief – just a week before he was having yet another bone graft operation following his near-fatal motorbike accident last year. It wouldn’t be too far from the truth to say that a generation of Welsh medical students will qualify having used Rich as their worked example. Of this, more later.
Time for a barbecue, and a quick hat-tip to the staff and management at The Tunnel House for being so accommodating, helpful and friendly. Nothing was too much trouble. Meanwhile, in the barn behind closed doors, preparations were underway for the evening performances.
It’s difficult to describe the atmosphere of the barn. A great little venue, for anyone who knows Riffs’ bar, it’s like that only a bit wider with the bar at the back rather than down the side. From the punter’s perspective, on the night, with thirty-forty people stood up and dancing around, it was “critical mass” – any less and it could have felt sparse, but it was enough people to party.
Jo McCafferty had travelled all the way down from Aberdeen to perform. For anyone who doesn’t know her stuff, think a Scottish singer songwriter, Dido with an edge, singing of the joys and disappointments of life. A great singer, a genuine gem who has toured with Steve Hogarth and Midge Ure to name a few. It’s a good job Jo’s voice was so radiant, as the lighting was not – at least the way it was initially set up. A beautiful set, anyone who doesn’t have a couple of Jo’s CDs in their collection is missing out.
And so, to the Skyline Drifters. Five people who last played together seven years ago – Dave Woodward on guitar, Ade Holmes on drums, Tony Turrell on keys, Tony Makos on bass, and, yes, Rich Harding on vocals. Seven years, one rehearsal the night before, and in a stone barn in the middle of the Cotswold countryside on 11 June 2011, five musicians blew the bloody roof off.
I can dig out the set list if anyone’s interested – but it was, in a word, ‘esoteric’. It kicked off with Robbie Williams, then mixed Pink Floyd with Queen, Iron Maiden with ELO, and yes, Marillion with Fish. Ade drummed like it was the last chance he was going to have, Tone’s hand was flying round the neck of his five-string bass, and Tony’s keyboard rig (and his playing!) would have put Asia to shame.
Two highlights stand out – Comfortably Numb, where Dave’s bandmates stood back in awe as he pulled off one of the best renditions of his namesake’s solos that has perhaps ever been heard. Even this was transcended by the sheer joy of Mr Blue Sky. And then the laughter at Tie Your Mother Down (Rich reading lyrics with a torch), the passion of 100 Nights… it was all there.
Finally, a double-bill encore of Hooks in You and Market Square Heroes, both of which had the crowd bouncing. Then the lights came up, the adrenalin drained and Rich had to almost be carried off stage. Rich, I take my hat, coat, shoes and socks off to you. It’s not just your talent – the range of material you can tackle, and the way you change your style to suit. Short of Steve Jobs, I’m not sure I can think of someone with more strength of will.
What a night. Snatching the best kind of victory from the jaws of the mundane. Things could have been so different – any one of the thousand tiny details could have tipped things off the edge (dare I mention staging? ☺) – but they didn’t. To be fair, we chose the right crowd – what a great bunch of friends, who really get what it means to party.
We’ll be meeting up – the planning team – in a few weeks to have a cold, hard assessment of the Summer Garden Party. The big question is why more people didn’t come. There’s no right or wrong – nobody should be expected to turn up just because of their musical affiliations, friendships or geography. But the fact is, despite our best efforts to inform people (I hope we kept one step away from cajoling), we were very lucky to have just enough numbers to make the event a success.
Right now, I don’t know if it will remain a one-off. If it does, I think I speak for everyone who came when I say it was a privilege to be among such fine company, such great, talented musicians, in such a great place. There’s an element of magic sometimes, when everything comes together and just works. If I have any sadness, it’s only for the people who I know would have got such a kick out of it too.
I have already thanked everyone – but I repeat my unerring gratitude to all that made it such a success – organisers, artists, participants, venue. I’ll leave the last word to the guy who was on the bar in the barn on Saturday night. When I went in to pick up the staging on the Sunday, he looked at me with a big grin, shook his head and said, “You guys know how to rock.” Yes, yes we do.
06-20 – Of no interest to anyone whatsoever
Of no interest to anyone whatsoever
Apologies for this - but I’ve stumbled across the command that enables an SSH session to run on my SS4000-E storage server. With thanks to kevinsloan, the command is:
https://IP-address/ssh_controlF.cgi
From this I’ve discovered it is indeed a FalconStor IPStor disk server, which is apparently based on Debian Linux - according to this chap. I was hoping to be able to use Wake-on-LAN (out of sheer laziness - and to ensure network backups take place)… but it might even be possible to configure the box as an iTunes server.
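For my own future reference, waking the box remotely just means sending a ‘magic packet’ - six 0xFF bytes followed by the server’s MAC address repeated sixteen times, broadcast over UDP. A quick sketch in Python (the MAC address below is a placeholder, not the SS4000-E’s real one):

```python
# Send a Wake-on-LAN 'magic packet': 6 x 0xFF followed by the target
# MAC address repeated 16 times, broadcast to UDP port 9.
import socket

def wake_on_lan(mac: str) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, ("255.255.255.255", 9))

wake_on_lan("00:11:22:33:44:55")   # placeholder MAC address
```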
Update: instructions in German, here.
How the winter evenings will fly by.
06-30 – The future of the publishing industry?
The future of the publishing industry?
Thanks first of all to the organisers of the IC Tomorrow “Meet the Innovators” session organised with the Independent Publishers’ Guild at the BMA this morning. A quick-fire series of entrepreneurs had eight minutes to present their propositions, where possible reserving two minutes for Q&A, so there was no room for slack. Pitches were straight to the point – this is what we do, this is why, this is what we need, to an audience of representatives from smaller publishing houses and literary agencies, and a minority of observers such as myself.
While the offerings were diverse they all started from the same perspective - acknowledging the multi-device, digital, collaborative world we’re moving towards. Most showed facets of products and services that exist already – “Spotify for eBooks” said one tweet, “MMORPG for education” said another. One was built around sharing goals with peers; another, getting new authors to market and disintermediating traditional publishers. Innovation came from how capabilities proven in other domains were being applied to the world of the written word. The ability to search online literary databases from a Kindle for example, or to rent eBooks, or perform semantic analysis, or link QR codes to online resources, or to build communities of readers, each is innovative in its own way, even if it replicates capabilities already used elsewhere.
The fact each inspired feelings of recognition, rather than any Dragon’s Den-esque “I wish I’d thought of that,” is as much an indictment of the state of publishing as an illustration of what could be possible. Don’t get me wrong, I was genuinely impressed by the efforts that have gone into bringing such capabilities closer to fruition - it will be very interesting to see which ones make the leap from concept to mass market business model, and I in no way underestimate the effort involved.
Equally however, there is an undeniable game of catch-up taking place. Publishing industry innovation in the broadest sense is hampered by the fact that the industry is only just getting going. Let’s face it, it’s only recently that the idea of digital book versions has really been pushed by publishers. While all the evidence suggests it is being widely accepted by readers, mass-market digital music has been around for 25 years or more, so we really shouldn’t get too flushed with excitement about mass-market digital books.
This isn’t the place to go into detail about why this is, but it’s worth mentioning that eBooks are still a work in progress. Inherent flaws remain – the lack of an agreed eBook standard between Amazon and everybody else, for example, which is confusing readers and adding unnecessary cost into the content production process. This also means that valuable skills are being used inefficiently, in the publishing equivalent of re-keying, as books are (manually) reformatted to suit the different platforms. There’s a way to go.
These publishing industry developments are necessary, both in terms of catching up with other parts of the creative and media industries, and arriving at a point where digital written content is seen by all publishers as the norm, rather than an adjunct to print. However, in the future the chances are we will come to see them as getting to the starting gates, providing the foundation upon which real innovation can take place. We started to see signs of this in one of the presentations, the final one as it happened, which came from an organization called Space Bar Interactive.
In the presentation, a digitized book was recognized as one element of an interaction between the content creator and the content consumer: while an important element, it was subordinate to the interaction as a whole. We can perhaps see a similar phenomenon with the likes of JK Rowling’s launch of an interactive web site, which brings together elements of the original (printed) books, the world portrayed by the films and additional content, all to create a more rounded – dare I use the word (JK does) – experience.
While this bodes well, the state of the industry begs the question: what would pre-Gutenberg storytellers, educators and journal-ists make of the abundance of tools and capabilities now available, if they were suddenly transported into the now? Difficult to say but I believe they would see print as just one tool, probably inferior to their preferred approach of direct interaction. They might favour the podcast as a way of capturing a tale for example, using print only as a way of archiving. Or perhaps those who preferred putting pen to paper might still do so, nonetheless profiting from the many different ways that ‘paper’ could now be transmitted to their readership.
All options for communicating a story were, are and will remain valid, but the difference is their relative position in the hierarchy of preferred publishing mechanisms we have currently. Print holds the number one slot in the hearts and minds of publishers, even if digital books are currently outselling print copies by two to one on Amazon. That’s not to say publishers are wrong to hang onto print: just as face to face interaction will never be outmoded, nor will the human desire to have a more tactile reading experience.
The only inaccuracy is the idea that one will supersede the other as king of the hill: print is currently there by nature of the fact that there was no other option, and digital will win for a while merely on the strength of pent-up demand, but in truth both models are equally valid. Just as today’s pitches suggest, future winners will be those who make the right choices about clever combinations of content, formats and media choices based on the needs, desires and contexts of their audiences, and not on some arbitrary “we’ve always done it that way,” or “it’s the future, get with the program” meme.
The bottom line is that there is plenty of innovation to be had – but first we need to build a foundation of capabilities and an understanding of the valid part each can play in how writers and authors can engage with their audiences. As a final point, it didn’t go unnoticed on Twitter that most of the people in the room were using pads, that is, pen and paper, to take notes on what the presenters were saying. While this could be seen as a comment on how quickly (or otherwise) the publishing industry is adapting to new technologies, there is a wider point: that no single mechanism, new or old, will ever be suitable for all needs. If we are to be blessed with multiple choices, the skill will be making them work together, whether digital or paper-based.
July 2011
07-15 – A Million Tiny Gestures
A Million Tiny Gestures
Like everyone else in the UK I would imagine, I have been thinking hard about the ongoing News International situation and its ramifications. What with the relationship between politicians, the media and society at large firmly in the spotlight and with personal stories intermingling with demographic shifts, there’s a lot to get one’s head around. Some of it is undoubtedly good, not least the unearthing of illegality wherever it should be buried. Also laid bare has been the inappropriate influence of a few powerful people on successive governments. We can only hope that the institutional cowardice of the past can be replaced by a bit more gumption on the part of our elected representatives.
I confess also to having felt a certain unease at just how simple it can be to broadcast views, however. This discomfort is entirely hypocritical: when the subject of Britain’s forests came up, for example, I was one of the many who was able to express my deep concern through sites such as 38 Degrees by the mere click of a button. Campaign sites have become like a giant remote: couch potatoes can collectively email their MPs with the same ease as changing TV channels or voting for their favourite pop star. Twitter is even simpler: my suggestion (one of many) to boycott the paper required 140 characters and a carriage return.
While those in office may feel a bit miffed at the volume of messages they are starting to receive, both sides (and indeed, those who make it possible) are missing the point if they see such communications as simply a bigger version of what has gone before. We can use terms like “campaign” or “petition” and see the quantity of collected identities (no signature required) as in some way comparable to a quantity gathered by letter writing, complaint calls or people in the street. The results are not comparable, however: of course you’ll get a bigger response if you make the mechanisms easier.
In other words, however excited some may get about how they are changing democracy through the provision of yet another campaign site, they are not. Something bigger is afoot, which may be seen as the conglomeration of all such sites, together with social sites such as Facebook, and the offline interactions that they reflect.
This last piece is important. How easy it is to think that #NOTW was brought down by Twitter, for example, ignoring the fact that just as much conversation was taking place verbally and influenced through the gestures of individuals, both personally and professionally (think: stock prices). A million, ten million, a billion tiny gestures, some counteracting, others reinforcing, all add to the whole, just as they always have.
What’s different is that we now have a series of mechanisms to capture such moments. Each offers a fragment of opinion, partially thought out based on an incomplete understanding of the facts. Less the wisdom of the crowds, then, and more the sentiment of the crowds, which needs to be viewed as a whole as much as individual voices, if not more.
While this may be an obvious conclusion, it’s not currently the case. MPs write boiler-plated letters back to couch-based constituents, the cost of vellum, stamp and postal miles draining resources and missing the point. Meanwhile, social media sentiment analysis is a new field which will no doubt become an end in itself, also completely missing the point that it is measuring another set of measures, themselves based on incomplete modelling of what’s happening “out there”.
The question remains about whether, as Heisenberg might have predicted, the use of social tools or the act of measurement will change the behaviours they support. While this is perhaps inevitable in the short term, it is no recent phenomenon - uprisings such as those in Egypt will make use of the tools of the day, just as did pamphleteers like Thomas Paine in 1776, nigh on 200 years before the Internet existed. (Indeed, it would be an interesting study to see whether a correlation exists between generations of communication tools - printing, telephony, TV etc - and mass behaviour).
However, it’s important not to judge the tools available today with older models - comparing response numbers like-for-like, for example. It is a moot question whether or not the underlying nature of democracy is changing. Rather, given that messages can be passed faster than they were in the past, government needs very quickly to move to a position where it can respond in a reasoned fashion to genuine sentiment, instead of becoming no more than a series of knee-jerk reactions to inadequately expressed opinions.
07-19 – It's just a theory... gurus and mid-life crises
It’s just a theory… gurus and mid-life crises
I had an idea I wanted to test - that business texts and self-help books are written by people who are sufficiently compelled to do so. Here’s the principle: we keep going with our humdrum lives until we reach a point we don’t want to do it any more, for whatever reason. Some of us reach a kind of crisis point - which we emerge from, sometimes feeling all the better for it. An even smaller subset exits from this stage thinking, “Eureka! I’ve worked out the answer!” and feels sufficiently compelled to write a book about what they have learned. On occasion the book gets extremely popular and a new “guru” is born.
Now, I’m not going to say whether this is good or bad - but I thought I would test the idea. First I looked at the ages that people tend to hit mid-life crisis: this chart comes from a “2008 Gallup phone survey of 340,000 Americans” cited here:

As you can see, the “happiness slide” starts at about 34 and troughs at about 50. Now let’s look at the ages of a few popular “gurus”, and when they published what is generally seen as their seminal work:
| Name | Born | Seminal work | Age |
|---|---|---|---|
| Richard Carlson | 1961 | 1994 | 33 |
| Brian Tracey | 1944 | 1981 | 37 |
| Peter Drucker | 1909 | 1946 | 37 |
| Stephen Covey | 1932 | 1970 | 38 |
| Dale Carnegie | 1888 | 1926 | 38 |
| Mitch Albom | 1958 | 1997 | 39 |
| John Gray | 1951 | 1992 | 41 |
| Deepak Chopra | 1946 | 1987 | 41 |
| M Scott Peck | 1936 | 1978 | 42 |
| Susan Jeffers | 1945 | 1987 | 42 |
| Charles Handy | 1932 | 1976 | 44 |
| Robert Kiyosaki | 1947 | 1992 | 45 |
| Michael Hammer | 1948 | 1993 | 45 |
| Eckhart Tolle | 1948 | 1997 | 49 |
| David Allen | 1945 | 2001 | 56 |
I didn’t restrict this list in any way - if I thought of someone (or they were suggested to me), and I could find their date of birth, they were in. There’s a question over Richard Carlson (Don’t Sweat The Small Stuff), rest his soul, starting so early - but then he was in the psycho-analytical game anyway, as was M. Scott Peck (The Road Less Travelled).
I’m not saying that everyone has to fit - after all, it’s just a theory; there’s also the question of whether one needs 40-odd years’ experience before anyone, including publishers, will take one seriously. But it certainly would be interesting to know the back-story on some of the authors.
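For the curious, the sums behind the table are easy enough to reproduce - here’s a quick check of the ages and the middle value (my own back-of-an-envelope calculation, nothing more):

```python
# Quick check of the table: age at seminal work = publication year - birth year.
from statistics import median

gurus = {
    "Richard Carlson": (1961, 1994), "Brian Tracey": (1944, 1981),
    "Peter Drucker": (1909, 1946), "Stephen Covey": (1932, 1970),
    "Dale Carnegie": (1888, 1926), "Mitch Albom": (1958, 1997),
    "John Gray": (1951, 1992), "Deepak Chopra": (1946, 1987),
    "M Scott Peck": (1936, 1978), "Susan Jeffers": (1945, 1987),
    "Charles Handy": (1932, 1976), "Robert Kiyosaki": (1947, 1992),
    "Michael Hammer": (1948, 1993), "Eckhart Tolle": (1948, 1997),
    "David Allen": (1945, 2001),
}
ages = sorted(published - born for born, published in gurus.values())
print(ages)           # runs from 33 up to 56
print(median(ages))   # 41 - squarely on the happiness slide, if the theory holds
```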
August 2011
08-22 – Reminder to self: this is the information revolution
Reminder to self: this is the information revolution
We all love banding around terms like ‘revolution’. It makes us feel relevant and part of something worthwhile. The danger, of course, is that we use them so often that they lose their meaning.
Consider ‘information revolution’, for example. Yeah, yeah, heard it before, seen the video, got the powerpoint. Here’s a prediction however: when the dust’s settled in a few decades’ time, when our realities are augmented and infrastructures virtualised, in other words, when all those things we talk about so much have become the paving slabs and tarmac of the silicon highway, we’ll be able to see that the revolution was about information. Isn’t that bleeding obvious? Well, no, not really, on two counts.
First, it is just one of many statements, each vying for success. I’ve sat through plenty of presentations that say, for example, we have moved from mainframe to client-server to mobile, or indeed, from the information age to the collaboration age, or whatever. Marketers like to see revolutions around every corner, for fear that their own products are in some way dull and in the knowledge that the competition are doing the same. Each time, cynics and those who have been round the block more than once like to point out that they’ve seen it before, that it’s nothing new. In general they are right - we’ve seen it with social networking (“Started in the Seventies”), cloud computing (“Isn’t that just a mainframe?”) and so on.
Second, the technology industry is stuffed full of geeky blokes who, to fall back onto unfair stereotypes, tend to prioritise tools over what the tools can do. Technology is a Toad Hall with a surfeit of toads, who absolutely have to have the next big thing even if they never use it to its full extent. We’re all guilty, or at least most of us are - even when we agree that the computer we had 15 years ago was good enough for most purposes. The end result is that, every time a new wave of tech hits, we spend the next couple of years re-learning all the things we should have known already - security, management, you name it.
Against this background, we are guilty of ignoring the very real revolution that has been taking place since the Second World War and the subsequent invention of the transistor. It was always about information, and it will continue to be. From business analytics to Youtube, from mobile point-of-sale to smart grids, our ability to collect, store, process and access vast quantities of information continues to drive us relentlessly into the future.
And so, to the cautionary note. However we think about technology today, whatever urges us to buy the sexy new gadget or transformative software package, as an industry we have a responsibility to transcend our desires to deliver new and improved technologies, and recognise our role in catalysing the revolution that is taking place in front of our eyes, for better or worse. We have seen some great things, and some not so great, happen as a result of our new capabilities. We all have a role to strive for the former, whilst protecting against the latter.
September 2011
09-02 – Helping a friend: software delivery check list
Helping a friend: software delivery check list
“Because I can,” I’ve been helping out a mate who’s outsourcing some development work. I looked on t’ Web for a simple delivery checklist but all I found was DoD standard documents, which I thought were probably a bit over the top! Based on past experience, this is what I came up with - what did I miss? I’ll update with any feedback, gratefully received.
1. Scope of delivery - describing the changes made, or referencing a description elsewhere
2. Content of delivery - describing the modules that make up the delivery and any supporting information (test scripts, documents)
3. Checklist items:
- Test scripts updated - this should be done as part of the development, either by the developer or a third party
- User documentation updated - also part of the development
- Comments included in code and headers - to indicate date and scope of changes
- Unit tests passed - if there are any units to be tested of course - testing at the debugger level
- Functional tests passed - to confirm new functionality is working
- Regression tests passed - to ensure that existing functionality is still working
- User acceptance tests passed - to ensure package loads, runs and works acceptably
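If it helps, the same checklist is simple enough to keep as a small script next to the delivery itself - a minimal sketch in Python, with the item names and the delivery-note format entirely illustrative rather than any standard:

```python
# Minimal sketch: the checklist above held as data, with a crude completeness
# check and a plain-text delivery note. Item names and format are illustrative.

CHECKLIST = [
    "Test scripts updated",
    "User documentation updated",
    "Comments included in code and headers",
    "Unit tests passed",
    "Functional tests passed",
    "Regression tests passed",
    "User acceptance tests passed",
]

def delivery_note(scope, contents, done):
    """Render a delivery note, flagging any checklist items not yet ticked."""
    lines = [f"Scope of delivery: {scope}", "Content of delivery:"]
    lines += [f"  - {item}" for item in contents]
    lines.append("Checklist:")
    for item in CHECKLIST:
        lines.append(f"  [{'x' if item in done else ' '}] {item}")
    missing = [item for item in CHECKLIST if item not in done]
    status = "complete" if not missing else f"{len(missing)} item(s) outstanding"
    lines.append(f"Status: {status}")
    return "\n".join(lines)

print(delivery_note(
    scope="Fixes to the reporting module (see change log)",
    contents=["report_module.zip", "test_scripts.zip", "release_notes.txt"],
    done={"Unit tests passed", "Functional tests passed",
          "Comments included in code and headers"},
))
```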
To think I was once a software configuration manager - shocking isn’t it? :)
09-11 – Talking crap is not a crime
Talking crap is not a crime
When I was a child, I remember a television series called Poldark. I say “remember” but I don’t recall all that much about it - some period costumes, sombre lighting, a few ships on Cornish waves, and the occasional bit of dialogue is about it.
One bit that stuck in my mind for the past thirty-five years is a court scene. In it, a gentlemanly type (I assume Poldark?) is being accused of having shouted, “Pickings for all!” on the tragic occasion of a ship being wrecked on a nearby shore. The witness is a heavily-accented local man: I think I can remember sideburns and a waistcoat, though I could have added them later.
Quite clearly, the courtroom is baying for Poldark’s blood.
“Did you make the cry of ‘Pickings for all!’?” asks the judge (I can’t remember what he looked like).
“No sir, I did,” says the man. The court case crumbles; Poldark is released without charge and the credits roll. Perhaps the villager gets it later, but I don’t think so.
Humans are no strangers to talking crap. To shouting out when quiet would have been better; to making false accusations, dubious suggestions and unnecessary outbursts. No doubt, we have been doing all these things since the beginning of time. “Get him!” “String her up!” “I’m going to have you!” “Let’s go kick their heads in!” and so on - whether or not there was any intent to actually get, have, kick or string. We’re like that - particularly blokes I think, but maybe that’s due to my lack of experience.
Enter the internet, and however bad our spelling might be, stuff we might have said out loud is now being recorded, broadcast, archived for playback at any point in the future. On Facebook I see usually-gentle people saying that they’d give so-and-so a good slap. Or that they believe hanging is too good for someone. Or whatever. Do they mean such things? Perhaps - at the time. Do they seriously expect them to be acted upon? Of course not - and indeed, the fact that sometimes people feel their views are not being heard may itself lead to more vocal, and indeed more dramatic expressions of such views.
People talk crap online just as offline - and other people are watching and listening. So we end up with the case of AA, who ‘threatened’ to blow Robin Hood airport “sky high”. Did he mean it? No, of course he bloody didn’t. Was anybody else going to say, “Oh, good idea”? Of course not. Did his sentence send out a deterrent? It’s difficult to see what it was a deterrent against, unless it’s to deter people from speaking their minds.
The case of the two rather disappointing “rioters” in Northwich is more complex. Let’s “Smash d[o]wn Northwich Town,” they proposed - but nobody went, other than the police. Were they jumping on the bandwagon? Most likely. Did they succeed in increasing the violence or anything else in any way? It doesn’t seem so, not in their areas.
There’s a straightforward scenario, which starts with someone saying, “Let’s do a bad and illegal thing.” Bad and illegal thing is done, people get arrested and punished accordingly. Indeed, the fact that Jordan and Perry turned up will not have helped the case for the defence. But just how much of the sentence was against the act, and how much was it to do with the hopelessly bungled online post?
Don’t get me wrong, I’m all for criminals being shown the error of their ways, whether they are rich or poor, in positions of authority or on the streets. We all have choices, whatever our circumstances, and we should face up to the consequences. And incitement to crime - where crime is clearly taking place, or where its continuation (in the case of hate crime for example) is encouraged as a result - that’s just plain wrong.
On this day of all days however, let’s recognise that not all remarks, whatever the words are, should be interpreted as such an incitement. Yes let’s have a robust legal system, and give our courts the tools they need to separate right from wrong. Let’s recognise our online responsibilities, understand that cyber-bullying and insults to the non-PLU have no place in a civilised society. But let’s not create a world where people can no longer make an online remark for fear of who might come knocking on their door, however stupid it might be.
09-15 – Is Virtualization Evil
Is Virtualization Evil
Is virtualization evil?
With Linux officially (LINK: http://arstechnica.com/open-source/news/2011/08/march-of-the-penguin-ars-looks-back-at-20-years-of-linux.ars) 20 years old, the globally deployed, pan-device operating system is a far cry from the hobby system launched by student Linus Torvalds. While Linux kernel engineers now number over a thousand however, its original developer still keeps a tight control over what goes in and what stays out. So, when the notoriously outspoken Finn speaks, people listen. But just how seriously should his recent remark, “Virtualization is evil,” be taken?
Context is everything: the first thing to take into account is the state of play between virtualization and Linux. Two sets of virtualization code are now included in the operating system – KVM (from Red Hat) and Xen. While KVM’s journey to the centre of the kernel was relatively smooth, misalignments between Xen and Linux code releases led to disputes between the two development teams. “Xen developers listened to the feedback and they are now in the mainline kernel,” explained Linus in his Linux Con interview (LINK: http://events.linuxfoundation.org/events/linuxcon/torvalds-kroah-hartman) last month.
Linus Torvalds’ disdain for virtualization is about more than kernel developer politics, however. “I’m not a virtualization kind of guy… I built a kernel because I wanted to get my hands grubby with things like I/O ports,” he said, which is an interesting stance for the person who keeps the keys to the Linux citadel. Against this background, another vote in KVM’s favour was undoubtedly that it was designed to keep out of the way of the kernel, by assuming the hardware would do the virtualization. “There’s a certain affinity for kernel people to prefer KVM, where the Xen approach came from a different mindset,” said Torvalds.
With these factors in mind we can be a little clearer about what Linus meant by “evil”: first, difficulties caused by multiple developers creating code in ways that didn’t fit with the core team; second, a matter of personal preference; and third, a question of design approach. Linus made little comment about how virtualization is to be used, its huge potential and indeed adoption across enterprise businesses; nor did he bring up management or security challenges and their respective solutions – clearly, his remarks, and his core focus, are on the impact of virtualization on the beating heart of Linux, rather than such broader questions.
All the same, the question of design does set a few alarm bells ringing. One of Linus’ first “flame wars” was with Tanenbaum (LINK: http://www.dina.dk/~abraham/Linus_vs_Tanenbaum.html), the creator of MINIX, who called Linux, “a monolithic style system… a truly poor idea.” Linus was quick to refute that Linux was “a poor idea”, a refutation which has stood the test of time; more recently (LINK: http://www.youtube.com/watch?v=__fALdvvcM0) he said that he saw no reason why the 40-year-old UNIX architecture underlying Linux won’t still be suitable in 20 years’ time.
However, virtualization may change all that. What with hardware supporting an increasing set of virtualization features, and with hypervisors becoming ever closer to the hardware, the need for an operating system kernel that is so tightly wed to the underlying chipsets is increasingly in doubt. It could be that in the future, hardware-based APIs and bare metal hypervisors (LINK: http://www.networkworld.com/news/2010/072210-bare-metal-hypervisor.html) become the default, rather than the exception for all but the smallest devices.
And that, for Linus Torvalds, would be evil indeed.
09-23 – Herding the article cats - a five minute work flow
Herding the article cats - a five minute work flow
A bit of pre-amble. My simple, but surely not exclusive, even if slightly narcissistic need was for a single place where people could come to read all my articles, drawn from their different sources. Given that the same approach could, less narcissistically, present multiple articles from different authors, surely I wasn’t alone in thinking of this?
The secondary criteria were ease of use – in terms of getting from new-article-on-web to new-article-in-index – the ability to broadcast from a single place about the existence of said articles, and a wouldn’t-it-be-nice-if I could measure who was reading what. Oh, and finally, zero cost other than my time, and no geekery beyond clicking buttons.
I started thinking about some kind of online index – there must be hundreds of articles out there, so an archiving tool which could then link out to the original sources? Great theory – but outside library management software (maybe a bit OTT) no dice.
After a long trawl around content curation, collation and presentation, a diversion into link sharing (Digg, StumbleUpon etc) and a quick trip through newsreaders and RSS managers, I finally settled on bookmarks and tags. Simply put, Delicious to bookmark articles and tag them appropriately – with “interorbis” say. I could then access the complete list whenever I wanted.
That solved building a list and ease of use, but it wasn’t so hot on presentation. Undeterred, I thought about a different route – to hand-craft a thumbnail of each article and host it on my own blog. Trouble is, Wordpress isn’t really cut out for few-frills blogging, and the off-the-shelf display themes weren’t ideal.
I went back to the link sharing route. Remembering one option was to set up a separate Twitter account and use it to pump out articles (rejected for display reasons), my travels took me to two services that bridge the gap between microblogging and full-fat blogging, namely Tumblr and Posterous.
Posterous initially looked like the preferred option – better sharing capability – but I settled for Tumblr because of the more established database of themes available. It is, apparently, possible to use a Tumblr theme in Posterous, but that was getting all too complicated.
In my travels I also stubbed my toe on Ping.fm, a hub for collating and broadcasting messages to social networks. Add that to Bit.ly for link shortening and (more importantly in this instance) click-through measurement and I had all the tools I needed.
So, I now have a solution. Create blogs and articles wherever they need to be; then, at the same time they are published, copy a short section and use it with Tumblr’s share-link capability. Pipe the link through bit.ly first, to get that sharing measurement goodness, and then broadcast the final article via Ping.fm to Facebook, Twitter and LinkedIn.
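The link-shortening step, at least, could be trimmed further with a few lines of script. A minimal sketch, assuming bit.ly’s present-day v4 shorten endpoint and a personal access token (this post dates from the v3 era, so treat the details as illustrative); the Tumblr and Ping.fm steps stay manual:

```python
# Minimal sketch: shorten a freshly published article link so the click-through
# stats accrue to your own account. Assumes bit.ly's v4 API and an access
# token; the token value and article URL are placeholders.
import requests

BITLY_TOKEN = "YOUR_ACCESS_TOKEN"

def shorten(long_url):
    response = requests.post(
        "https://api-ssl.bitly.com/v4/shorten",
        headers={"Authorization": f"Bearer {BITLY_TOKEN}"},
        json={"long_url": long_url},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["link"]

if __name__ == "__main__":
    print(shorten("https://example.com/my-latest-article"))
```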
End to end, the whole thing takes less than 5 minutes. Not perfect as it still requires some manual intervention, but not completely wrong either. And I’ve got a few of my photos in as well :-)
09-26 – Social blood donation
Social blood donation
Quick take from the bed:
The blood transfusion service started using social tools only 6 weeks ago.
Popular with students apparently.
Hashtag but not displayed publicly
Lots of potential!
More soon, now back to clenching fist…
09-26 – Twitter link scams grow in complexity?
Twitter link scams grow in complexity?
It’s always gratifying to have someone reply to your tweets. But not, however, if they don’t actually exist - as I started to find out a few weeks ago. To whit: a response I received to a twitter comment - here it is - was repeated four days later. Another Twitter response has been repeated twice by different Twitter ‘users’.
Now, call me a cynical old hack (it won’t be the first time) but I sensed something fishy going on. When I looked at the accounts in question however, they appeared kosher - at least at the outset. Genuine-ish sounding names, genuine-looking photos and profiles, tweets that looked human, and only a small number of shortened URLs.
Looking more closely however, things became more fuzzy. That @richardarguinem chap - why is the photo of a woman with a baby? And why was @jesusbig4, a Miami resident, tweeting about George Osborne’s benefits claims?
On inspection it quickly became apparent that these were not real people, but rather, Twitterbots that were taking other peoples’ tweets and adding them to their own streams of automated consciousness. And here’s the ‘clever’ bit: shortened URLs were included only every now and then, but when there, they linked to shopping sites.
So, is this just another example of link bait using the latest social tool? Yes, in part. The links I tried (using a sandbox virtual machine) went through to game sites or lifestyle questionnaires, both of which presumably have some kind of affiliate relationship. In other words, nothing particularly illegal, though potentially lucrative.
Is it really that bad, apart from clogging up the twittersphere? Again, yes, in part. A number of risks arise from what might be just an initial foray into even more complex Twitter scams. It would be easy, for example, to tap into popular hashtags or even read users own bios and tweets, and send people in related directions - to download the referenced film, for example, or respond to an associated survey.
Equally, just because linked sites aren’t dodgy now, there’s nothing to stop them being so in the future. I wouldn’t put it past a URL scammer to link to a scareware site - “Your computer has been infected, please download the patch” etcetera.
What to do, apart from being vigilant and keeping genuine ‘shortened URL’ protections in place? Most of all, each one of us has a role in reporting such twitterbots - remember that they rely on looking similar to the real thing, so if one is left unreported for a few weeks, it becomes harder to spot.
No less important is education - for example by broadcasting names of Twitterbots and telling people what they are up to. It’s a shame nobody has come up with an automated tool which enables twitterbot followers (there are some, it’s easily done) to be informed about their error, so they too can do something about it.
The bottom line is that hackers follow the money, so where there’s opportunity, it’ll be taken. Awareness is perhaps the best weapon we have.
Postscript: I happened to leave a Twitter search on the term “touchpad” going as a Tweetdeck column. It’s pretty obvious where the linkbait is happening - and what Twitter could do about it - not least, looking for multiple RT’s on tweets more than ten days old. Just sayin’…
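That last observation is easy enough to turn into a rough heuristic. A minimal sketch, working over whatever tweet records you already have to hand rather than any particular Twitter API - the field names are illustrative, not real API fields:

```python
# Minimal sketch: flag accounts that repeatedly echo other people's tweets
# long after the originals appeared - the pattern described above.
# Tweet records are plain dicts; field names are illustrative, not API fields.
from collections import defaultdict
from datetime import datetime, timedelta

def suspect_echoers(tweets, min_age_days=10, min_echoes=3):
    """Return accounts that echoed stale tweets at least min_echoes times."""
    echoes = defaultdict(int)
    for t in tweets:
        original_at = t.get("original_posted_at")  # when the echoed tweet first appeared
        echoed_at = t.get("posted_at")             # when this account repeated it
        if not original_at or not echoed_at:
            continue
        if echoed_at - original_at > timedelta(days=min_age_days):
            echoes[t["account"]] += 1
    return {account: n for account, n in echoes.items() if n >= min_echoes}

# Three stale echoes from the same account are enough to get flagged.
sample = [
    {"account": "jesusbig4", "original_posted_at": datetime(2011, 9, 1),
     "posted_at": datetime(2011, 9, 20)},
    {"account": "jesusbig4", "original_posted_at": datetime(2011, 9, 2),
     "posted_at": datetime(2011, 9, 21)},
    {"account": "jesusbig4", "original_posted_at": datetime(2011, 9, 3),
     "posted_at": datetime(2011, 9, 22)},
]
print(suspect_echoers(sample))  # {'jesusbig4': 3}
```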
09-30 – I, Technology: Can you turn your back on the Web?
I, Technology: Can you turn your back on the Web?
You’ve had enough. Facebook, Twitter, targeted ads, unsolicited email, the uneasy, nagging feeling that your computer is doing things beyond your control, talking to other systems without your knowledge, the automatic downloads, the quiet information exchanges – is that your password information crossing the ether, in clear text? Or your credit card number? Or your children’s personal details?
Perhaps it’s more than a feeling. An incident. Something happens out of the blue – a case of mistaken identity, your access has been blocked, you have received one of those, “We’re sorry to inform you, but you could have been among a minority of our customers who…” letters. Or perhaps you’ve had a minor epiphany, a genuine, joyous moment of truth when you realise real life has so much more to offer beyond the screen.
Whatever. You’ve made the decision – you just want to leave the Web. Maybe you’ll come back – you don’t know. But for now, you’d like to take your information and go. Just how viable is it, however? Can you really just pack up your virtual possessions and walk away? While this would never be a straightforward question given just how complex the Internet has become, it should at least have an answer.
Let’s give it a go. Simplistically, yes, there is nothing stopping you turning off your computer, unplugging the broadband from the wall and dropping any smart phones into a bucket of water. It might be wise to inform friends, family and work colleagues first – for example setting up an out-of-office email that says, “I am no longer here. If you want to find me, you’ll have to go outside and look.” Or posting a message on Facebook to the same effect.
There – that was easy – you’re done. Assuming that everyone you want to speak to is able to speak to you otherwise. Which means you probably don’t have a job, or indeed live in a place where others live, or have any friends at all. Perhaps it’s just you and your dog on a remote Scottish isle, in which case you are quids in.
Apart from the trace, that is – the virtual vapour trails of data you will have left, snapshotted, sliced, diced and stored by every service provider you ever came into contact with. The trails are yours – the principle that personally identifiable data is protected is enshrined in the UK Data Protection Act and many other national and international laws.
So, yes, you can in effect request the removal of such information. It’s simple. Just go to every Web site you have ever registered with and request that your information be removed. How hard can it be – you did keep a list of them all, didn’t you? It’s a good job you can trust all those big companies to respond to the request.
Ah. What was that? There could be complications. It seems that some companies haven’t been playing ball – Facebook, Microsoft and even do-no-evil Google have all been called out as needing to comply with a proposed right – not a ‘possibility’ – of citizens to say whether their personal information can be stored. And they don’t like it.
Giving both legislators and online providers the benefit of the doubt, let’s say you can erase the online identities you have created across all such companies. As well as e-commerce suppliers, forums, email list aggregators and so on (at least for the latter you can terminate your email accounts, leaving any messages sent to the now-defunct addresses to be mere ghosts in the global machine, passing across the network with no hope of reaching their destination).
Are you done? If only. You go to a party. Someone – a child, say – takes your photo and uploads it to their poorly configured, insecure Facebook account. Somebody else tweets your name; another photo is uploaded, with a timestamp and a GPS location this time. Before you know it, as far as the great data crunchers in the sky are concerned, you might as well be standing in Trafalgar Square with a giant name badge and a megaphone.
Meanwhile, even if you were satisfied with your new status as social pariah (Who was it that said, “Saying ‘I’m not on Facebook’ is the new ‘I don’t own a television’”?) you might find it increasingly difficult to get to the services you need. Cash-strapped governments see online services as a money-saver: while the majority of us simply hope they are right, others may feel the squeeze in terms of information access, responsiveness and indeed, cost.
In other words, while the Web can be a wonderful place, an electronic playground full of shiny toys, an endless source of information, a collective environment such as the world has never seen, you’re in it. Chances are, if you walk away from it, you’ll be back before long: to turn your back on the Web is tantamount to turning your back on society, which may not be precisely what anyone had in mind at the start of this journey. Where it will yet lead is anyone’s guess.
[First published on ZDNet]
October 2011
10-06 – Brief thoughts on Steve Jobs
Brief thoughts on Steve Jobs
I’m not sure what happened to my copy of Bob Cringely’s Accidental Empires, so I’ll have to paraphrase what he said about those upstarts who suddenly found themselves at the centre of the computer revolution. There was image-conscious Larry Ellison, the fantastically bright arch-nerd Bill Gates and his business brain Steve Ballmer. After a few chapters talking about the history of computing, their personalities and success stories, Bob turned his attention to the co-founder of Apple Computer. “Steve Jobs is crazy,” he wrote. Crazy, because he wasn’t like the others, following the money, fame or power. He was following an ideal.
As one of Steve’s earliest employees, it’s fair to say that Bob probably knew Steve Jobs as well as anyone, so I’ll leave the talking – and insight – to him. I am left with one thought however – that he lived up to the words of Mahatma Gandhi as closely as humanly possible, “Be the change you want to see in the world.” While he was in the right place at the beginning, success was never guaranteed; it would have been too easy to become one of technology’s also-rans, invest in a few films and a museum, and watch the world from the veranda.
Steve was the first of his generation to check out, but by the time he did, the empire he created was no accident. My thoughts are not of sadness, but inspiration.
10-12 – I, Technology: 7 reasons why schools need 3D printers
I, Technology: 7 reasons why schools need 3D printers
It is no great disclosure to say I am a governor of a local secondary school, but perhaps I should be honest about my own love of new technology, electronics and gadgetry of all kinds. As a post-modern, dyed in the wool geek, I do sometimes - okay, often - have a habit of getting excited about such shiny things, looking for uses where perhaps none exist.
In the case of 3D printing however, I’m pretty sure the engineers are on to something. I first saw such a printer in action at the Renault Formula 1 team’s headquarters, where it was used to print out prototypes of aerodynamic parts, which could then be tested in the wind tunnel. This particular printer was the flat-bed type, using laser interference to heat up specific points in a bed of resin. Drain away the remaining fluid and the part would emerge, almost majestically.
Other models of 3D printer exist, notably ones that create objects through extrusion of quick-setting materials, which can then be built upon layer by layer. The result isn’t necessarily an object of beauty - there’s only so much you can achieve by squeezing epoxy out of a tube. But both the capability, and the pieces that emerge, open new doors to down to earth, home-brew innovation.
Consider, for example, the RepRap open source 3D printer project that can actually create printer parts. Such a device does require some components that cannot be printed - steel bars, circuit boards and so on - so using the term “self-replicating” is a bit of a stretch. However the majority of pieces can be printed, as can spares, should anything break.
While the range of applications is pretty broad, or indeed, perhaps because of this, one potential “market” for 3D printers is education. Let’s consider some reasons why:
- Low cost of entry. Equipment to be used in schools on design courses takes an inevitable hammering, and therefore needs to be robust - a quality which comes at a cost. Education is facing a financial squeeze as much as any other sector, so 3D printers offer an option for providing lower-cost equipment which can be repaired locally without (expensive) third party intervention.
- Basis for creativity and innovation. While design equipment can be criticised for taking away the need to develop hands-on skills, it also offers the opportunity to develop more complex designs using parts that can be built on.
- Wider world online interaction. Open source 3D printing communities exist which offer, share and advise on design templates - these can be downloaded and used, or indeed uploaded for comment. This enables a shared forum where students can benefit from the knowledge of both peers and more experienced designers.
- Outreach to other schools. Changes to UK educational policy (for example the academy programme) have put schools more in charge of their own destinies, and provided more authority and responsibility. At the same time, this has encouraged more interaction between schools, both at a peer level and between secondary and primary tiers. 3D printing offers an opportunity to both share and enable, for example “printing printers” and taking them into primary schools.
- Business acumen. 3D printers offer an opportunity to produce items that have real-world uses, on more than a one-off basis. The opportunity to create simple toys or ornaments which can potentially be marketed and sold, albeit with the cost of raw materials, offers insight into manufacturing processes, market understanding and business development.
- Genuine real-world skills. 3D printing may be in the domain of the early adopter right now, but continued advances suggest that it will become more of a mainstream capability. For example, in several years’ time 3D printers may be more common in homes and businesses. If so, knowledge of how to create and print appropriate designs will become useful.
- Direct use for schools. There is no reason why parts need to be limited to students’ design projects. A 3D printer could be used to create items which are of real use to the school - for simple examples consider coat hooks, doorknobs and other fittings.
While the future of 3D printing is unclear, already uses are emerging that were not even considered by the original creators. For example the epoxy resin can be replaced by chocolate or sugar paste to make confectionery; printed parts can be milled to add more complex features; no doubt we will see printers with new features and techniques and using new materials, allowing for new types of design.
Maybe we will never be able to print a flat screen TV, or a car, or a house, but right now the low cost of entry opens up a wealth of possibilities. It would be folly not to investigate further - indeed, it’s what education is all about.
[This article first appeared on ZDNet]
10-13 – My Boris Bike questions
My Boris Bike questions
There’s an event next week (Tuesday 18) where you can give feedback to the managers of the London Barclays Bike scheme. I can’t make it but here are the questions I’m emailing in.
Sorry, can’t make your event! Great service - when it’s good it’s very good, the problem is around “edge cases” - so questions:
* Is there any way of creating more drop-off points based on your knowledge of load? And also, is there any way of a user easily logging that they tried to drop off a bike at a certain point, but couldn’t?
* Could we have a GPS enabled smartphone app which shows (and reserves) the nearest available bike, for 10 minutes say?
* Could we have a PDF map please - or make it easier to find! (I couldn’t find it online)
Kind regards, Jon
10-17 – Cloud Society: Will the consumer cloud finally have its day?
Cloud Society: Will the consumer cloud finally have its day?
Latest blog - http://ping.fm/gP1mY
10-17 – I, Technology: Older generations need innovation, not just internet
I, Technology: Older generations need innovation, not just internet
Today saw the publication of the Times’ “50 ways to improve old age” - the link is behind a paywall, but in summary, it did indeed contain 50 snappy ideas of how older generations could be treated with more dignity, helped across the road and so on. From the paper version I picked up in a cafe, I drew the conclusion that it was not based on any particularly exhaustive study or experience, and so should not, perhaps, be subjected to too much scrutiny or comment.
However I was, of course, interested to see just how much technology was seen to play a part. Of the 50 ideas, just four were either directly or indirectly related - in order:
- 1 - Get the online literacy rate up to 100%
- 12 - Have an “Ooh, was he in…” app
- 44 - Embrace robots
- 49 - Reward inventions and innovations that make life easier for the elderly
My first reaction, that this was a bit of an odd mix, was quickly replaced by an oh-here-we-go feeling about getting older people online. It has to be a good thing, doesn’t it? In all honesty, I’m not so sure - or at least, I don’t believe that it is possible to distil the question down into a simple, binary decision that being online is better than being offline.
Let me be clear. Some sterling efforts are taking place across the UK and beyond, such as the race online - who should be applauded for their efforts. To not be online can very often be seen as a disadvantage for a raft of reasons - access to services and information, communication, interaction - all very good things.
However getting online should not be seen as an end in itself. I’ve been helping a few older residents in my village with their computers, particularly when they have problems with their connectivity, and what’s pretty clear is that the data-pipe on the wall can be the source of as many problems as it solves. Some people I know have refused to have anything to do with it, as they tried and failed to make it work for them; others access the internet only rarely, or when they are prompted.
The fact is that many of the currently available mechanisms for Internet access were designed by geeks, for geeks. Workarounds exist for many problems, and people with experience know how to avoid issues such as picking up computer viruses and spam. Things are changing all the time, and a minority of the population takes pleasure in keeping up, adopting new things and trying them out, coping with accidental data loss and remembering to do backups along the way. A majority don’t, however, and the proportion of people who actually enjoy tweaking, fixing and generally coping, drops with age.
So, while the principle that online access is a good thing is sound, the way that it is currently delivered is not. Even tablets, smartphones and their respective apps - which could be seen to hold some of the answers here - have largely been designed to satisfy the needs of the younger demographic. Ideas here are welcome - though, one would hope, with wider applicability than item 12 on the list, an app to look up old actors and find out what happened to them.
What we have is as much an opportunity as it is a challenge: genuinely easy-to-use, low-cost technologies, designed from the ground up to deliver on the specific needs of older people - which are often much the same as younger people’s - bus and train timetables, communication with local communities and distant relatives, keeping up to date, simple access to services and understanding of available benefits.
All this without needing to sit for an hour with the help desk of the broadband provider being told to switch the router off and on again, or having to get the daughter-in-law round to find out why the printer has stopped working even though it worked yesterday. If, as the information generation, we owe older people anything it is that all the clever stuff we have come up with really should just work, at least as well as the capabilities it reputedly replaces.
Scraping in at number 49 on the list are, “Inventions and innovations that make life easier for the elderly.” A catch-all category which, in all likelihood, deserves a “50 ways” list of its own. To suggest that the most important thing is getting people online not only misses the point, it fails to face up to the real challenge of making technology simpler to use for all of us.
10-17 – Tech.Maven: Are companies supposed to be social?
Tech.Maven: Are companies supposed to be social?
Wouldn’t everyone love to work for a cool company? You know the sort of thing – pizza and beer in the fridge, bright colours, bean bags and everyone generally thrilled to be there. Indeed, exactly the sort of company that would welcome all those zany, fun, exciting social-networking-in-the-enterprise tools.
While such a working environment might sound exciting to some, it doesn’t necessarily have the mass appeal that the purveyors of such tools, and indeed the pundits and commentators who embrace them, might believe. “I’m an electronic engineer,” said one person to me when I asked about his company’s use of Yammer. “What time do I have for filling in my profile or joining in some chat?”
According to those who buy into the concept of “Enterprise 2.0” at a more evangelical level, he must be wrong – or at least he’ll need to get with the program sooner or later. There exist “sea changes in the world right now in terms of the way we are globally transforming the way we live and work,” argues Dion Hinchcliffe, one of the “Enterprise Irregulars” – a group whose numbers include consultants, analysts and vendors in the social media space.
The sea changes and global transformations have but one goal, namely to take us to a place where business is done very differently from the past, enabled by a stratum of collaborative technology. Without a doubt, plenty of room for improvement exists in today’s sometimes monolithic, other times fragmented but always sub-optimal organisations. But – to ask the million-dollar question – what if that isn’t the case? What if, shock horror, business in fifty, a hundred years time will look quite similar to how it looks today?
Whatever the rhetoric, it’s clear that social media hasn’t yet made an enormous dent on the enterprise. Part of the reason is down to the lack of obvious impact on real business outcomes, beyond general notions of knowledge sharing and engagement, argues Rob Preston at Information Week. “The movement’s evangelists employ the kumbaya language of community engagement rather than the more precise language of increasing sales, slashing costs, and reducing customer complaints,” he says.
Perhaps the issue is in how the argument is being framed in the first place, once again (and how many times have we seen this) taking a new technology and seeing it not as a response to a specific need, but as the answer, the salve to solve all ills. Organisations are inherently bad, goes the thinking, and now – praise be! – we have the opportunity to change them for ever. Even the term “mainstream acceptance” implies some kind of waiting game, like it’s only a matter of time before even the most cynical succumb to the charms of the social.
It seems highly likely that we shall see social media tools go the same way as all other game-changing, world-beating technologies. The ability to store information in a relational database was once the most fantastic innovation – and perhaps, when we look back on all this in a few decades, we’ll find it still is. But apart from in a few sad corners of the Internet, you don’t tend to find people wanting to be called “database evangelists” anymore.
Technology commoditises, becomes part of the infrastructure, lowers in price as the attention (and the money) moves to the next big thing. In general few technologies ever die, they simply find their niche – that is, uses are found for them that deliver specific value. In social technology’s case, for example, getting a quick expert answer to a customer query, or as Rob Preston mentions, finding a spare part.
To add insult to injury, however, some innovations are never destined to exist in their own right. The humiliating phrase “Is that a product or a feature?” could well be applied to social networking capabilities, which are becoming more and more integrated into platforms such as Salesforce.com for customer management and Broadvision for content sharing. Enterprise social networking may well turn out to be one of those things that has succeeded when everyone stops talking about it.
As a final point of course, today’s youth is growing up believing that such tools just exist, there to be used just like the telephone or the video camera. It may well be the case that such technology-savvy youngsters arrive with different expectations of how business could be done. But given the dual pressures of corporate inertia and the need to maintain governance, the kids aren’t going to have things all their own way.
So, yes, it would be great to work in a fun office, to be creative, to share information and change the world. In reality however, no organisation is going to implement a technology simply because it fits with an idea of what business might be like, at some point in the distant future. Even if there is beer involved.
[Originally published on Tech.Maven]
10-17 – Tech.Maven: Stand by while we augment your reality
Tech.Maven: Stand by while we augment your reality
Reality can be dull, so who wouldn’t want to spice it up a bit? The phrase “augmented reality” (AR) distils all that could be hoped from technology - it might as well be called, “reality, just that little bit better,” like a technological pill to brighten the mundane. But does AR sufficiently merit its name? While it is still very much in the experimental stages, we are starting to see some real applications.
To be specific, AR involves capturing a video stream of a physical object or scene, and then adding information to the stream in real-time, displaying it on a suitable screen. Right now the whole package can be achieved with a smart phone with a camera and internet access - the video is captured, uploaded and recognised, then additional information is downloaded to add to what is seen on the display.
Perhaps AR is currently more, then, about “Augmented Imagery” - there’s not (yet) any augmentation of other sense-related information, say aural or tactile signalling. This isn’t a knock - more an indicator that we need to keep things in perspective when we consider examples of AR at work today. Essentially these fall into three groups: symbol-sensitive, location-sensitive and object-sensitive.
While symbol-sensitive AR is the simplest to implement, it is perhaps the most fun. The ‘symbol’ needs to be a pre-defined, fixed image that can be recognised by a program installed on a smart device. The latter can then add information in real time - for example adding a 3D avatar, or replacing somebody’s head with a cartoon image.
This model is not dissimilar to using QR codes - squares of pixels which can be photographed and interpreted to link to online information. Indeed, examples exist of using QR codes as the basis for symbol-sensitive AR. Both have also been used with some success at conferences and events, so it will be no surprise to see these applications growing.
Object-sensitive AR takes things one step further, in that image recognition software can identify specific objects and then construct a virtual world around them. Examples include Metaio’s digital lego box and apps to show how, say, to remove toner cartridges or other products from their packaging.
Finally we have location-sensitive AR, which captures the entire surroundings and adds information prior to displaying both. Google Sky Map is a simple, effective example of what can be done; other obvious applications are for travellers and direction finding, picking up specific street features and identifying the nearest pizza outlet, say. In these cases the video feed is supported by GPS information - so the software doesn’t have to work out which street one is on from scratch!
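To give a flavour of the sums involved in the location-sensitive case, here is a minimal sketch: given the device’s GPS fix and compass heading, decide whether a point of interest falls within the camera’s horizontal field of view. The formula is the standard great-circle initial bearing; the coordinates and field of view are illustrative only:

```python
# Minimal sketch: given the device's GPS fix and compass heading, decide
# whether a point of interest falls within the camera's horizontal field of
# view. Uses the standard great-circle initial bearing; values illustrative.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from point 1 to point 2 (0 = due north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def in_view(device_lat, device_lon, heading, poi_lat, poi_lon, fov_degrees=60):
    """True if the point of interest sits within the camera's field of view."""
    bearing = bearing_to(device_lat, device_lon, poi_lat, poi_lon)
    offset = (bearing - heading + 180) % 360 - 180  # signed difference, -180..180
    return abs(offset) <= fov_degrees / 2

# Facing just west of north in Trafalgar Square: is Nelson's Column in shot?
print(in_view(51.5073, -0.1276, heading=340, poi_lat=51.5080, poi_lon=-0.1281))  # True
```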
All of these models are being tested out in various ways by vendors, with AR capabilities finding their way into games, creating “a range of new possibilities” (in vendor-speak) such as incorporating a camera into a model helicopter and turning it into a virtual gunship. While such examples are indicative of what’s possible, AR has yet to find its killer app and has not, therefore, yet seen mainstream adoption.
Even so, it continues to develop. Gravity-related features or face recognition (as recently incorporated in Google Android’s Ice Cream Sandwich) are being used now; meanwhile, sites like Augmented Planet are speculating about integration of near-field communications and heads-up displays to enable more deeply immersive experiences.
Of course it is typical of the technology industry to try technology combinations and see what sticks - and AR is no exception. This is not necessarily a bad thing but neither should it distract from research into delivering AR-enabled capabilities that are of genuine, compelling use - for example educational applications or other places where ‘simple reality’ is not sufficient. It may be that there is no killer app at all - rather, like location-based services (touted as the next big thing eight years ago), AR features will simply find their place and gain adoption as part of other applications and services.
As AR finds its way more into the mainstream, the chances are some of the downsides will emerge - equally, then, it would be worth thinking about these up front. For example there are very clear privacy questions around using facial recognition in conjunction with AR. Right now we can sit on public transport with relative anonymity - but this could easily change if, say, our image could be mapped onto Google images. There’s also the possibility that an AR equivalent of social engineering takes hold.
Augmented reality may have plenty to offer - particularly as it starts to integrate other forms of information and as new applications are brought to the fore. We may yet reach a point where we drop the A and it simply becomes ‘reality’, at which point the bar of what augmentation means has to rise. Meanwhile, as a cross-over capability that can enhance applications and services across the board, it is certainly one to watch.
10-27 – I, Technology: Can I be the first to say the 'social network' is dead?
I, Technology: Can I be the first to say the ‘social network’ is dead?
So Facebook has announced a new data centre, the latest in a series of moves to keep on a par with Google, even as the two companies continue to ladle new features into their social collaboration platforms, all the while trying to steer clear of breaking any patents. While the trolls are probably rubbing their hands together in glee, not everybody is quite so thrilled. From the punter’s perspective it feels like the Microsoft vs Lotus vs Wordperfect bloatware wars all over again - while the gorillas are scaling out and stockpiling functional inventory, those sitting on the other side of the computer screen are left bemused, confused and in some cases downright cross at all the new capabilities they never asked for.
Back then of course, we had to wait for release dates before we could rail against the burgeoning floppy-loads of function. These days, it sometimes appears, capabilities get added or removed on a nightly basis without warning. Insiders talk about this-platform-versus-that-platform and measure popularity in terms of numbers of active users, or home page settings, or messages passed, with the inevitable consequence that quantity, not quality, is the driving force. Who cares if people don’t like the experience - to win means getting the most points.
Meanwhile, what started as a clever way of interconnecting individuals and passing simple messages is becoming a godforsaken mulch. Pioneer of clean communications, Twitter has maintained a reasonably simple interface, sticking to its microblogging guns and keeping its share of the headlines - but isn’t yet earning megabucks. Facebook and Google, each in their own way masters of experiment, are trying just about every possible combination of options for sharing of messages, content and services, pushing the boundaries of both legality and acceptability along the way.
And meanwhile meanwhile, everybody and their dog is queuing up to offer “new and exciting” ways of connecting, collaborating and generally becoming a downright nuisance to the people they count in their networks. We’re all guilty of a global affliction of over-sharing, exercising the multichannel equivalent of using every shape and sized arrow on the drawing palette, with only the excuse that everyone else is doing it. Me too.
Whatever is emerging from the frog-in-a-liquidizer mess, it isn’t social networking. It’s a cacophony of comments, a fire hydrant of feedback, an endless sump of suggestions for what we should be reading, watching, listening to, connecting with. Current signals indicate that it’s only going to get worse: the lines are drawn for a ponderous march forward, with the big players creating increasingly complex environments which we will all continue to use, despite the growing nag that it wasn’t what we signed up for. Not a Tower of Babel; more the shifting sands upon which no castle can be built.
And then, the oh-so-smarterati will say “social networking is dead.” They’ll cite the losers, referring back to Myspace and Bebo, recognising the falling rolls of active users, and saying it was all so predictable. They’ll be wrong - it wasn’t predictable, and social networking won’t be any more dead than it was when it was the latest big thing. In the tech industry we’ve seen it so many times - the relational database management system; the client server, or the service oriented architecture; the mainframe or personal computer. All were dead long ago, and have been, repeatedly, ever since.
But technology doesn’t die. Rather, it stops offering interesting opportunities for venture capitalists and media types who want to be wed to the next big thing. These are trophy technologies, to have hanging on your arm at parties, wearing something smart and with a good set of teeth. Truth is, to say, “So-and-so technology is dead,” is a euphemism for it finding its place in the substrate of infrastructure. In other domains we might say tarmac is dead, or push-joints for plumbers. They’re not dead, they just got boring when everyone else started to use them.
Social networking will never die, not as such. However, if there’s one thing we can learn from Facebook’s latest move it’s that the future is about providing a comprehensive platform for sharing, communicating and indeed other services, integrating what we currently call ‘social’ facilities along the way. In themselves, such capabilities don’t actually deliver anything - to continue the plumbing analogy, they offer fantastically simple to use, internationally available, amazingly real-time, multimedia pipework.
The rest is down to us - and the irony is, we’re currently spending our days using such tools to talk about what we might do, rather than getting on with doing it. The world hasn’t seen a better procrastination device since the invention of toe clippers. There will come a time, however, where people want to get on with their jobs, and will no doubt be happy to use social tools to support the communications they need to get the job done.
On this basis, what we may soon see the back of is “the social network”. Or at least, the recognition that such a thing exists. It doesn’t - in the hyper-connected global village that we are becoming, relationships exist because they really exist and exhibit social facets by their nature, not because they are programmed into such tools. Or, to put it another way, just because Arnold Schwarzenegger is following me on Twitter, that doesn’t mean I should be waiting by the hyperspatial door mat for a dinner invitation.
In the future, as socially-enabled services start to supersede social for its own sake, Facebook becomes simply another software as a service provider with as much in common with, say, Salesforce.com as Google or Microsoft. Indeed, looking at the latter company’s portfolio and how it is evolving, it’s not hard to see what any such company should look like a few years from now. But while social networking might have been the big innovation of its time, future innovation will take place outside of the technology echo chamber, in a world where social tools are seen as just that - tools.
Don’t get me wrong, high-tech has enormous power to support genuine advances in creativity, healthcare, achievement, wellbeing. Even something as simple as information sharing can have a quite profound impact, as we have seen in several cases already this year. While the social networking behemoths continue their slow march into the sand, we can only hope that a solid platform emerges that truly can be built on in the future.
[This blog was first posted on ZDNet]
November 2011
11-04 – I, Technology: Just how much privacy are we prepared to give up?
I, Technology: Just how much privacy are we prepared to give up?
This should be a short blog, as it asks a simple question, prompted by the announcement that both Visa and Mastercard are looking to sell customer data to advertising companies. However, while the question is simple enough to ask, it is not so straightforward to answer.
While I don’t have any firm statistics to back this up, I can’t imagine that many people are so violently opposed to having their data stored anywhere, that they systematically erase their financial and spending trails as they go along - pay by cash or cheque, seldom give out an address, always check the “no marketing” box, avoid internet spending, use a bank account purely as a repository for money and not as a base for standing orders and direct debits, etcetera. If you’re out there and reading this in an Internet cafe, please do come forward.
The majority of us consumerist Westerners - I think - will have some ‘form’ in giving up some information about ourselves to purveyors of financial services and consumer products. We mostly have a piece of plastic, indeed - as I found out to my chagrin a few years ago, when trying to pick up a rental car - some services won’t accept anything else. And even if the situation isn’t quite so binary (“no card, no siree”) we are prepared to give up a piece of ourselves for the sake of lower cost, increased convenience or other benefits.
It’s a trade - we accept payment for elements of our privacy. In the case of supermarket loyalty cards for example, we know that we and our shopping habits are being scrutinised, like “transient creatures that swarm and multiply in a drop of water.” But, to our delight, we receive points, or vouchers, without really worrying whether we’ve got a decent return on our investment. I do, occasionally, buy something completely random that I don’t need, just to throw them off the scent.
It’s the same for online shopping. However clunky the algorithms appear to be - just why do we still receive books for nine-year-old girls, some six years later? - be in no doubt that every purchase is being logged, filed and catalogued. It seems such a small step, when you buy a paperback from Amazon rather than paying in cash for the same book from the local shop; the difference, however, is that the purchase of one will remain forever, indelibly associated with your name.
There was a certain gentleperson’s agreement that was in place at the outset, given that nobody ever reads the T’s and C’s of these things. Namely: that the data would only be used for the purpose it was originally intended. Indeed, such principles are enshrined in laws such as the UK Data Protection Act (DPA). But purposes change, and so do businesses, and it isn’t all that easy to map original intentions against new possibilities.
Enter online advertising, itself subject to regulation in various forms including the DPA. It is unlikely that the Web would exist in its current form without the monies derived from advertising, from click-throughs and mouse-overs, to tracking cookies and web beacons. Advertising is the new porn industry, pushing the boundaries of innovation and achieving great things, despite a majority saying they would rather it wasn’t there.
In all probability, we just need to face up to the fact that we are making a trade. If you want to know everything about me so that you can sell me stuff, then you can pay for the privilege. Why not come to my house, see the books on my walls, the food in my fridge and watch my children dancing around in the garden - that way you can work out exactly what it is I want and offer me a highly customised set of products, goods and services. I’ll make you pay - hmm. Five hundred quid for full access? A thousand?
Or, maybe, that’s not what we want at all. More realistically, if someone came and offered such a service, we would tell them where to stick it. To the point: nobody in their right minds would even consider doing it in the first place. But what if, instead, every time anyone came to our house to offer anything at all - a new kitchen or a charity bag - they happened to be carrying a camera, and if I didn’t remember to tell them otherwise, they would be at liberty to flog the photos?
When it comes to the Internet, we are in danger of doing precisely that - evolving into a situation that nobody wanted because the vested interests were moving faster than the regulation, without anyone giving serious consideration to the consequences. I’m all for advertising - bring it on, I say - but let’s not sleep walk towards a scenario where we find we have already given away our privacy, for free. Once it’s gone, it’s going to be a devil’s job to get it back.
[Originally posted on ZDNet]
11-08 – Solid state storage now running at a pound a Gig
Solid state storage now running at a pound a Gig
Just marking the moment really - a cursory browse around UK computer components sites suggests that a 500Gb solid state drive is just about available for the same number of pounds. eBuyer.com, for example, is stocking a Crucial 500Gb drive for £550. At the smaller end of the scale, a generic 16Gb Class 10 Micro-SD card from MyMemory is currently costing £16. Sure, you can pay more and there’s a load more factors than storage size, but it does look like quite a watershed moment.
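The claim is simple division, of course - for the record, a quick sketch using the two prices above:

```python
# Quick sketch: price per gigabyte for the two examples above.
examples = {
    "Crucial 500GB SSD": (550, 500),          # (£, GB)
    "16GB Class 10 Micro-SD": (16, 16),
}
for name, (pounds, gigabytes) in examples.items():
    print(f"{name}: £{pounds / gigabytes:.2f} per GB")
# Crucial 500GB SSD: £1.10 per GB
# 16GB Class 10 Micro-SD: £1.00 per GB
```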
If you’re wondering what to put your 2.5“ SSDs into, here’s a suitable NAS enclosure from Synology - the DS411slim. And meanwhile, another interesting product from Maplin - a 4-slot SD Card RAID controller. One day all storage will be like this :)
11-14 – Tech.Maven: What if CIOs shouldn't have a place on the board?
Tech.Maven: What if CIOs shouldn’t have a place on the board?
A confession for a Friday - I’ve been struggling with a concept that’s been around for years. It’s the idea that CIOs (Chief Information Officers, or IT Directors in UK parlance) should have a place on the board. In principle this makes sense, given just how strategic both information and technology are supposed to be these days. Equally, senior technologists can hit a glass ceiling in their own careers, which can be frustrating, but perhaps not as annoying as the fact that IT still isn’t taken as seriously as it could be by the business side of the house.
When I worked at Freeform Dynamics, I was involved in studies reviewing the state of IT in organisations of various kinds. In general (though I paraphrase well beyond the bounds of statistical acceptability) we saw three kinds of organisation. In the first group, IT was appreciated as being of strategic value, and the business reaped high levels of benefit from its technology estate. At the other end of the scale, and in roughly equal proportion, tended to be companies that saw IT as a cost centre; perhaps unsurprisingly, these organisations derived far less value from their tech.
In the middle were the people who weren’t sure whether IT was of strategic value or not - and in all honesty, they might as well have been in the third camp. Technology requires effort, and if you’re not going to commit fully to delivering high-value IT, you’re unlikely to reap the rewards.
With this in mind, it’s not hard to arrive at a conclusion that a seat on the board for the top IT guy would be of significant value. “Get me at the table,” goes the argument, “and I will be able to argue the case for technology. Get me at the table and I’ll get the clout I need to drive IT into the business and demonstrate real value.” While this is a perfectly valid stance, the trouble is that it is putting the cart before the horse. In the days of the industrial revolution, would the person in charge of all the lathes expect a board-level position?
Another argument for the board-level CIO centres on the ‘I’ - information - which is (we are told) a company’s biggest asset. But again, just because something is a huge asset of strategic business value, that doesn’t make it in itself worthy of the boardroom. Indeed, given that the ultimate responsibility for assets lies with the Chief Financial Officer (CFO), this is perhaps an argument for having the CIO report straight to the CFO. Heaven forbid, though it is quite common.
Once again, the problem isn’t so much that the ‘thing’ shouldn’t be treated strategically - quite the opposite. However the ‘thing’ itself is not the business, any more than the car pool or the buildings. The operations director of a company I used to work for referred to himself as ‘head of bogs, drains and car parks’, and while he was on the board, it was pretty clear in his mind that his job was to keep costs down, not to overplay the importance of such things.
However, and here’s the rub, it is fundamentally important that both technology and information should be treated at board level. Ha-ha! you say, he’s contradicting himself now, we do need a CIO on the board! Well, no, I don’t believe we do - or at least, therein lies the root of my struggle. I actually believe we need technology to be considered strategically by all board members; the act of having a separate individual responsible for IT immediately creates a communications requirement that shouldn’t have to exist. It would be like someone sitting there responsible for the pencils.
Businesses are driven by strategy, and value is perceived by those who drive. Business-value conversations are very much for the board: new opportunities, new products and markets, companies to buy, elements to divest and indeed technologies to be adopted or dropped. These are all part of the same conversation, but it is primarily a business conversation. There is nothing wrong with having a person responsible for strategy on the board - if not the CEO, then a direct report - as long as the role is setting business strategy, which may or may not encompass technology strategy at any point in time.
The principle is sound, and it is borne out by the ‘changing role of the CIO’ examples that are so prevalent on the Web. Perhaps the landscape will one day have evolved so considerably that, while the title remains, the role will have become far more wedded to the business. However competent the people in the role are, now and in the future, it may well be that success can only be achieved by having a strategic business role which incorporates IT, rather than a strategic IT role which understands the business. It may well be that we need a new role altogether - the recent emergence of titles such as ‘Director of Business Enablement’ could be a sign that things are heading in this direction.
Or maybe we’ll just stick with how things are - with frustrated senior technologists wondering how they can convince uninterested peers in the business about how important it is to have a dialogue. Perhaps this really is the only possible route - in which case expect it to be long, tiring and without any real guarantee of ultimate success.
11-15 – UK SME Agenda: Too big to succeed?
UK SME Agenda: Too big to succeed?
Almost a year ago, faced with withdrawal of support from banks and general belt-tightening across sectors, the UK government announced a series of measures to help small-to-medium enterprises (SMEs). Most welcome was Cabinet Office minister Francis Maude’s announcement that government would work towards doing 25 percent of its business with the SME sector. But just how has this panned out, 12 months on?
The fact that government has challenges working with SMEs is a simple question of mathematical scale. Central government is large, monolithic and (let’s be honest) bureaucratic - things can tend to happen slowly over a period of years, and projects tend to be large scale and in many cases, fantastically expensive. The result is a general sense of stability and long-termism but by the same token, government can find it difficult to respond quickly to requirements and when things go wrong, they do so in a big way.
Meanwhile (and again by its nature) the SME sector is small, lithe, agile and relatively cheap, without the overheads of larger businesses. Bureaucracy is minimised, meaning that expertise is easier to reach and can be better tuned to the task in hand. Some downsides are, of course, linked to size - smaller companies can find it difficult to scale and, in some cases, tend to see the world of projects as a set of nails to fit their own, metaphorical hammers.
The problems really start when large meets small. For government to engage with any company comes at a cost - the sizeable overhead of simply doing business can make what should be a small project financially prohibitive, which inevitably drives buyers towards larger projects with relatively smaller overheads. By extension, the idea that (say) a hundred projects currently awarded to large companies should now be dealt with as a larger number of smaller projects, working with a larger number of smaller suppliers, would raise a grimace on even the most stalwart of procurement managers.
Similarly, smaller companies engaging with government procurement can very quickly come unstuck. For all the promise of larger rewards, for example in the shape of long-term fees, for a smaller business the tendering process can be as time-consuming, and indeed margin-consuming, as the project it represents - without, however, any guarantee of return. As a result, it can be just too much of a gamble to take on.
It is this latter point that Francis Maude wanted to see tackled first, in the shape of simpler, standardised pre-qualification questionnaires that would be mandated across government departments. At least in this way one significant element of the overhead would only have to be done once. At the same time, an SME panel was constituted, chaired by Maude himself, with representatives from smaller companies who could “hold the government’s feet to the fire”.
For all the good intentions, however, the devil could well be in the detail. While tenders may be simplifying, the aforementioned nature of government departments means that changes are slowly working their way through the system - they could still be made simpler yet. Open questions around the nature of contracts, types of business eligible to bid, the need for insurances and so on add complexity which will no doubt be ironed out over time, but time is a resource the government has plenty of, while for small businesses it can be in short supply.
Meanwhile there is the question of skills. Even larger companies can lack the key skills needed to respond to a request for proposal (RFP) - at which point they look to associates and subcontractors (and indeed, some government departments and larger service companies are seeing this as a way to hit their 25% target). Smaller businesses are less likely to have the complete set of skills in house, and by their nature, specialists are more likely to already be booked on a job. To respond to this, some organisations (such as Delivery Cell One) are looking to pool their expertise following a “micro-consortium” model.
Overall, it is not hard to see that everyone gains from the government engaging better with smaller companies. The benefits are legion: smaller projects that more quickly achieve their goals, the real potential for lower overheads and therefore reduced costs to the taxpayer; and meanwhile, a shot in the arm to the UK’s comprehensively staffed and highly skilled small business community, benefiting from its expertise at the same time as catalysing growth.
While the principle is sound, questions still remain about the practice. For all its good intentions, it just may be that the very thing the UK government wants to see - working smarter and delivering better for less, by enabling large institutions to work with smaller companies - becomes yet another casualty of the fact that the government is too big to do anything quickly. One thing’s for sure - unlike government, SMEs do not have the luxury of waiting for the problems to be solved: they will move on or go out of business first.
December 2011
12-02 – Inition and the shape of things to come
Inition and the shape of things to come
Last week was a week like any other week, the event like any other. Inition, a Shoreditch-based company providing a range of high-tech products and services to a variety of industries, had opened its doors to prospective customers, journalists and other interested parties. The company’s offices occupy the ground floor of a blocky, nondescript building, whilst the basement houses a range of machines, a screening room and an open space that can be used for studio production purposes or to demonstrate what the company offers and how it works. Peer in the windows and it could have been any other of the tech/creative businesses I walked past on my way there. So why, precisely, did I find the company so intriguing?
I was met by Jim Gant, one of the company’s founders, and account manager Kathy Hall, who took me on a tour of the various demonstrations on offer. We worked through the 3D Surface display, through which (with glasses) I could view a simple object or a street scene from multiple angles. We looked at the iPad-enabled Augmented Reality setup, headed back round the holographic display via a discussion of 3D object rendering from 2D images, and then moved on to the glasses-free 3D displays, one of which, while just a prototype, was clearly the shape of things to come. We then passed the Pepper’s Ghost in a bottle, checked out the advert done for Samsung with Manchester United team players, had a go at driving a tank with stereoscopic goggles, and wandered over to the haptic devices, then 3D printers – one resin-based which could print quite detailed moving parts, and one plaster-based which could print in colour.
While the tour was as mind-boggling as it was whirlwind, it left me with more questions than answers. Yes, quite clearly here was a company with genuine hands-on experience of some pretty leading-edge technologies – some, indeed, that were still to demonstrate their full value. We discussed QR codes and geolocation, film and production practices, current and future opportunities. Yet despite Jim’s own expertise and the company’s obvious competences, and even though business was growing and customer needs were clearly being met, I was even less sure by the end of the tour what the company stood for.
This is not so much a comment on the company, as I have taken pains to point out, but more a reflection on where such technologies as 3D, AR and so on currently sit in the scheme of things. What was obvious to me – and I cite Moore’s law as evidence – was that many of the demonstrations were destined for mainstream use. Equally, however, right now they lie outside of the mass market, which makes them appropriate for three places: domain-specific applications (such as haptic tools and dentistry); bleeding edge adoption by rich people (3D TV) and marketing. Much of the opportunity currently lies in this latter group. Marketing, be it for products or people, music or media, is always looking for something to differentiate what may be a quite dull offering against the competition – at which point new tech can deliver something that hasn’t been seen before.
Inition’s trick, however, is not simply to offer technological bling – whether the company arrived at that point by luck or hard-earned judgement, that is what I perceived at the heart of its offering. 3D printing, haptic devices and new display technologies were all interesting in their own right – but they are only enablers of ‘the thing’ – be it communicating a message, connecting with an audience, supporting training or whatever it happens to be. Central to Inition’s approach is a recognition that new technologies will keep on coming, and while somebody needs to be on top of all of that, the real opportunity is to continuously develop and improve the value-added services that run on top.
No doubt multi-dimensional displays will be in every front room just a few years from now, and we shall all be having fun manipulating images on the fly and forwarding them to our friends, while dentists and surgeons become increasingly dependent on haptic technologies. Such capabilities, today so new and refreshing, will quite quickly become old hat. But it is to be hoped that Inition will still be ploughing the front of the furrow, learning how to make the most of the newest technologies and delivering best-in-class services to their customers so that we can all benefit at some point later on. Whatever space the company ends up in will be a space worth watching.
12-05 – Inition Tour
Inition Tour
Inition – the shape of things to come?
Last week was a week like any other week, the event like any other. Inition, a company providing a range of high-tech products and services to a variety of industries, had opened its doors to prospective customers, journalists and other interested parties. The company’s offices occupy the ground floor of a blocky, nondescript building in Shoreditch, whilst the basement houses a range of machines, a screening room and an open space that can be used for studio production purposes – or indeed, to demonstrate what the company offers and how it works. So why, precisely, did I find the company so intriguing?
I was met by Jim Gant, one of the company’s founders, and account manager Kathy Hall, who took me on a tour of the various demonstrations on offer. We worked through the 3D Surface display, through which (with glasses) I could view a simple object or a street scene from multiple angles. We looked at the iPad-enabled Augmented Reality setup, headed back round the holographic display via a discussion of 3D object rendering from 2D images, and then moved on to the glasses-free 3D displays, one of which, while just a prototype, was clearly the shape of things to come. We then passed the Pepper’s Ghost in a bottle, checked out the advert done for Samsung with Manchester United team players, had a go at driving a tank with stereoscopic goggles, and wandered over to the haptic devices, then 3D printers – one resin-based which could print quite detailed moving parts, and one plaster-based which could print in colour.
While the tour was as mind-boggling as it was whirlwind, it left me with more questions than answers. Yes, quite clearly here was a company with genuine hands-on experience of some pretty leading-edge technologies – some, indeed, that were still to demonstrate their full value. We discussed QR codes and geolocation, film and production practices, current and future opportunities. Yet despite Jim’s own expertise and the company’s obvious competences, and even though business was growing and clear customer needs were being met, I was even less clear by the end of the tour what the company did.
This is not so much a comment on the company, as I have taken pains to point out, but more a reflection on where such technologies as 3D, AR and so on currently sit in the scheme of things. What was obvious to me – and I cite Moore’s law as evidence – was that many of the demonstrations were destined for mainstream use. Equally, however, right now they lie outside of the mass market, which makes them appropriate for three places: domain-specific applications (such as haptic tools and dentistry); bleeding edge adoption by rich people (3D TV) and – where much of the money is – marketing.
For a reseller-cum-production-company such as Inition, much of the opportunity currently lies in this latter group. Marketing, be it for products or people, music or media, is always looking out for something that can differentiate what may be a quite dull offering against the competition – at which point new tech can deliver something that hasn’t been seen before.
The trick, however, is not to simply offer technological bling – and whether by luck or hard-earned judgement, that is what I could perceive at the heart of Inition’s offering. 3D printing, haptic devices, new display technologies were all interesting in their own right – but they are only enablers of ‘the thing’ – be it communicating a message, connecting with an audience, supporting training or whatever it happens to be. At the heart of what Inition is doing is a recognition that new technologies will keep on coming, and while somebody needs to be on top of all of that, the real opportunity is to continuously develop and improve the value-added services that run on top.
No doubt multi-dimensional displays will be in every front room just a few years from now, and we shall all be having fun manipulating images on the fly and forwarding them to our friends. Such capabilities, today so new and refreshing, will quite quickly become old hat. But it is very much to be hoped that Inition will still be ploughing the very front of the furrow, learning how to make the most of the newest of new technologies and delivering best-in-class services to their customers so that we can all benefit. Whatever space the company ends up in will be a space worth watching.
12-07 – So what is it you do, precisely?
So what is it you do, precisely?
Warning. This post is somewhat self-indulgent. If you’re not interested, please skip straight to the bio.
It took me a year to update my biography. Stepping down as an industry analyst was a tough decision for me: I enjoyed the work, I loved the company and people I was working with, yet I still felt I was compromising. Compromising what, precisely, I didn’t know. So I took the choice to deal with some other projects that I was not finding the time for - completing one book and starting another. To pick up a few writing jobs. And, in the meantime, to spend a bit of time working out what it was I wanted to do.
Very quickly I arrived at a quite succinct conclusion - I wanted to get more involved in helping technology make a positive difference. What took the time however, was how to actually enact this. As I spoke to numerous individuals, marketing and PR agencies, IT vendors and end-user organisations, media, music and publishing contacts, the need became pretty clear. Each area was an obvious source of knowledge about how IT could add value - and there were, and are, great things being done. But equally, each was on its own journey, a victim of its own provenance and even, in some cases, distrustful of progress made in other sectors.
For technology to make a genuine difference, I realised, best practices needed to be learned from wherever they were emerging, and shared across domains. The key was not to focus on ‘the thing’ - the domain-specific elements, for example how research chemists use technology to do research chemistry, or how musicians make music. Rather, it was to look at the areas around ‘the thing’ which are common to all domains. To my good fortune, I realised all of these started with the letter ‘C’.
The first of these, then, is ‘Capability’. Technology can have a positive impact simply by being there - but so often, it gets in the way, it’s too expensive to procure and install, requirements get misdefined, the results are inappropriate and therefore compromise any positive impact they might have. Best practices exist in these areas, such as agile software development and value-based project delivery, and technology is evolving to become more usable and affordable in the shape of internet-based services (aka Cloud). New capabilities are emerging all the time, which bring technology closer to people and enable more effective service delivery, and I am watching developments in augmented reality and 3D with interest.
But capability is nothing if it doesn’t support our ‘Creativity’ and innovation. In a world that is increasingly strapped for resources, organisations have two choices - to work within increasingly squeezed margin structures delivering commodity products and services, or to identify new opportunities to create value and deliver services of their own. The publishing industry is a case in point - right now the literary-agent-and-printed-page model which has served so well for hundreds of years is being subjected to enormous change. Technology is both the problem and the solution - from one perspective it is the great destroyer, but from another it opens up new possibilities. Insight drives innovation, which is why I’m watching recent developments in big data - it remains to be seen whether deriving insight from information proves an elusive dream, but I see it as another waypoint on the journey.
‘Communication’ is becoming the backdrop to most human endeavour. Today’s technologies enable collaboration during the creative, research and development process, and then as part of engagement with prospects, customers, music fans, intermediaries, citizens and other stakeholders - on a global scale, and in real time. Indeed, the lines between development, marketing and sales are becoming increasingly fuzzy, to the extent that technologies such as social and collaboration tools, mobile messaging and so on can sometimes emphasise conflicts between departments more than solve them. Communication goes hand in hand with privacy and identity and, like all of us, I am a guinea pig in the great experiment we are currently undertaking with our private lives.
Which brings us to the final ‘C’ - ‘Context’. It is one thing for technology to add value to the market capitalisations of new technology companies, but does it actually make us happier? Are we more productive in our working lives, are our societies more democratic, are we better educated, or financially and personally better off? Is it the right thing to help other people get online, and do computer games help or hinder the development of our young? Are we better communicators, or merely more able to fire random streams of text into the ether which undermine our individuality rather than reinforce it? What’s perhaps more important than these questions is the absence of debate, which is fascinating in itself.
Putting all of this into a bio has been something of a challenge, but given the fact that I don’t have all the answers, the bottom line had to come down to what I can contribute. As a technologist with 24 years’ experience I have done most things, and seen examples of the best and worst of all IT has to offer. Meanwhile, over the past decade I have been developing my skills both as an analyst and a writer, in both technology and the creative - notably musical - world.
So that’s me. I will continue my research into what’s going on: indeed, as I have found over the past 12 months, I have no choice, as it’s what gets me out of bed in the morning. The books and articles I read, the relationships and conversations I have, the part of my brain which continues to machinate, digest and attempt to make sense of what’s going on, is geared towards technology and its congruence (or lack of it) with society as a whole.
In the meantime, here’s the skinny: I offer writing, consulting and facilitation services, helping clients reach their audiences, tell their stories, solve their own problems with information technology. I’m as happy helping older residents in the village understand the Web, as I am advising major corporations establishing a global presence - and in many cases, the challenge is the same: it’s all about dealing with people first, technology second. Am I still an analyst? To be honest I don’t know, nor am I that bothered - I’ve always had a problem with labels but if people want to call me an analyst that’s fine. What I am clear about is that I no longer feel I am compromising, as I am practicing what I preach - for example, I’m very happy to say that I’m becoming a director of a technology-oriented charity, which feeds my need to roll the sleeves up and get stuff done.
That’s probably enough about me. I set up the company Inter Orbis (literally, ‘between spheres’) as a vehicle for what I wanted to do, and as well as my own clients I’m working with agencies, media organisations, events companies and consulting firms to help deliver on the promise of technological impact. My bio is here - if you have any questions, do get in touch (email jon at inter hyphen orbis dot com), and thank you for your time.
Twitter: @jonno
LinkedIn: jonnocollins
Facebook: quakerjon
12-08 – Targeted ad serving may not breach privacy laws, but it's bloody annoying
Targeted ad serving may not breach privacy laws, but it’s bloody annoying
We looked for a hotel in Paris. A lovely thing to do - next April, we’ll be going there. How fantastic. We didn’t book anything, but certainly got some good ideas.
And then, for the following week, every web page we went to included adverts about hotels. Short breaks. Booking. Bargains.
I was doing some research into online storage. I looked at the big guys, the small guys, the old and the new. I looked at domain-specific storage companies and those more suited for small companies, big companies. I wrote an article about it all.
And then, for the following week, every web page I visited offered me online storage services. Specifically a service for musicians and creative types. For that was, clearly, who had the money right then.
My wife was looking into insurance for students. Short term, keep the costs down. She found some, and bought it. Paid there and then.
And then, for a week, came the insurance ads. For students. Though she had already bought some.
Let’s be clear. I’ve seen the slides, the importance of closer connections between people and brands, the opportunity for better-targeted advertising, for measurable outcomes, for pay-per-click services that offer far greater ROI.
But then, I’m also seeing the reality.
It’s like a really crap sales guy. I used to work with one. He couldn’t do the ‘listening’ thing, he’d try for a while and then, almost without warning, explode in a flurry of products and services that didn’t quite meet the prospect’s half-formed ideas of what the issues were.
It’s like the bloke at the bar who just doesn’t know how to speak to girls. He’d come up with some really poor, contrived chat-up line and then wonder why she walked away. After all, it worked on the video he saw.
It’s like that annoying person who wants to be your friend. Who won’t go away or leave you alone, who will pick up on whatever you say and twist it slightly as he or she repeats it back, showing that they didn’t really get it.
That’s what it’s like - bloody annoying. And slightly perturbing, when you realise that the annoying person is in your computer, on the internet, following you around and piping up unexpectedly about a topic you thought you’d finished with.
It’s the same on Facebook. Sometimes it’s funny, sometimes inappropriate, as your comments and messages are reflected moments later in ‘targeted’ advertising. Sometimes it’s downright offensive, and I dread to think what the algorithms do on occasions where people pour their hearts out, have breakdowns, lose the battle with drink or suffer great loss.
This is not the future of advertising, of audience engagement. This is a cack-handed attempt to make sense of fragmented data presented through a distorted lens with no knowledge of context. It’s crap. But, because it’s all that businesses have and it is better than what they had before, because some poorly designed policy engine works with incompletely specified rules defined from an incomplete understanding, because of all these reasons and more, we are served paid ads that, for a second, feel half-relevant until we remember they’ve missed the boat: we were already there, the time has passed.
At the moment it just makes us feel… nothing. Or a little uneasy, like we’re being watched. Which, of course, we are, with tracking cookies and web beacons and the like. We know that there’s no such thing as a free lunch, that if we want to make use of ‘free’ online tools, we have to be prepared to offer an ounce of flesh in return.
But how long is it before we just feel fed up that our openness is not only subject to abuse, but worse, that we have given all this information to incompetent buffoons who are going to spend the next few decades trying to sell us children’s books long after the kids have left home, or medical insurance even when we have, for unexpected and quite upsetting reasons, become no longer eligible?
I don’t know. Perhaps we’ll collectively and quietly accept that the future is to be full of blunted, inappropriate messages being thrown at us wherever we go, most of which will fall by the wayside. Perhaps we’ll simply learn to treat them as noise, to block them out - but I get a nasty feeling they’ll become louder, more gaudy and in-your-face in response.
Or perhaps those responsible for producing such tools will learn, either off their own bats (which would be nice) or through a backlash in which sales go down instead of up, that our screens are not simply advertising hoardings available for rent, and our behaviours are not up for surveillance. That we don’t want poison dwarves following us around asking if we’ve thought about booking that hotel yet, or buying that insurance. That, surprise, our lives are more complex and interesting than can be modelled in some banner advertising business model, and that - even more of a shocker - we don’t want such intrusive company along the way.
I know, it’s wishful thinking. I don’t have a problem with advertising, and indeed, I have even bought things on a whim due to an online suggestion. But I do have a problem with people dressing up poorly construed, badly implemented and inconsiderate ideas as ‘the future’, particularly when the intention is to entice me to do one thing or another. Like Schroedinger and his infamous cat, just because we feel we know how to measure and even influence, doesn’t mean that we can guess what behaviour will occur as a result.
2012
Posts from 2012.
January 2012
01-01 – 2012 Technology Trends In HR
2012 Technology Trends In HR
2012 technology trends in HR
The office parties are over, the leftovers are all dealt with and everyone’s had a good break. Before the detail of the daily grind dispels any ideas of strategic thought, the beginning of the year offers a window of opportunity to consider what’s changing in the workplace, what new opportunities exist and indeed, threats and challenges to how we go about our business.
When it comes to technology, traditionally we’ve talked about trends from a top-down perspective - for example looking at large-scale software packages or computer architectures that enable new ways of working. The current wave of change is more of a groundswell, however, taking place on the shop floor, on the road or in the office.
Whether we like it or not, employees at all levels are making more use of technology on their own terms. The buzzword is ‘consumerisation’ - that is, rather than being told what systems to use for what purpose, people are more and more taking matters into their own hands. A number of factors are driving this trend - not least mobility, social networking and the cloud.
Mobility is a catch-all term to take into account technologies that enable work to take place from outside the office. So - mobile phones (which are becoming increasingly smart), affordable desktop and laptop computers, and the general availability of broadband communications make possible the home office, the teleworker and the connected road warrior. While these trends have been with us for some time, the balance has now tipped in terms of what people can afford for themselves, versus what the company can supply them.
The second area is social networking. Again, while it is true that the Internet has long been a place for some people to interact, the balance has now tipped - use of Web sites such as Twitter, Facebook or LinkedIn has become a majority sport. Social tools now also exist for the workplace, such as Yammer, Salesforce.com Chatter and so on, and Web sites and online applications are becoming ‘socially enabled’, for example by the inclusion of Facebook comments on a blog.
Third, we have cloud computing - or more specifically to the general workforce, software-as-a-service (SaaS) applications. Salesforce.com has already been mentioned, and similar line-of-business tools exist for project managers, developers, accountants and so on; applications also exist for security, storage and network management. Two significant differences exist between these applications and traditional, in-house software: first that they are very easy to start using as they require very little configuration, and second, that the de facto payment model is as-you-go, on a credit card.
Bringing the whole lot full circle are mobile apps available for smart phones, for example from the Apple app store or the Android Marketplace. While many of these are standalone, they often offer front-ends onto Web sites - for example the Trainline.com app which shows train times and enables tickets to be purchased. Such apps tend to be cheap enough to be considered a discretionary purchase by many - which means people are buying apps from time to time and installing them on their own, or on company kit.
As well as being frustrating for IT managers and operational staff (who are suddenly expected to support users with non-standard kit), it also creates complications in terms of how equipment and services are provisioned and deprovisioned, policies around employee privacy and working hours, and indeed conditions of employment. “If an employee has been provided with a smart phone so they can access email at any time, how can an employer monitor (record) working hours, let alone control them?” says HR consultant Nigel Cox. “Intellectual property and confidentiality issues around online tools add new challenges, not least how to enforce restrictive covenants in employment contracts.” As the Web has no boundaries, so such challenges now have an international dimension: “Employment laws, data protection and employer responsibilities will be different in other countries – who has jurisdiction and how will it be controlled?” asks Nigel.
New technologies bring with them the law of unintended consequences, an example of which is the current court case (http://www.bbc.co.uk/news/technology-16338040) over who ‘owns’ Twitter followers when an employee leaves a company.
It is easy to get into a state of mind where ‘consumerisation’ is a bad thing, or at least should be shackled or restricted in some way. The trouble is, everyone from the janitor to the CEO is doing it - and in any case ‘it’ isn’t a thing, more a desire to use generally available tools and facilities. So, you can install restrictive firewalls on corporate computers, or secure virtual desktops that can be used from home; you can define no end of policies and practices, then do your best to enforce them. But then, something new will come along and you’ll have to start the whole process again.
We shouldn’t throw the baby out with the bathwater, however. Genuine business benefits can be derived from having more mobile staff, who can collaborate better and reach the information they need more easily. A more flexible workforce can enable innovations in working practices, for example around job sharing. And the results can be measured in hard-nosed business terms such as productivity metrics and travel costs, as well as equally tangible, yet less measurable results such as employee wellbeing.
Meanwhile of course, the technologies themselves are a work in progress. Mobile communications, social tools and SaaS and smartphone apps are increasingly being developed to make them more suitable for business use - not least by adding a higher level of measurability. “As adoption of social technologies grows, organisations will increasingly demand tools which help them quantify the ROI on their social media and social collaboration investments,” states Angela Ashenden, collaboration specialist at industry analyst firm MWD Advisors (www.mwdadvisors.com).
We’re also likely to see higher levels of integration with existing systems and applications such as Business Process Management tools, thinks Neil Ward-Dutton of MWD. “Social collaboration capabilities will become embedded in most leading BPM offerings, and better support for mobile participation is a part of this.”
The ramifications, again, could be unexpected - essentially because as business uptake of mobile, social and cloud-based technologies increases, control moves further from the IT department and into the hands of technology users. But if trying to control the consumerisation trend is not the answer, what is?
The key is to think more about how this increase in individual initiative can be harnessed by the organisation, rather than stamped out. This is not just about letting staff do their own thing, but also about directing such activities in a way that benefits the business, for example by helping individual employees support marketing activities through their own social accounts, or by creating a secure, business-specific mobile ‘app’ which delivers the information the manager needs to the device of their choice. Undoubtedly, opportunities to reward ‘good’ behaviours will present themselves, and businesses will require new policies that state clearly where the line is drawn - for example, abusive behaviours should never be tolerated, whether or not they are coming from an employee’s ‘own’ social networking account.
Meanwhile, there are undoubtedly actions that HR can take to stay on the front foot. First and foremost, forewarned is forearmed. “Book a 2 hour meeting with your IT colleagues and run a SWOT on how these issues affect your organisation,” suggests Nigel Cox. “Then review your contracts of employment – specifically how you handle working time – and take a fresh look at your induction programme – what it says and how you run it. Does it acknowledge, or indeed take advantage of new technologies?”
As a final thought, the changing nature of the workplace (and the people in it) will also have an impact on the skills base and profiles of the people we employ. We’ve heard how the next generation of employees have been brought up with very different expectations from their predecessors in terms of the tools they use and the way they communicate – not least, potentially, compared to the people running the company. No doubt this debate will continue, but in the meantime, there is scope to consider whether or not our organisations have the right balance of communicators versus those whose role is to get the job done. Business in the future will be built upon the technology-fuelled empowerment of employees at all levels. We can look to embrace this, to target its potential in certain areas or minimise its effects in others. The one thing we can’t do, however, is ignore it.
01-10 – Open data: not an open, nor shut case
Open data: not an open, nor shut case
Who wouldn’t want data to be open? In IT circles, the usual opposite of ‘open’ is ‘closed’, but it could also be ‘shut’, ‘exclusive’, ‘inaccessible’ or ‘locked-in’, all of which can be associated with feelings of frustration, of missing out, of being prevented from sharing in whatever are the benefits of getting in.
The reasons why data can be rendered inaccessible are not always simple. For a start, data has an inherent cost to collect, collate and otherwise control. Sensors need to be placed and connected, spreadsheets and databases to be filled and formats to be converted, servers and storage devices to spin up and keep running, people to pay and problems to solve. That cost needs to be amortised somehow. Whatever ‘open data’ can mean, it certainly can never be considered to be ‘free’.
Once data has been gathered, however, it can be given away - either because an individual or organisation wants it so, or because a government (through actions or laws) deems it so. Governmental information sources are obvious candidates for the ‘open’ tag, in the UK’s case with data.gov.uk collating and enabling access to as much non-personal data as is feasible. This is, of course, to be applauded - it serves a very useful purpose, in that government does not, by itself, have the resources to do all that is possible with the data.
Examples such as the Ordnance Survey, roughly 50% of which is funded by the tax payer, show how ‘giving’ should not be a simple, blanket stance however. Data has a cost, and a value that people are prepared to pay for, either in its raw or derived form (such as maps). As such, it is worth considering the commercial value of data, and balancing this with the reality of how that data was funded, e.g. by the tax payer.
Which brings us to commercial data sources. Some companies have built substantial businesses on the back of data gathering, interpretation and analysis - indeed, in the IT industry alone the global IT analyst market is said to be worth $2 billion a year.
Some sectors, such as pharmaceuticals, are highly dependent on data: they spend a lot of money on it, and their products are largely based on it. From a business perspective, profit margins boil down to the amount of revenue that can be created from a new product, minus the amount it cost to create it - in the case of pharma, then, anything that can be done to reduce the cost of creating data has to be a good thing.
This position becomes more tenuous when organisations use tactics to make it more expensive, or even prohibitive, for the competition, for example through the use of patent law – the dubious practice of companies collecting samples from rain forests and attempting to patent anything with potentially healing properties, say, is an example of this.
Data recipients can also misuse information that they have bought, or which has been freely given. A clear challenge is the nature of aggregated data analysis, for example being able to link internet search information with current events and depersonalised references to derive quite invasive information about specific people.
We’ve heard, for example, how Google can follow an outbreak of an illness by tracing the locations of where medical advice-related searches are taking place. While this is a positive example (in terms of informing local clinics with an anonymised picture), it is quite another thing to then link this to commercially available databases and direct-market the antidote.
Things can get even more hazy where the data source, the collection organisation and the data customer have different interests. Who owns your data, for example, your blood group or your credit history? Who owns the data about the shape of your garden, or the drainage properties of your fields? Who owns the information about the chip in your dog? The recent example of NHS data being sold, albeit anonymised, to pharmaceutical organisations is indicative of what a can of worms this can be.
The aggregation question raises stark questions about privacy, and while computer company bosses from Scott McNealy to Eric Schmidt have declared privacy to be dead (with McNealy famously saying, “Get over it.”), privacy might not be as dead as they have declared – at least in the minds of ordinary people. Nonetheless the nature of privacy is changing, in that people have to decide, on the basis of a fragmented and sometimes deliberately obfuscated picture (in the case of social networking site T’s and C’s), what they are prepared to give away.
The question of privacy looms at a national level as well. Julian Assange may have become the darling of the ‘open’ movement for his role in Wikileaks, as the organisation cast a torch around some of the murkier corners of our political systems. But a sword such as openness can cut both ways, and needs to be handled with care – without citing specific examples, there are some things it would be insane to publish, particularly if they put lives at risk.
Opening data then, like opening doors, requires forethought. As Pandora discovered when she took the lid off her mythical jar, open is not always better than shut; it depends on what lies inside, and how it relates to everything else. Nor are the protections currently available universally good or universally bad – we need laws on privacy that respond to today’s realities, and we need patents, IP and copyright, but such things can be misused, as can any tool.
So let’s not just bandy around the ‘open’ tag as though it will always be a good thing but rather, let’s see ‘open’ as an opportunity to decide, for each source, the benefits of enabling access while keeping the risks in mind. The one thing we should keep open, above all, is our minds.
[Originally published on publictechnology.net]
01-10 – The Kernel: A home fit for start-ups
The Kernel: A home fit for start-ups
How would the European technology scene – from London to Berlin – seem to a visitor from another era? His or her experience would, I think, be rather similar to the world seen by the hero of Mark Twain’s novel, A Connecticut Yankee in King Arthur’s Court.
In the book, an American engineer of the Industrial Revolution finds himself, after a bump on the head, waking in the purported age of chivalry and knighthood. Purported, that is, as he quickly finds himself embroiled in serfdom, disease, prison and, above all, ignorance from the ruling classes. “CAMELOT – Camelot,” said I to myself. “I don’t seem to remember hearing of it before.”
Equally, an industrialist from Britain in the 1960s, brought up among the optimism of Blue Streak, TSR-2, and Harold Wilson’s “white heat of technology”, might find themselves just as disoriented to wake up in today’s Silicon Roundabout.
Twain’s hero, unfortunately, was unable to overcome the attitudes of Merlin and his “pretended magic”, even armed with inventions such as the telegraph, the Gatling gun, and printed newspapers. Intellect and technological prowess came to naught when faced with the massed institutions of the time.
By Mark Twain’s own day, the United Kingdom had a more positive attitude to its innovators: Brunel, Watt, the Stephensons and not least Charles Babbage, the father of computing. This continued into the 1940s and 1950s, with Alan Turing’s wartime work, at Bletchley Park, on cryptography, and the world’s first stored-program computer, the SSEM, which was designed and built in Manchester.
Ferranti’s Mark 1 was the world’s first general-purpose computer manufactured for sale. The stage could have been set for Great Britain to be the technological capital of the world.
But even as the information revolution started to gather momentum, the British spirit of entrepreneurship lost its lustre. The UK economy declined steadily in the post-war era, so that by the time mainframes were being fork-lifted into the computer rooms of large companies, UK-based computer makers such as ICL were already struggling.
It was a different story on the other side of the Atlantic: the United States saw strong economic growth, benefitting from the liberalisation of trade it had negotiated in the last days of the war. The first venture capital companies took root in California in the early Seventies, creating a financial engine room that drove Silicon Valley to world prominence.
Californian free thinking combined with New England sensibility and the Seattle work ethic to create and maintain technological superpowers such as Apple, Microsoft and IBM. In the UK, though, engineering prowess failed to be matched by funding, or political support. Venture capital firms were few and far between, with Investors in Industry – 3i – standing out like a light in the darkness even as many of Britain’s brighter sparks were being enticed across the Atlantic.
In parallel, the remaining elements of the UK’s manufacturing industries were diminishing. Manufacturing constituted 40 per cent of the UK economy at the end of World War Two; by the Millennium, this figure had halved. Britain had the intellect, knowledge and skills to lead the world in technology. But under-funded, over-bureaucratic and distracted by disputes, it was not to be.
It would take a period of considerable economic and social upheaval to set the scene for Britain’s entrepreneurial renaissance. The wave of privatisation and deregulation of the Eighties may not have met with universal applause. But it did set the scene for simpler finance and employment law – both important factors in creating an environment more suited to startup businesses.
It was not until the mid ’90s, however, that those changes made themselves felt in the technology sector – as another great British mind, Tim Berners-Lee, invented the World Wide Web and spawned a global boom in software and services. The initial dot-com boom was driven from the US, but the mad scramble online was highly enticing to British fund managers and private equity firms.
Ben Tompkins, partner and investment manager at Eden Ventures, says, “There was a rush to join the market in 1999-2000. That’s when Britain got into its stride.” It may have been a bubble, but the trick was to get in early and the excitement was Transatlantic.
Early UK startups (for example, online auction site QXL) were often close mirrors of their US counterparts, a trend reminiscent of the motorbike and electronics industries during the Seventies expansion of Japanese manufacturing. “If you had a web site that copied something in the US, it would be funded,” remarks Richard Eaton, a partner at law firm Bird and Bird, who was involved in the dot-com sector at the time.
Few existing businesses could afford to ignore the much-hyped “e-business or no business” mantra, injecting further capital into the system and driving demand for software development and web design skills. Of course, the dot-com boom turned to bust. But even while the still-nascent UK venture capital community nursed an extended hangover, an industry subsector had been formed and an ecosystem had been created.
Wounds were tended, and by 2005-2006, investment started to pick up again. Today, VCs are older and decidedly wiser, and a wealth of creative talent is available – possibly a spin-off from the surfeit of media-related degrees that have attracted ridicule in the past.
The wave of interest around social networking has spawned a number of companies, some of which – like Bebo, Tweetdeck and Last.fm – have already achieved “exit” through buy-out. Unsurprisingly, there is less investment in hardware companies than in the past. “We’re seeing a lot of Web 2.0 and social media companies,” says Richard Eaton. “If they get £50,000 to £250,000 that will see them through for one or two years. The only fixed costs are their employees and the computers they use.”
The Industrial Revolution may have started in the Midlands and the North, but, for today’s start-ups, London is the hub of activity, for many reasons – not least because it is a hub of activity. “Interest in Shoreditch started largely because the rents were so cheap,” explains Rassami Hok Ljungberg, a communications consultant working with startup showcase TechPitch 4.5. The Government has capitalised on this interest, with initiatives such as the Tech City Investment Organisation heavily promoting the location.
Experts such as Bird and Bird’s Eaton are unconvinced that Shoreditch is an ideal location. “It’s trying to force something into where you wouldn’t think of having a technology hub,” he says. Shoreditch lacks a major research-based university comparable to Stanford, or indeed the ecosystem of financiers, lawyers and advisers that Silicon Valley boasts.
“All the same,” Eaton adds, “companies such as Cisco and Google have been very supportive – it can only help.” Or, as Georg Ell, general manager of social software site Yammer, puts it: “When Yammer made a decision to start a European headquarters, the first decision was to place it in London, on the basis of the talent pool we felt we could recruit from and also that so many potential customers, multi-nationals, are headquartered here.”
That is not to say that UK start-up culture is a London-only phenomenon. Science parks around the country, often connected to universities, have also notched up success stories. Cambridge is the obvious – but by no means the only – example, spinning out Autonomy, Zeus and ARM.
Meanwhile, regional governments have their own inward investment and development schemes such as Software Alliance Wales and Connect Scotland. In addition, the pan-British Technology Strategy Board, formed in 2008 with a remit to “promote and invest in innovation for the benefit of business and the UK,” regularly funds competitions, matching funds in technology startups in fields from metadata to genetic research.
Another change for the better is the propensity for the British to actually see entrepreneurship as “a good thing”. Traditionally, Britons have been very good at regarding innovators with cool disdain. “It’s fine as long as they don’t go on about it all the time,” is the general sense. Recent TV programming such as Dragons’ Den, The Apprentice and even The X Factor should be acknowledged for reinforcing that hard work deserves not only reward, but also praise.
Why base a company in Britain? A number of fortunate accidents of history and geography suggest themselves. History, because it could never have been foreseen that English would become the de facto language of global business. Yet its country of birth is undoubtedly a beneficiary, helping support that “special” relationship with the US that encourages investment from the other side of the pond.
Geographically, with the British Isles sitting between the US and the rest of Europe, the UK is a good starting point, along with Ireland, for US companies that want to expand onto the continent, as well as for European countries that have their sights set on the US market.
“For Scandinavian startups, their first port of call is the UK once they are developed within-region,” says Rassami Hok Ljungberg. But Britain offers more than a beach-head. “It’s easier to start up in the UK than elsewhere in Europe,” says Hok Ljungberg. “You don’t have to have capital up front, which makes it so much easier – and winding up a company is easier too.”
Unique benefits of the UK include more flexible employment law, tax breaks, post-nationalised industries, and the right kinds of inward investment. These factors lower the bar to entry for UK-based innovation.
But it is not all plain sailing, particularly given the reduced appetite for risk when it comes to funding. “Seed capital’s higher than ever before, but there’s not enough capital in the mid-market,” says Ben Tompkins. No doubt the current global financial crisis is not helping.
“It’s about the risk profile,” says Richard Eaton. “A lot of traditional VC players are just not operating in this space anymore. Some companies will get funded, but that’s where the squeeze is – it’s being called ‘The Valley of Death’.”
For these reasons it is not possible to consider London in particular, or the UK in general, as a replica of Silicon Valley. Quite clearly, it is not. “The culture is very different,” says Eaton. “There’s much more willingness to lose money in the US – it’s generally accepted that not every deal will succeed. European VCs don’t have that – they came out of the private equity market, whereas the Silicon Valley venture capital community was formed from scratch.
“Plus [in Silicon Valley] you’ve a large concentration of big companies all based in the same area, meaning a steady supply of people and a close eye on acquisition targets. It’s like one big incubator.”
Even once a product reaches release, the complex operating environment in the UK does not make it straightforward to do business – although, sometimes, this can be a good thing. “The UK has one of the most sophisticated, mature and diverse markets in Europe,” says Hok Ljungberg. “Anglo-Saxon thinking can be a bit negative but it causes a constant, critical evaluation of everything which is very useful. If you can succeed in the UK, you can take it anywhere else in Europe.”
This makes the UK a better place to start up a business than many other countries. Berlin – increasingly seen as London’s great rival for start-ups – is full of bright, dynamic people but the city is still emerging from the shadow of the Cold War. “A couple of years ago, it was illegal to be an entrepreneur in Berlin,” says Tompkins, just back from a trip there.
“There’s huge enthusiasm but limited capital. It reminds me of the UK in 1999 – a lot of cloning is going on, copying US success stories and creating local language versions [of services].”
In Paris, attitudes may be more positive but employment law makes it tough to afford growth in personnel. In Stockholm, Swedish law has no concept of share options operating on a tax-free basis for capital gains. And in Amsterdam, you have to go to court to change the articles of association of a company. Startup Britain may be hard, but it is fair.
What would Mark Twain’s Yankee think if he came to the UK today? The country has a wealth of resources, both intellectual and practical, that could be better utilised than they currently are. Britain is a long way from achieving the kinds of funding models, legal ecosystems, or indeed the general spirit of entrepreneurship that exists across the US.
But perhaps it will not need them. Start-ups such as Seedrs are questioning the roots of innovation funding itself. “We’re pending FSA approval but we hope by the spring of 2012 we will be providing hundreds, or even thousands, of companies a new way of raising capital effectively, from ‘the crowd’,” says Jeff Lynn, co-founder and chief executive.
Britain does still occupy a unique position in Europe, but again this could change. Berlin may be in a similar position to where the UK was in 1999, but the city will not stay in intellectual clone territory forever. The best estimate is that the UK has a decade’s grace before other European cities catch up and begin forging their own new directions.
The UK is still weak on inward infrastructure investment and current manufacturing forecasts do not bode well. A state of financial crisis persists, and it remains to be seen whether David Cameron’s recent use of the European veto will have any lasting impact on Britain’s business interests – though perhaps the UK’s unique geographical position will trump even that card.
And if there is one lesson to learn from the US, it is the importance of investing in, supporting and rewarding designers, innovators, engineers and technologists. The education system is the place to encourage an entrepreneurial spirit, by building the confidence to step outside the norm, and the self-esteem needed to counter any fear of failure.
Entrepreneurs – and those who back them – warn that the UK must review its institutions, governmental and corporate, to ensure these encourage and promote future growth, even if this comes at a cost in the short term, so that those in the driving seat no longer feel the need to go elsewhere to succeed.
“He’s one of the cleverest people I’ve ever met,” says Richard Eaton of Dr Mike Lynch, Cambridge graduate and founder of Autonomy, recently acquired by Hewlett Packard for $10 billion. “He’s a mathematical whiz, but equally, he could see that if he wanted to succeed, the US was the place to go.”
Perhaps if Twain’s Connecticut Yankee travelled in time to the UK in 2012, or perhaps more realistically, in 2015, he would feel more inclined to stay.
[This article originally appeared in The Kernel]
01-17 – I, Technology: Is using computers to heat houses such a daft idea?
I, Technology: Is using computers to heat houses such a daft idea?
OK, it’s cold. Winter has finally come - or come back, given the cold snap of early December. And there’s nothing like sitting in a freezing room, worrying about the cost of oil, to focus the mind on alternative methods to heat a house.
Meanwhile, we are told, computers are excessively power-hungry. Particularly desktops with their 200-watt-plus power supplies. All that energy has to go somewhere, according to the law of conservation of energy: indeed, it is largely dissipated through the various processing, memory and other chips as heat, which is then sucked away and blown out of vents.
To all intents and purposes, then, a computer is running as a heater, albeit perhaps a rather inefficient one (though by the same token, the only other places that energy can be released are into noise and light, so it is quite efficient, in its own way). The daft idea is, why don’t we make more of this capability? That is, actually use computers as a source of heat?
A precedent has already been set by data centre designers. Server rooms are notoriously un-green, in that they gobble whatever power is made available to them and spew it out as heat which then requires even more power to carry away. News stories about data centres warming nearby buildings or even greenhouses appear with reasonable frequency - the cynic in me says that it’s an attempt to deflect attention, but equally, it’s good that the heat isn’t being completely wasted.
So, why not homes? Mainframes house chips on ceramic substrates, and ceramics are also used in heating elements - I’m showing my ignorance to an alarming extent but could it be that a technology used to insulate could also be used to distribute heat? Perhaps not enough to warm a whole house, one might argue, but… let’s think about this a bit. Rather than a radiator, would it be possible to create a wall-mounted device, architected to achieve high temperatures purely by performing calculations? Of course, these could be random floating point operations, but perhaps, more usefully, they could be non-random, programmed to achieve a goal - help with finding the cure for AIDS, for example, or supporting the search for little green men.
To take this one stage further (and I know I’m stretching things to the limit here), perhaps such processor time could even be rented. For money. To an organisation that could make use of it. Given an appropriate, pre-tested network connection, maybe Amazon could take advantage of my radiators for burst capacity for its elastic cloud service. At a cost which could offset, or even pay for, my heating bills. I might even turn a profit.
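To put some illustrative numbers on it, here’s a minimal back-of-the-envelope sketch. Every figure in it is an assumption – the power draw, the hours of heating wanted, the electricity price, the rate a cloud provider might pay for spare cycles – but it shows the shape of the calculation:

```python
# Back-of-the-envelope sketch of the 'data furnace' idea. All figures are
# assumptions for illustration, not measurements.

POWER_DRAW_KW = 0.2          # assumed steady draw of a busy desktop/server
HOURS_PER_DAY = 10           # assumed hours of heating wanted per day
ELECTRICITY_PRICE = 0.14     # assumed price in pounds per kWh
COMPUTE_RATE = 0.05          # assumed pounds earned per hour of rented CPU time

daily_heat_kwh = POWER_DRAW_KW * HOURS_PER_DAY   # nearly all input power ends up as heat
daily_electricity_cost = daily_heat_kwh * ELECTRICITY_PRICE
daily_compute_income = COMPUTE_RATE * HOURS_PER_DAY

print(f"Heat delivered:   {daily_heat_kwh:.1f} kWh/day")
print(f"Electricity cost: £{daily_electricity_cost:.2f}/day")
print(f"Compute income:   £{daily_compute_income:.2f}/day")
print(f"Net heating cost: £{daily_electricity_cost - daily_compute_income:.2f}/day")
```

On those made-up numbers the compute income more than covers the electricity, but the point is the structure of the trade-off, not the totals.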
But is this really stretching things? After all, micro-generation from solar panels might once have been seen as mere fantasy, but a flurry of neighbours have bought into such schemes recently. To the extent that passing conversations have moved from being about the weather, to the return on investment that a sunny day can bring.
Home heat through processing might seem similarly out there, but the major vendors are at least thinking about it. Microsoft Research calls the concept the ‘Data Furnace’, suggesting that data center (sic) owners could save hundreds of dollars per server per year in cooling costs, if servers were distributed around willing (and appropriately equipped) households.
While the concept does have associated challenges, such as data security (which could likely be handled with encrypted virtual machines), it does have a lot going for it. Not least that less heat would be generated by centralised data centres, and more heat would be available for households, saving on personal fuel bills.
Indeed, it’s hard to think of one element of technology that doesn’t already exist to make this happen today. Perhaps, in fact, the idea of using computers to heat houses is not that daft after all.
[First published on ZDNet]
01-18 – 87 days to go...
87 days to go…
That’s 87 days of self-abuse and semi-sobriety for me, 87 days of grumpiness for everyone else, but all in a good cause as I’m running the Paris Marathon in April. As previous fundraising has been for overseas causes, I wanted to do something closer to home - so I’ve chosen 2 charities. First MusicSpace, a music therapy charity based in Bristol, and second, Isabel Hospice based in Hertfordshire.
http://www.justgiving.com/musicparis and http://www.justgiving.com/hospiceparis for further information!
March 2012
03-02 – The Facebook IPO, and where it takes us
The Facebook IPO, and where it takes us
The long-awaited Facebook IPO is nearly upon us. While there has been plenty of speculation about the company’s valuation, less has been said about how it’s going to achieve the quoted numbers, and its potential impact on the market. These questions are inextricably linked, so here’s a quick summary of the first.
Calculations linked to its SEC filings suggest a ‘worth’ of $5 Billion. However, trading on the private exchange SharesPost implies a valuation of $80-85 Billion (understandably creeping up, given the suspense), and the general line is that the company could be worth as much as $100 Billion.
The term ‘worth’ should be used guardedly of course, given that share value is a construct existing entirely in the minds of investors. Some companies achieve the dubious distinction of a valuation that is less than their actual net assets – the stock broking equivalent of being worth more if you were melted down for scrap.
But just how much is Facebook’s valuation actually grounded in reality? The company turned over $3.7 Billion last year, and made a profit of $1 Billion. In other words, to pay back investors, it would take 100 years at current rates. So, there has to be more to it than that.
By way of comparison, Google reported revenues of $10.6 Billion in the last three months, and profits were up 6.4% to $2.7 Billion. Even with unforgivably shoddy maths, that equates to a return several times greater than Facebook’s. Google is currently valued at $180 billion, getting on for twice the suggested ‘value’ of Facebook.
Interestingly, at the time of its own IPO, Google was valued at 121 times annual sales but now trades at about 20 times – roughly a sixth of the multiple – which is, one could say, about the same proportion by which Facebook looks overpriced. In other words, on existing considerations Facebook shares are quite likely to drop to about 20% of their initial offer price.
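For anyone wanting to check the arithmetic, the multiples implied by the figures quoted above can be re-run in a few lines. The numbers are the ones already cited (and approximate); Google’s quarterly figures are annualised:

```python
# Rough re-run of the valuation arithmetic above, using the figures quoted
# in the text (all in US dollars, all approximate).

facebook = {"valuation": 100e9, "annual_revenue": 3.7e9, "annual_profit": 1.0e9}
google   = {"valuation": 180e9, "annual_revenue": 4 * 10.6e9, "annual_profit": 4 * 2.7e9}
# Google's figures are quarterly in the text, so they are annualised (x4) here.

for name, co in (("Facebook", facebook), ("Google", google)):
    price_to_sales = co["valuation"] / co["annual_revenue"]
    payback_years = co["valuation"] / co["annual_profit"]
    print(f"{name}: ~{price_to_sales:.0f}x sales, ~{payback_years:.0f} years to earn back its valuation")
```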
Where’s the growth coming from?
Let’s assume, for the moment, that the valuation is to be justified by growth. Additional revenues could come from three places – growing the subscriber base on the current model, providing additional services to advertisers, or selling additional services to subscribers. Facebook currently earns about a dollar a year per subscriber in terms of revenues – so to grow organically from its current base it needs to expand to five times as many users: about 4 billion, that would be, or roughly everyone on the planet who has the capacity to read and write.
Advertising is the company’s biggest revenue stream, but while advertising revenues are increasing, a five-fold uplift would bring a similar increase in expectations on the part of advertisers. One can only speculate how that would go down, even if the quality of click-throughs could be improved to reduce advertisers’ acquisition costs per lead. (Note: ad pricing increased 18% in 2010, while the number of ads increased by 42%.)
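A quick sketch of that growth arithmetic, using the figures above (the current user base is inferred from the article’s own claim that five times it is “about 4 billion”, so treat it as an illustration rather than a reported number):

```python
# Sketch of the growth arithmetic discussed above; figures are illustrative.

current_users = 0.8e9            # implied current user base (5x -> ~4 billion)
revenue_multiple_needed = 5      # the five-fold growth discussed above

users_needed = current_users * revenue_multiple_needed
print(f"Users needed at today's revenue per user: ~{users_needed / 1e9:.0f} billion")

# The 2010 advertising note: pricing up 18%, volume up 42%
ad_revenue_growth = 1.18 * 1.42 - 1
print(f"Implied 2010 ad revenue growth: ~{ad_revenue_growth:.0%}")
```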
It is difficult to see how this could happen without further erosion of individual privacy. The Facebook Open Graph, for advertisers, is predicated on the idea that not only are people prepared to give up their privacy on the premise of ‘free stuff’, but also that they will shop their mates. Or as Facebook puts it, “Helping brands connect with your friends.” You’ve had a Starbucks or bought some Levis – perhaps your pals would like to do the same?
So, what about selling additional services? Facebook’s biggest area of growth (up five-fold in 2010, to $557 million) is currently ‘payments’, i.e. ‘freemium’ apps such as Farmville, in which you can pay for additional stuff such as a rake or a bale of hay. The explosion in the gaming market overall suggests this could be an area of considerable growth for Facebook – but, as the company itself mentions in its 22 pages of risk factors in the IPO prospectus (LINK: http://www.sec.gov/Archives/edgar/data/1326801/000119312512034517/d287954ds1.htm#toc), much of this growth could be outside of the online platform’s control.
Speaking of the prospectus, the company is a lot less verbose when it comes to future strategy. The model is the model – and indeed, many of the company’s announcements (such as Open Graph, say) do, in hindsight, look like they have been implemented in order to give the IPO a bit more meat. The plan is to grow by continuing to do what the company is already doing, with no suggestion of other rabbits to pull out of the hat.
“More of the same” includes mobile, possibly the most intriguing area for the company. Facebook is the most downloaded app on smartphones today, which is great news for the company’s footprint. Not so great for advertisers, nor for developers as the mobile Facebook client doesn’t support ads or apps. Indeed, this is quoted as a risk for the company – the IPO document suggests incorporating sponsored stories into the mobile news feed, but again, this is untested.
The broader impact
What, then, of impact on the wider market? All of the above suggests that if Facebook is to grow, it will need to start eating the lunches of other companies. Ruling out the unlikely scenario that existing punters have even more time to spend on social networking sites, Facebook’s required gain will require a level of pain from other providers, in particular Twitter and Google. Twitter, which is probably Facebook’s biggest contender for eyeballs, is not going anywhere, and Google is arguably the most creative of the three.
If the majority of Facebook’s revenues are from advertising meanwhile, the company will have other online advertising models in its sights. Like Google, Facebook operates a walled garden approach. While both offer a more attractive, ‘joined up’ proposition to advertisers, given the wealth of information they are building (rightly or wrongly) about their subscribers, the broader advertising world is also evolving – for example with web beacons and retargeting of ads.
Advertisers are currently investigating every potential alleyway down which they can connect with customers. Whether people like such models is a moot question – the important thing right now is that they work, and that advertisers will be clinical in how they assign their budgets, according to detailed analysis of Cost Per Acquisition (CPA) and other metrics.
Which brings us, inevitably, back to the question of privacy. You may be of the “Privacy is dead, deal with it” school of thought, but this makes some quite dramatic assumptions about both forward behaviours, and the fickleness of youth. Generation Z, we are told, doesn’t give two hoots about privacy. But equally (and following the same, misguided stereotyping logic), it doesn’t give two hoots about anything. Brand loyalty is everything, but loyalties can change very quickly.
Where else could Facebook have an impact? The answer to this question returns to developers. As things stand Facebook offers a highly scalable, globally accessible, advertising funded application platform, suggesting the real battleground for the company is more with Platform-as-a-Service companies such as Microsoft, Amazon and indeed, Google than with social networking providers. The latter have been targeting corporate apps and standalone startups; whereas Facebook’s offer is decidedly consumer (with undertones of Apple vs Microsoft).
Developers lead to startups, and whatever comes out of the IPO, Silicon Valley is likely to gain a number of angel investors (LINK: http://www.kernelmag.com/comment/opinion/1279/the-age-of-angellist/) with positive attitudes about the future success of the company, and hence the platform. Startups lead to diversity, and diversification could well be the key to Facebook’s future – though as the current model goes, it is more likely to bask in the reflected glory of those building on its platform. One can almost imagine a ‘Facebook Inside’ marketing campaign in a few years’ time.
Could Facebook itself diversify?
Given Mark Zuckerberg’s mission statement, “To make the world more open and connected,” there is no reason why Facebook-the-company needs to stop with Facebook-the-social-portal. The idea of Facebook smartphones, or indeed, Facebook SIM cards has already been suggested. However, to spread its wings it will need to enter areas of the market where it lacks first mover advantage. It has not succeeded in winning over email and messaging; it isn’t a player in videoconferencing; it doesn’t have a stack play; it has no enterprise offering. And to put it bluntly, the grudging acceptance of the Facebook consumer brand is unlikely to translate into the corporate space.
Perhaps, quite simply, Facebook’s future remains inextricably tied to the future of social networking. The company was always a bet, and it remains one. Thus far it has paid back in spades. Mark Zuckerberg and his merry men discovered a seam of gold that nobody had yet spotted – whether it got lucky or not is irrelevant. The real question is whether that seam of gold goes even deeper, or whether Facebook will have to compete more directly with other companies in the future.
The fact is that for a generation, together with Twitter, Facebook is social networking. However, while the company has achieved a staggering level of success and defined a whole new method of global communications in the process, the company’s ambitions in terms of the future of social remain relatively paltry.
Somewhere along the way, Facebook seems to have confused personal interaction with genuine, emotional engagement; equally, if the company doesn’t sort out a more solid (i.e. revenue-earning) mobile strategy, someone else will. The future in all its reality-augmented, near-field, sentiment-analysed glory will be device-independent, and Facebook’s business model (while not its reach) is still very dependent on the computer browser which, we are told, is seeing its twilight years.
So, will Facebook’s IPO have an impact? About the only thing we can say with any certainty is that a very short list of people are going to become very rich indeed.
[A shorter version of this article was originally published on The Kernel]
03-08 – Public Technology: CloudStore is only the start of needed procurement reforms
Public Technology: CloudStore is only the start of needed procurement reforms
I understand a collective sigh of relief was heard as the UK Government CloudStore finally came into being this month. The pent-up frustration experienced by government departments when it comes to IT procurement now has a solution, at least for cloud-based services.
A G-Cloud catalogue entry equates to pre-vetting, and in many cases therefore, the difference between procuring a service or not. Whereas in the past it was both too costly and too complex to acquire a cloud-based option, such hurdles have now been considerably reduced.
Elsewhere of course, it’s procurement business as usual. Current conversations with public sector organisations take me right back to my days as an IT manager for a large organisation, where I had to order everything in bulk to counter the overheads of procurement. An order had to be at least 500 quid, or it just wasn’t worth it.
The procurement challenge impacts not just IT but business stakeholders, in that the only conversations worth having are for the big stuff. Need an entire new IT system? Let’s talk. But if you just want a new feature enabled on an existing PBX, sorry, not worth our while.
Whatever the higher-level views around consumerisation or cloud, the fact is that IT costs are becoming increasingly fragmented. In this world where users are increasingly doing things for themselves, it should be possible to buy a tablet, or a printer, or an ‘app-for-that’ without needing to sign forms in triplicate or drive to head office. I know of one example where staff buy their own printer paper from the local supermarket because it’s just too painful to wait.
The other knock-on is, of course, with the smaller supplier. Despite the fine efforts being driven by central government to provide more business to SMEs, the overheads inherent in the process make this very difficult. Even if the playing field is levelled, small companies simply can’t afford the lead times that government procurement processes still entail.
Equally, even knowing who to talk to can be a challenge. If you’re an IT supplier with a new, innovative solution and you want to find out whether anyone in the MOD, or HMRC, or the NHS might be interested, who should you talk to? In all honesty, even if you find an appropriate person, the chances are they’re going to be too overwhelmed with everything else to have the time.
At a macro level, several procurement efficiency initiatives are under way and frameworks are being developed (such as the Public Services Network), both within and across departments, as there have been in the past. I genuinely wish them all the best, with one caveat. Any programme of change that is going to take more than two years to implement will almost inevitably be overtaken by events.
Perhaps one day, all procurement will be as straightforward as the CloudStore is intending to be. It’s still early days, and there are no doubt lessons to be learned (for example, assuring checks and balances around demonstrable business ROI on purchases). But we can hope that it becomes a flagship example of how procurement can be done, if the will is there to change.
[Originally published on publictechnology.net]
03-20 – Towards The Quantum Democracy
Towards The Quantum Democracy
Towards the quantum democracy?
I shouldn’t have been that surprised, when I checked the IP address of online campaign site Avaaz.org and found it terminated at a New York data centre facility. Nor, for that matter, to discover that 38signals.co.uk is using another US-based hosting company (this makes more sense when the search also turns up that the company is using the same agency as Barack Obama’s campaign).
I’m not blowing anyone’s cover here, I just typed their addresses into a box and did a reverse lookup. You’d be a poor hacker indeed if you didn’t know how to do that already. (Not that anyone would want to hack campaign sites, would they? Interesting how both are using the DNS Made Easy service, which was hacked ‘for unknown reasons’ (LINK: http://www.tcpipworld.com/dns-made-easy-suffers-from-break-in-ddos-attack/216) a year or so ago.)
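For the curious, here is a minimal sketch of that kind of lookup using nothing more than Python’s standard library. The hostnames are just examples; what comes back depends entirely on where a site happens to be hosted on the day you try it:

```python
# Forward and reverse DNS lookup, standard library only.
import socket

for hostname in ("avaaz.org", "example.org"):
    ip = socket.gethostbyname(hostname)                 # forward lookup: name -> IP address
    try:
        reverse_name, _, _ = socket.gethostbyaddr(ip)   # reverse lookup: IP -> PTR record
    except socket.herror:
        reverse_name = "(no PTR record)"
    print(f"{hostname} -> {ip} -> {reverse_name}")
```

The PTR record, where one exists, is often enough to reveal the hosting provider and, from there, a rough idea of where the servers sit.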
The more pertinent point is that both sites (and others) are dependent on hosted services. Indeed, you’d be a fool to create such a site and run all the infrastructure yourself, particularly if you don’t know at the outset whether anyone’s going to buy into what you are campaigning about. You might as well use processing delivered by a third party, and benefit from pay-as-you-use models whose costs only rise as usage scales up.
It’s very difficult to know what’s going to need to scale in advance. The keyword is ‘viral’ – and we’re all well aware of the viral effect a well-planned online campaign can have. For a recent illustration, look no further than Invisible Children’s Kony 2012 video, which has been watched 100 million times on YouTube and Vimeo since it was uploaded a month ago.
For every Kony 2012 however, there will be a thousand videos that didn’t grab the imagination. Campaigns are like pop songs. Anyone can create a ditty, record it and upload it (even if some people really shouldn’t), but that doesn’t guarantee an audience. The caring populace has too many pulls on its valuable time.
But neither do all ideas need to be as expensive to express as this particular 30-minute piece. With the free hosting facilities and blogging tools now available, anyone with the smallest amount of Web literacy can create a page, share it with their friends and see where it goes. Got a concern about the fate of your local armadillo sanctuary? Feeling miffed at the cleanliness of park benches? Want to do something about the demise of the apostrophe? If an issue is burning you up inside but you don’t know whether anyone else shares your anxiety, it takes only a few clicks to find out.
Current campaign sites want more certainty before they launch – which is one of the reasons campaigns from the bigger sites fit reasonably rigid criteria. For example, Avaaz has adopted a process of peer review, followed by testing suggestions on a smaller poll group, before launching campaigns on a wider scale. Good ideas percolate through, hitting targets of relevance, currency and emotional engagement before making the big time.
The downside with this approach is the perception that supporters are not participating in anything at all – columnists such as Micah White have branded (LINK: http://www.guardian.co.uk/commentisfree/2010/aug/12/clicktivism-ruining-leftist-activism/print) such activity as ‘clicktivism’. “In promoting the illusion that surfing the web can change the world, clicktivism is to activism as McDonalds is to a slow-cooked meal. It may look like food, but the life-giving nutrients are long gone.”
While Micah White may have a point (LINK: http://interorbis.wordpress.com/2011/07/15/a-million-tiny-gestures/), he’s missing another – which is that the campaigns themselves don’t have to start in the headquarters of the organisations currently driving them. Online tools such as Spigit use game theory to let good ideas percolate themselves, and there is no reason why this model couldn’t be extended to armchair activism; meanwhile, it would cost pennies to launch multiple campaign sites, each nuanced differently to see what sticks, even with content changing dynamically according to behavioural measures. Without, Mr White, involving any of those nasty marketing types.
That’s just two examples to illustrate just how far we still have to go with public participation in world events (and I haven’t even mentioned Twitter yet). With the kinds of hosted functionality now available, we already have the tools and services not only to ask the world every single permutation of a question at once, but also to process and act on the answers in real time. When considered against the fickle nature of the human race, the result could prove to be more of a curse than a blessing but whatever the outcome, we are on the brink of finding out.
April 2012
04-04 – The Kernel: The day the music died
The Kernel: The day the music died
Do you believe in rock and roll? Can music save your mortal soul?
When asked what American Pie actually referenced, Don McLean is reputed to have said: “It means I never have to work again.” While he may have been among the handful of lucky ones to have achieved hit record nirvana, many pundits still hanker for the days when all hopeful musicians had to do was send a thousand demo tapes, tour their backside off and wait for glory.
Those days are gone, of course. The music industry is dead, gone, shuffled off this mortal coil; mere carrion to be picked apart by maudlin journalists. Behind the pronouncements about the current state of the biz from commentators, analysts and industry bodies lies a conundrum, however. Namely that people insist on continuing to listen to the damned stuff.
Today’s youth appear to be the worst culprits – not content with simply graffito-ing bus shelters or terrorising grannies on park benches, they are doing so against a back-beat of new (and un-danceable) tunes. And their generation-punk parents are no better, fathers and sons glued to YouTube and Spotify.
So, what’s really going on? Have the harbingers of industry doom got it nailed? Are we heading towards a world where creative acts return to busker status, mere sideshows in a Simon Cowell theme park or earnestly grateful participants in some free music utopia?
As with all such questions, the answer lies somewhere in between. For sure, there’s real pain being felt by many participants in the current music business, and potentially real gains to be had. If only the participants could work out where, and how.
Make the bad noises stop
Take a certain perspective on music’s recent fortunes and it is easy to get morose, particularly if your annual bonus, or indeed your mortgage depends on them. Total album sales plummeted steadily from 1999’s figure of 940 million to only 360 million in 2010, with only a minor blip in 2004. Not a great story to tell the bank manager.
The industry has put the blame firmly at the door of torrent sites and other music piracy tools, claiming up to 95 per cent of all music is illegally distributed and therefore unmonetised. While “protectionist” music publishers have been painted as the bad guys in the affair (they can, at least, be censured for their outrageously wishful thinking that every single illegal download corresponds to a lost sale), it’s difficult to imagine anyone in the same position taking a more positive stance.
Indeed, many artists have expressed similar anxiety at this erosion of revenues. Not just Metallica; many smaller, independent bands have suffered all the pain without having a major label’s lawyers to defend them. The comfort blanket of a record deal has proved scant comfort for many a good act, as cash-strapped majors have made efficiency savings by removing all but their fattest cash cows (such as Pure Reason Revolution, culled by Sony/BMG in 2006) from their rosters. To no avail. Of the big six, once-proud behemoths of music, two have already been assimilated and a third (EMI) is in the process of being absorbed, pending antitrust checks.
Digital music business models continue to baffle, meanwhile. Spotify is repeatedly being put under the spotlight about whether it pays decent royalty rates to artists, a topic on which it prefers to keep schtum (the answer: probably not, but have you any idea how much those negotiations cost?). Meanwhile, Apple’s iTunes has gone from industry darling to monopolist by taking a 30 per cent cut of every song sold through its store, even as it “owns” 70 per cent of the legal digital download pipe.
And still, hopelessly hopeful young bands hold on to the dream of getting signed by a record company, even as they upload their crown jewels to the free-for-all that is YouTube, SoundCloud and their ilk. What chance does anyone stand?
It’s not all awful
Of course if you are Coldplay, Cliff Richard or Kasabian, life remains a peach. You can record an album, head out on tour or release a DVD in the knowledge that each will more than cover its costs. “The people making it tend to be those who have already made it,” says Ed Averdieck, formerly of Nokia Music and OD2. Frustrations about artistic freedom and unfair contracts aside, today’s Don McLean types remain in a strong position.
As for the biz, recorded music sales are actually up, for the first time in seven years. Fingers crossed that it isn’t another seven-year blip, like the one seen in 2004. Other segments of the market are also doing well, including live music revenues which increased year on year until a fall in 2010 (which could be put down to stadium-filling bands simultaneously not touring).
Move outside of the traditional music business, and a wealth of new possibilities abounds. Radiohead has shown all and sundry how it is possible to be a commercial success while still cocking a snook at the establishment; many bands have proven beyond doubt that the pre-sale model can fan-fund albums or tours; the market for premium versions of albums such as Pink Floyd’s Immersion editions confirms a continued appetite for both physical music and associated flummery.
That’s before we even get to the rosy-cheeked optimism of music-related start-up businesses. It’s hard to believe that Spotify only opened for registration three years ago; even more astonishingly, YouTube (30 per cent of downloads are music videos) has been around for a mere seven.
It all adds up to evidence, surely, that the industry is alive and well? Perhaps. The trouble is that nobody seems to know what’s going on, nor have any real control over the direction things are going. “Music is messy, organic and human,” says Dan Crow, CTO at live music site Songkick. “You can’t force it to happen.” But with business being all about a guaranteed return on investment, where exactly can more certainty be found?
Speculate, accumulate?
We can all speculate and philosophise of course, throwing out concepts and theories like long-tail or crowdsourcing, drawing Dilbert-esque bar charts and pie charts, linking boxes and arrows. We can represent or publicise the interests of one group or another, be they rights holders, publishers, retailers or industry players, individually or in combination. And we can all hope, in the process, that simply saying these things makes them so.
Or, indeed, we can just give it a go. Many start-ups and large company initiatives appear to be games of “what if”; a random progression of combinatorial experiments, each testing out new selections of features and services on an unsuspecting public, to see what sticks. And when we find, in hindsight, that only one out of a hundred ideas had any legs, we claim the experiment to be a huge success.
Perhaps that’s a bit harsh. Individual success stories shine out as clearly as the problems they are trying to solve, such as Songkick, an online service that helps people log their musical interests and sends an alert when a particular band is playing in the local area. “Our mission is to get more people going to see live music,” says the company’s CTO, Dan Crow.
A straightforward ambition; and when faced with the number of half-empty venues at so many live events, one that clearly needs tackling. The ticketing industry itself is in turmoil, as agents, sponsors and venues test out business models (including the nauseatingly oxymoronic “convenience fee”), balancing their own interests with the need to get punters through the door. “There’s up to six layers of intermediary between the artists wanting to perform live, and the fans wanting to see them,” says Crow.
For every Songkick, however, there exists a field of wannabes and also-rans, as well as the Cinderellas that are yet to find their golden slippers. In the live events space alone, for example, there’s LiveMusicStage which claims to be focusing more on the social side (like Songkick isn’t… well, yes it is), Clubbillboard which is more about clubbing, the recently deceased Plancast… the list goes on.
Don’t get me wrong, I have nothing but praise for anyone who chooses to try to build something new and fresh, and there’s more than enough cynicism (particularly in the UK) to go round. Pretty clearly however, not all such ventures stand a chance of succeeding. It is also difficult to shake the feeling that many such setups are features, not companies. “We’ve created a mechanism for jazz listeners to share their favourite moments on Facebook,” for example.
Okay, I made that one up. But given the cauldron of opportunity, the danger of looking too closely is that all you will see is diced carrots. To understand the future, it is important to go back to the source.
Going back to basics
At the heart of all music-related sales lie six fundamentals. First, that people will continue to make, and subsequently listen to music. A raft of speculation exists on this topic, which all seems to reach the same conclusion: we do it because it satisfies some basic, Maslow-esque urge. So, one way or another, we’ll carry on doing so.
Second, despite common sense suggesting the contrary (“The natural inclination is to sell direct,” says Averdieck), intermediaries provide vital links between supply and demand. Managers, labels, distributors, retailers, broadcasters and agents all play their part in helping music reach its hungry audience.
The idea of disintermediation may offer a debating point but in reality each role serves its purpose. Some have discovered that they can perform certain functions themselves – self-managing bands, for example – but the management role doesn’t go away, it simply moves in-house.
Trying to remove a link in the chain can have disastrous consequences. “The music labels tried to cut out retailers, but that failed spectacularly, as retailers listen to their customers,” recalls Averdieck, which illustrates a third point, that vested interests will continue to skew the model.
Forget romantic notions of altruism: all parties are human, and therefore subject to the deadly sin of greed. Equally, grey areas of favour exchange, horse trading, back scratching and transactions that don’t make it onto the books will always exist, particularly where both money and creative types are involved. Indeed, in its worst excesses, the music biz is almost as bad as politics.
Despite this the fourth truism is that, contrary to the popular myth, people are genuinely prepared to pay for music. Sure, we are lazy and opportunistic, taking the lower road where it exists, looking for a bargain and casting a blind eye on our own hypocrisies. But as shown by Spotify’s impact on BitTorrent, we’re perfectly willing to take the legal route if it presents itself.
Turning to the artists, the fifth principle is that having a big break was never easy, and never will be. All the technology in the world will not make every hopeful ensemble successful overnight, however talented. Apart from the lucky few (and we can all win the lottery as well), most fledgling artists will still have to do their time playing basement bars, handing out flyers and taking down their own kit if they want to make it big.
Finally, the notion of celebrity, or brand association, or whatever you want to call it, will continue to colour the musical business model. People pay a premium for Nike, Hollister and SuperDry because they want to wear the badge, and it’s the same for artists. Fans do like music, for sure, but they also like to be associated with their heroes, and are frequently willing to bang the proverbial drum in support of their favourite acts.
Musical information age
While all the technology in the world can’t change these fundamentals, it certainly offers an opportunity to do clever things. Whatever you want to call it, the interactive web, social networking or music 2.0 all try to capture the “new” ability we have to inform, share and comment on each other’s musical preferences. So many start-ups are based on the simple premise that people like talking to each other about music.
Tools from Soundcloud to Bandcamp facilitate and simplify the online experience for bands that don’t have major label marketing muscle behind them. What they don’t do is replace the need for direct interaction on the one hand, and a concrete rationale behind it on the other – as illustrated, for example, by the patronage model employed by artists such as My Life Story’s Jake Shillingford.
The two-edged sword of technology also serves in music’s publishing, delivery and monetisation. While it could be argued that YouTube, iTunes, Spotify, Pandora et al have already cornered the market, there appears to be plenty of room for others to try their luck at “reinventing music delivery”, such as the now-defunct mFlow and its successor, Bloom.fm.
The opportunity for bands to promote themselves today appears greater than ever, which raises an interesting spectre. Traditionally, artists could get on with being creative, leaving all that grubby marketing stuff to the labels. Not only might a virtuoso guitarist believe that self-promotion was for others to do, he or she might also be rubbish at doing so. In this brave new world, is there actually room for softly spoken musical genius, or will only the loud survive?
Even once an act has seen success, it will still be faced with the familiar challenge of keeping the back catalogue monetised. Familiar because, in the old days, records (like books) were subject to a certain production run. Today’s digital models enable a production run of “one”, but only (and this is the weakness at the heart of the Long Tail model) if people know where to look.
This issue is tackled by recently launched CueSongs, a rights buying portal started up by the aforementioned Ed Averdieck and musical pioneer Peter Gabriel. The platform makes it easier for corporate music buyers (think films and advertising) to access content which, when faced with the idea of having yet another Moby track on an advert or, please God no, not *that* Elbow riff as used by the Royal Wedding, is welcome news indeed.
“We’re looking to move artists higher up the value chain, connecting the growing market for licensed music tracks with commercially released repertoire,” says Averdieck. “Licensing a track for commercial use was entirely manual. Our aim is to streamline and simplify the process.” And, as a result, to also help rights owners get back catalogues into the sunlight once again. “All these rights are sitting in the basement in contractual straightjackets. We thought, if we could only give them a canvas…”
Technology has a role to play when it comes to enabling traditional music models to happen, according to understood principles and business models. The bar isn’t so much raised as moved to a different, online playing field. While the internet isn’t paved with gold for musicians and their representatives, the opportunity is at least to go where the action is.
But surely technology can be doing so much more for music? Well, yes, maybe it can.
The new frontiers
The sheer volume of information being generated around music is as staggering as it is unquantifiable – or at least, nobody has yet managed to quantify it. By way of illustration, however, four billion videos are watched on YouTube every day, and we know that roughly a third of these are music related. Meanwhile about five million viewers a day are commenting on other social sites, comments that beget other comments.
Add to this the tracks played on Spotify or via Soundcloud; add all other related social networking activity, on other sites; add data around actual plays via connected devices; add commentary about who’s been to see whom and where; add metadata, advertising click-throughs and historical sales information; and you would start to build a very interesting picture indeed, if only you could make sense of it.
The self-effacing term is “big data”, which gives quiet mention to both the staggeringly vast pool of information, and the enormity of the challenge of making sense of it. Record labels are already throwing global sales information into the number cruncher, using the results to forecast demand and organise distribution.
And meanwhile, some start-ups are tackling the challenge of mining online sentiment from accessible social sites and generating useful results. For example, We Are Hunted and The Sound Index, both of which generate “charts” based on what people say they are actually listening to.
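At its simplest, the mechanics behind such charts are nothing more exotic than counting. Here is a toy illustration – the data is made up, and the real services layer API feeds, weighting and sentiment analysis on top of something like this:

```python
# A toy 'chart from social data': count mentions or plays per track across
# sources and rank them. Purely illustrative; no real service works this simply.
from collections import Counter

mentions = [
    ("Track A", "twitter"), ("Track B", "youtube"), ("Track A", "soundcloud"),
    ("Track C", "twitter"), ("Track A", "youtube"), ("Track B", "twitter"),
]

chart = Counter(track for track, _source in mentions)
for rank, (track, count) in enumerate(chart.most_common(), start=1):
    print(f"{rank}. {track} ({count} mentions)")
```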
The opportunities are legion, not least in the live arena, thinks Songkick’s Dan Crow. “There’s a huge opportunity for a much more holistic, data-driven approach based on real evidence,” he says. Not only could this benefit performers and fans, but also owners of clubs and concert halls. With the right data at their fingertips, they would be better able to pitch to performers to come and play at their venues.
Thinking about the experience, plenty more can be done with all that data. Time for another buzz phrase, this time “augmented reality”, or the ability to add information to a current situation (as viewed, say, through a video camera on a mobile phone). Google Goggles is one indicator of the shape of things to come. Meanwhile, applications such as Soundhound and Shazam (which, in the simplest terms, can “listen” to music and tell you what the title is) are also examples of apps that combine captured information with that available from an online database.
Clever as they are, capabilities from We Are Hunted to Soundhound are only starting to scratch the surface, providing a rear-view mirror onto what has already happened. Things start to get exciting when they are seen as the basis of new decisions, or indeed experiences.
A purely speculative example involves putting together a touring plan. Yes, information about where the fan base was stronger would help the band make better decisions about where to play. But equally, if venues knew of the tour plans, they might also want to propose the band stop to play in their own town, en route? And, potentially, stream it live, if the facility to do so was available to them?
And new experiences? We’re already seeing bands experimenting with audience involvement in the creative act, for example, simply letting fans get on and record shows. “We’re seeing bands experimenting with more open policies around the recording of gigs,” says Songkick’s Dan Crow. “The line between recording and live is blurring, we’re going to see far more of it.” Bands like Marillion have taken this one stage further by recording a gig through the camera phones of attendees, then editing the footage together, into a music video.
Enough speculation, but for a final acknowledgement: that creativity doesn’t have to stop with guitars and keyboards. The DJ/producer phenomenon that started in the era of Frankie Goes to Hollywood has evolved to such an extent that today’s chart-toppers are just as likely to be DJs (think David Guetta and Skrillex), something which the industry as a whole is struggling to get its head around. “DJs and producers were never really invited to the rock and roll party,” says Karl Nielson of Dubstep/D&B media company AEI Media. “The industry was built for white guys with guitars.”
Similarly, it becomes harder and harder to distinguish the boundaries between music and other spheres of creativity, entertainment and engagement. Examples from The Great Global Treasure Hunt to the multimedia stage adaptation of Howl’s Moving Castle by Davy and Kristin McGuire are indicative of both the shape of things to come and the kinds of revenue models that become possible, if only the industry itself chooses to pick them up and run with them.
There’s all to play for
Despite all the possibilities that technology now affords, we are perhaps not all that far from the fundamental principle of lobbing a shilling in a bucket in gratitude for being entertained. “If you can please the artists and the fans, the rest will follow,” remarks Songkick’s Crow. And all that “the rest” implies – managers, labels, broadcasters, marketers and everyone else involved in the production, delivery and performance process.
Perhaps the trick is to recognise that the traditional music industry was ever only a chain of intermediaries that became too powerful in one ecosystem, itself currently being dismantled to make way for another. Nobody has a monopoly on the future, but rest assured that major players will emerge, and in twenty years’ time will be the established music business.
While on the surface it may appear that incumbents are being hoofed out of the nest by a new generation of cuckoos, in truth, the last labels standing have as much chance as Songkick and the rest, if they choose to grasp the nettle. “Music should be challenging, it should be entrepreneurial, it should be pioneering,” says Karl Nielson. “Record companies should be the most exciting places to work ever! Musicians are already going out on a limb, we should be as brave as the artists we are representing.”
Technology offers a wealth of possibilities, but more important is how it feeds the human desire to make creative use of such tools and perform the results to the largest possible audience. Nor will it change the business imperative to “become number one in the space”, or the tendency of a few bad apples to upset the whole barrel. These are the ingredients we are given.
The winners in the music space will be those who crack the code, either through careful assessment or by stumbling upon the magic key, and then (did anyone mention MySpace?) not killing the golden goose.
In the meantime, just because a certain business model has existed for a few decades, that doesn’t make it the best way of doing things, and it should rightly be dispensed with if it gets in the way. Music might not be able to save any mortal souls but right now, it certainly offers a host of opportunities to any party with the gumption to get in the driving seat. In the words of Ed Averdieck: “Someone else was going to crack it – why not us?” Why not, indeed.
04-16 – Race report: the Paris Marathon
Race report: the Paris Marathon
It must have been the beetroot detox smoothie. After 6 months of what I believed was less than adequate preparation, including a pulled hip muscle in September and a fortnight of man-flu in February, and carrying a stone more than last time, I took on the 26 miles of the Paris Marathon and emerged relatively unscathed.
The race itself – suffice to say, if you’re going to do a marathon as a one-off, it’s a good call to do it in a relatively flat, major city with lots of landmarks. Starting on the Champs Elysees, heading past Concorde, the Louvre, Place de la Bastille, the Seine tunnels, la Tour Eiffel, l’Arc de Triomphe, plenty of sights kept the interest levels up.
Only the two parks at either end of the course dragged a little. And even these had their advantages, particularly for the weaker of bladder. Indeed, I’ve never seen so many people dart in and out of the undergrowth. So, yes, thoroughly recommended. But enough about that.
Meanwhile, I was keeping that certain sense of dread at bay with a concerted effort to keep performance expectations very low. My last (and only previous) marathon in Brighton didn’t go so well. I started with best intentions and very good company, and I confess, a feeling that I could beat the odds. I ran too fast for 15 miles before realising the mistake as, over the next 11, I experienced what it might feel like to have iron nails slowly inserted into my thighs.
So this time I was determined to keep things slow, consistently sticking to between 10 and 10:30 minute miles. All the way round. As a result, I got round without hitting the wall, and without any real pain until the final half mile. I even came in a bit faster than last time – 4:42, rather than 4:45. Without feeling in the slightest bit smug – I’d been in the same position myself – I continued at the same loping pace, passing innumerable people in the final 3 miles.
Highlights: just doing it, heading down the Champs and other long, straight avenues with thousands of like-minded people; seeing my lovely family at frequent vantage points (hurrah for the Metro); the tunnels, difficult to explain but it felt a bit sci-fi; the Eiffel Tower; the buckets and sponges; the foolish but “heck, why not” snifter of wine 2 miles before the end; the innumerable brass bands, drum troupes and rock groups; the massages.
Less good: while I thoroughly appreciated the orange quarters and banana sections, the resulting mash of skins and the chaos of trying to pick them up caused at least one bad fall that I saw.
Overall, if you’re going to be mad enough to run an entire marathon, then Paris is as good a place to do it as any. That, coupled with the simplicity (and cheapness!) of a €70 entry fee rather than having to enter a lottery or guarantee £1,000-plus of charity fundraising. If it was my last (I have a funny feeling it won’t be) then I will have finished on a high.
P.S. If you did have any spare change, I was raising money for a music therapy charity and a hospice so feel free!
04-18 – The Kernel: London Book Fair
The Kernel: London Book Fair
On the table lie a banana, a note pad and a business card. Nobody is seated, which makes it an exception; most other tables at the Penguin Books stand are occupied by earnest young things and mature, sober types, chest-deep in conversations and scribbled notes. It’s much the same at Random House and Harper Collins, each but a carpeted corridor away, back to back with a host of smaller publishers of every kind of book imaginable.
Each stand is lined, sometimes top to bottom, with books. Books are everywhere. Books, books, books, in all shapes and sizes, on every topic imaginable. The collective offerings from publishers and resellers, industry bodies and government agencies stretch across both halls of Earl’s Court. Come to the London Book Fair and at first glance, you would think the information revolution never happened.
“It’s just a façade,” one person (who’s been coming to the show for 18 years) tells me. “Everybody knows that the world is moving on.” E-books may be the future yet real business is being done at these tables, slots allocated according to the carefully planned schedules of publishers, retailers, rights agents and other intermediaries. One seller told me that diaries fill up months in advance, for what is a highly significant element of the publishing business calendar.
So, where precisely is the future of publishing to be found? While a scan of the floor plan reveals a Digital Publishing section, it feels more like the naughty corner than the brave new world. Activity is decidedly muted in the Digital Lounge – indeed, an empty booth offers the perfect place to write this piece, far away from all the hubbub.
The twenty or so vendors and device manufacturers occupying the digital enclave project the feel of a technology-based event. Banners proffer world-changing mantras, while heads of marketing and pre-sales engineers engage with passers by. No real business will be done here; success will be measured in badges scanned, conversations had and leads generated.
It’s a conundrum. If anyone had told the publishing industry that its models were dead, it clearly forgot to listen. Or perhaps, it is so deeply ensconced in wheeling and dealing that the geeks can’t get a word in edgeways. A few interactions (notably with a particularly disdainful head of commissioning, though I did nick a plastic bag from the stand shortly beforehand) left me feeling here was a closed shop, insiders only need apply.
Or perhaps, given that time is money in any business, rapidly compressing margins have left publishing industry decision makers with no time but to do more of what they have always done? “No time to say hello, goodbye! I’m late, I’m late, I’m late!” exclaim the notepad-carrying white rabbits as they hurry between half-hour appointments.
A small part of me – the techno-evangelist which I generally try to keep in check – wants to run through the hall screaming at everyone, like the character in the film who has had advance warning of the impending disaster, the epidemic or the tidal wave. But a bigger part whispers in my ear that the situation deserves quiet reflection. “Don’t do anything hasty,” it says, Ent-like. My head is buzzing with literary references.
So, where’s the truth? All is not necessarily well. A section of remaindered book companies bears witness to the shotgun models still employed in modern publishing: their tables groaning under the weight of celebrity bios, pasta cookbooks and obscure histories. “That’s not a good sign,” says a lady at the desk of The New York Review of Books stand. “I never expected there to be so many.”
I am told, by others, how the show is a crush of egos, carried forward by its own, historical inertia. How the publishing industry could learn so much from the music industry, which (despite its own inadequacies) is so clearly ahead of the game. Ironic given how much gloom (LINK: http://www.kernelmag.com/features/report/1740/the-day-the-music-died/) pervades the latter.
But meanwhile, there’s something strangely compelling about the sheer volume of books carried by the show. Some are in nondescript or tacky covers, but others have been produced with careful attention to packaging, to tactility and tangibility. Indeed, whole companies are dedicated exclusively to the cover rather than the content. “What’s the future of the publishing industry? It’s us!” they tell me.
As I head back to collect my coat, I stumble across a ‘tweetup’ – an eyeball opportunity for the Twitterati – at the Illustrators Bar and find myself plunged back into the digitally enhanced world. Some familiar faces, such as the founders of audiobook streaming service Bardowl, are present as well as both authors and publishers. I’m told that the digital track of the London Book Fair has certainly progressed, moving from vision a couple of years ago, to discussions around workable strategies.
Whatever the future holds, whatever new possibilities are offered by e-books and apps, streaming tools and social networks, certain principles hold true. Sure, the industry will undoubtedly change, alongside habits of both writing and reading. Sure, new opportunities for (that horrible word) monetisation exist, even as some older models wane.
But it is difficult not to maintain a sense of optimism about the publishing industry. Even the most cynical seemed positive about the future, for example in terms of the increasing quality of writing, the broadening opportunities for distribution, or the growing market for ‘special editions’, with digital capabilities augmenting, not replacing what has gone before.
A final view over the stands, stretching away from the upper-floor cloakroom, allays any fears about all that tangible goodness becoming no more than pixels on a screen. For all its precious egos and old habits, its bluster and self-importance, the future of publishing is bright. And it will involve books. Lots of books.
04-19 – Cloud Computing Will Not Create Jobs
Cloud Computing Will Not Create Jobs
Cloud computing will not create jobs, good business will
Technology is great, isn’t it? I mean, where would we be without all these computers? I’ll put my cards on the table: I’m delighted to be living in the here and now, in the midst of the information revolution.
More than that, I feel profoundly lucky. Every morning I jump out of bed with an extra spring in my step just contemplating all the exciting ways in which our business, cultural and social lives can be improved through the use of such powerful capabilities.
I really believe the potential impact of IT to be profound. Which is why I am genuinely astonished when poorly constructed research, based on a tenuous premise, sees the light of day. Like, for example, IDC’s recent study (LINK: http://www.microsoft.com/en-us/news/download/features/2012/IDC_Cloud_jobs_White_Paper.pdf), sponsored by Microsoft, about Cloud Computing’s role in job creation. Read it and weep.
The “rationale”, we are told, “is that IT innovation allows for business innovation, which leads to business revenue, which leads to job creation.” From this simple statement is derived pages and pages of extrapolation based on another assumption – that jobs created “match the industry mix by job function.” We are told of an “equation with offsetting elements”, but we’re not told what it is.
Now, one could go into detail on specific figures based on industry sectors, regions or designated market areas – New York is going to see six times more growth in cloud-generated jobs than Los Angeles, apparently. But there really is little point, you see, as the basic starting point for the whole discussion is complete and utter bunkum.
Outside of startups, those hundreds of companies employing hundreds of staff, there will be no jobs created as a result of adopting cloud computing. None. Zero. To think that there will be, displays a completely skewed and naïve view of both how business actually works and the role of IT within it.
Think. You’re a general manager at Wal-Mart, or Pfizer, or SNCF or the NHS. You’re making strategic decisions about what needs to be done to get more competitive, grow new markets, deliver services more efficiently. Then the CIO tells you that you can use Amazon’s infrastructure, or Salesforce.com. Whatever your first reaction is going to be, it’s not “Wow, that means I can get rid of all that legacy computer power, become more innovative and take on a bunch of new hires.”
Business life is much, much more complex than that. And Cloud Computing is still at an early stage. If it even exists – few with any technical nous believe ‘private cloud’ to be any more than a marketing rehash of dynamic IT management mechanisms that have been espoused for at least a decade. Sure, the capability to provision virtual machines gets us a whole lot closer to the dream, but few IT departments are sufficiently dynamic themselves to fully exploit the benefits.
Despite these inconsistencies, according to the report, the cloud model has already accounted for the creation of nearly 7 million jobs worldwide – that’s right, we’re almost half way to the headline figure of 14 million new jobs by 2015! Oh, if it were only true in these troubled economic times! Sadly, it’s simply not, unless we start counting every single intern who has been allocated a Basecamp login.
The reality is that Cloud Computing is a good thing, presenting a new set of procurement and management options that organisations large and small are still getting their heads around. Meanwhile, savvy business leaders are looking to grow their revenues, taking smart decisions and the occasional gamble, building relationships with key customers and developing new products to meet emerging needs.
Within that framework, of course technology plays a role. The financial services sector has huge opportunities to use information for both better governance and more innovative products. Manufacturing, pharmaceuticals, retail and all other sectors may be just scratching the surface of how they use technology. Government stands to benefit both in how it uses services, and the services it delivers to citizens and agencies.
But to suggest that Cloud Computing has a direct role in job creation is a massive distraction. Worse than that, it’s a clear illustration to business decision makers that IT is still completely out of touch with how business actually works, and the role of technology within the financial and operational framework of the enterprise. If we really want organisations to reap the rewards of the cloud, we’re going to have to do better than this.
04-24 – Cleaning lady’s response to Werner Herzog
Cleaning lady’s response to Werner Herzog
The universe. Profound in its enormity. The detailed atomic structure of a particle in the depths of space, simultaneously illustrating both immeasurable breadth and infinite detail. And likewise we are specks, motes, purposeless fragments floating in our own, great beyond like those very flakes of skin and soil, buffeted by Brownian motion and attracted by still-unexplained laws of gravity and static onto your shelves and the inner surfaces of your electronic devices.
And yet even as we, these minuscule, scattered, inconsequential shavings, mere powder filed from that universal grand design, even as we somehow avoid the inevitably grim consequences of our improbable existence, marking each cycle of our tiny planet round the sun as a notch on some celestial stick of achievement, we still strive for explanations, from fulfilled prophecies to punctuated equilibria. We share this common urge for meaning and yet we have different priorities, each of us loosely woven filaments in the threadbare carpet we call civilisation.
Speaking of carpet, you might want to try vacuuming yourself sometime. You might enjoy it.
04-26 – Another one bites the dust - Roadrunner Records
Another one bites the dust - Roadrunner Records
And once again, a bijou record label that has been developed from the ground up through hard work and dedication sees the next logical step as accepting the advances of a larger, more powerful and seemingly equally dedicated organisation. Perhaps indeed, the initial intentions are sound, and new artists join the fold in the knowledge they’re getting the best of both worlds.
But then comes the reality - that despite an upturn in fortunes, business is tougher than expected, there’s an economic downturn don’t you know. The finance people juggle the figures, which senior management are forced to acknowledge mean tough decisions. Which are taken, to the detriment of both staff and artists large and small.
Farewell, Roadrunner Records.
Addendum - here’s the final section of Roadrunner CEO Cees Wessels’ original statement:
“…We can take Roadrunner to the next level by focusing our resources on marketing our existing line-up of acclaimed artists as well as discovering the stars of tomorrow.”
May 2012
05-21 – Why aren't you passionate about open standards?
Why aren’t you passionate about open standards?
A central government consultation on ‘Open Standards’ is underway right now, as instigated by Francis Maude MP and the Cabinet Office. The goal, as stated on the Web site, is to support ‘sharing information across government boundaries and to deliver a common platform and systems that more easily interconnect.’
Obscured by the animated debate taking place both online and offline lie a number of quite serious issues that could affect government departments and citizens alike. So, what’s it all about? The rationale for the consultation is based on UK.gov’s current mantra of ‘better for less’, which means putting less money in the hands of suppliers, whilst improving the delivery of public services.
From a tax payer’s perspective that sounds like a goal worth striving for, so just how can open standards save money? Taking the two words in reverse order, let’s start with the easy one – ‘standards’. The history of computing is a tale of getting things to connect together, pass information or share a common data store. It’s a fair assumption that interoperable systems and services run more efficiently, and therefore cost less for the same result, than their isolated or difficult to integrate equivalents.
Of course, not all prospective standards make the grade. X.400, that super-resilient messaging standard, has shuffled off to the elephant graveyard, as has Open Systems Interconnection. Some get superseded; some (like punched card layouts) are now lost in the sands of Internet time. And others become de facto even though they’re not the best – for example, rather than adopting X.400 we slog on with the less reliable SMTP. And it seems to be doing alright.
So, yes, standards are A.Good.Thing. A bigger challenge appears to be around that delightfully vague and multi-faceted word, ‘open’. Ignoring the publicly broadcast debate for a moment, the battle lines seem to be drawn around a single core meaning of the word – that the cost of storing, transmitting or accessing public information in any given format should be zero. Zilch. Nada.
Which sounds reasonable. If you’re wanting to file your tax return, you shouldn’t have to pay for the privilege. Equally, if you are a researcher wanting to access census data, a healthcare worker wanting to access medical guidance, or a commuter checking bus times. The problem isn’t so much the actual information – the mapping co-ordinates, meeting minutes or contact lists – but more the formats in which such information is stored.
Commercial organisations are currently making quite a success out of selling (aka licensing) certain information formats, and/or ensuring that their own tools are the de facto access mechanism. There’s nothing wrong with this in principle - a TV manufacturer for example will have a complex set of arrangements with its suppliers, licensing a video compression standard here, buying in chipsets and other components there. They’ll then build the TV and flog that, and expect to make a reasonable sum on the deal.
But what about public information? Governments are moving service delivery online because of all the supposed benefits – cheaper, less paperwork and so on. Indeed, the recently launched ‘Digital by Default’ initiative is focused on exactly that. But is it reasonable to expect departments or citizens to pay a premium to certain companies, simply to do the things they used to be able to do via paper? Similarly, if a government can adopt (or otherwise come up with) a standard for document storage that is free-to-use, should they be spending tax payers’ money on licensing formats from computer companies?
The answer, of course, is ‘it depends’ – but while every case needs to be considered on its merits, a number of gating factors exist. The first concerns just how many people are impacted, and by how much – for example, filing tax returns affects every adult in the country. The second concerns the benefits which may be achieved by using a certain pot of data, or a certain format, over the costs to the nation of doing so.
Third, we have the question about whether any alternatives exist – there is no point (particularly with technology) sticking with a certain way of doing things simply because that’s the way it’s done at the moment. This does, however, have to be balanced against the costs of making a switch. Finally, there’s the question of how much the absence of open standards gets in the way of other activities, such as starting a company, delivering new products and services, providing jobs or otherwise influencing the competitive landscape.
That all sounds pretty reasonable, and well worth discussion. However, the only topic that appears to be debated is the last one, and this from the perspective of vested interests. Incumbent vendors are acting like protectionist oligarchs, complaining how they will be unable to innovate if their own approaches (which involve paying them money to license formats) are not adopted. No doubt this argument is partially true, but their real panic comes from the potential of losing a lucrative revenue stream. From a hard-nosed taxpayer’s perspective the response is, clearly: tough. Or to put it more politely – as a nation, let’s pay for things that add genuine value, not those that don’t.
Meanwhile we have ‘open’ lobbyists who seem to think that everything commercial companies say is immediately, and irretrievably, corrupted by the forces of capitalism. That may also be philosophically unassailable, but the resulting polarisation has led to much of today’s confusion – particularly when more time appears to be spent on the suitability of those involved, or on the meaning (all parties are guilty of this) of terms such as “free” and “reasonable” when it comes to licensing models, than on the more important questions that would ensure such standardisation activities lead to UK-wide value.
Perhaps the reason such narrowly focused discussions have come to the fore is that, quite simply, nobody else is doing any talking. Given that the consultation has been extended until June 6th, let’s finish with a call to action: for public sector organisations and their broader suppliers, for strategists and front-line staff, for citizens across the board to add their voices to the debate. Only by understanding the broadest range of views will the Cabinet Office have the information it needs to make a decision. And saying nothing is tantamount to accepting the outcome, whichever party ends up benefiting the most.
[Originally published on publictechnology.net]
June 2012
06-22 – I, Technology: Surface and Voice – a marriage made in geek heaven?
I, Technology: Surface and Voice – a marriage made in geek heaven?
Microsoft’s launch of its Surface Tablet was bound to whip up a storm of controversy. However good the product was going to be, the cries of “rip-off!” from the loyal ranks of Apple users were as inevitable as the game of “No, they didn’t think of it first, it was…” backwards leapfrog. My take: the clipboard thought of it first. Or Moses. But anyway.
Despite this, first reports give the impression that the company really has pulled a rabbit out of the hat. While the hardware spec certainly does give Microsoft’s partners a run for their money, the tale contains a couple of clever twists – not least, that the operating system is being standardised across both phones and computers. Barring tweaks and chip-specific limitations, that’s one heck of an app store.
I’m grinning as I write the next bit. The one (more?) thing, the as-yet unmentioned consequence is that full-fat versions of voice recognition tools from the likes of Nuance will, from November, be available on a robust enough tablet device to make them sing. For the record, Dragon Dictate remains the only real contender in this space, simply because other products don’t work as well.
“Sure,” say the fans, “We have Siri.” Mark my words well – Siri is a gimmick, as is the version recently announced by Samsung for its S III. A good gimmick, but a gimmick nonetheless, the fat finger equivalent of a voice-enabled input device. Which requires an Internet connection to work – so there go applications for planes, trains and cellar bars.
Apple no doubt knows this – it also knows that future versions will have far broader capabilities and applications, so it doesn’t mind too much. For all the glitz with which it was launched, Siri is a straw man, thrown over the castle walls to get the peasants used to the idea before the gates open and the real deal is marched out.
We don’t have to wait more than a few months, however. For all its training requirements, fully-fledged voice recognition has been raring to go for years. With one limitation – nobody, outside of films, has ever linked the capability to an appropriately formed device. Enter Surface, the Intel-based Windows tablet that can. And on stage right, voice recognition, its perfect marriage partner.
Now, let’s not let things run away with themselves. Nobody (least of all me) is suggesting that voice recognition will cause us to dispense with all other methods of human-computer interaction. Also, both Apple and Microsoft will bring their own recognition tools to market – we’ll see how those get on. And finally, nobody should be at all surprised when new form factors (such as those made possible by projects such as Google Glass) move the tablet back down in the rankings. The time has come, however, for this particular input option to finally deliver on its potential.
[Originally published on ZDNet]
July 2012
07-16 – The Kernel: Fresh fields and pastures new?
The Kernel: Fresh fields and pastures new?
Britain is a great place to do business – or so we are told. But Government and media attention tends to focus on London – perhaps not surprisingly, given that, despite the relatively recent moves in the broadcasting industry to Cardiff, Salford and so on, a great deal of the Government and media is in London as well.
The regions have – or did have, as in the case of One North East – their own development agencies, and larger provincial cities such as Manchester and Leeds have made a reasonable stab at drawing non-manufacturing businesses to their metropolitan bosoms. But more rural areas have had only limited inward investment, and even less publicity. So just how realistic is it to set up a high-tech business in the hinterlands?
An example of a company bucking the trend is Cotswold-based mobile gaming company, Neon Play. Business success is never a given: clearly there is more to starting a company than setting up and pressing a big, fat “go” button, whatever the location. All the same, it seems we can learn something by teasing out the more geographical threads from the Neon Play story.
The first thing to note is that the founders of Neon Play, Oli Christie and Mark Allen, were not in the area by accident. The disturbingly languid Cotswolds plays host to a sizeable creative and marketing community, much of which has been driven out of EHS Brann (now EHS 4D), a digital agency established 25 years ago that helped Tesco launch its Clubcard.
Cheltenham is another centre of creativity, illustrated by its recent design festival and the fact it is home to SuperGroup, best known for its Superdry brand. Other parts of the country have their own specialisms, built around similar success stories: for example, Bristol has animation (Aardman); Newcastle has software (Sage); Cambridge has silicon (ARM).
As concerns evolve and people leave their respective motherships, setting up as freelancers and forming alliances, an ecosystem of smaller firms can begin to grow organically, operating locally and often under the national radar. Like breweries on the Trent, Sheffield Steel or Kentish market gardens, it was ever thus.
An ecosystem is one thing, the logical extension of which took people into cities in the first place. But simply following the crowds to the epicentre creates its own challenges – not least, it can be the enemy of innovation. “Working independently from the big players gives us freedom and autonomy,” says Neon Play’s Oli Christie. “It enables us to come up with genuinely new ideas and drive the destination of our games.”
Deciding not to go with the pack comes with concomitant trade-offs, however, not least of which is attracting skilled staff. Ageism aside, it’s a fair bet that the designers, developers and other creatives at the heart of a gaming company will tend to come from a younger demographic – which won’t necessarily view the nightlife of a historic market town with untrammelled enthusiasm.
It’s not just the social challenges: larger centres come with a purpose-built community and work ethic. It’s impossible to walk through Soho or Shoreditch, for example, without getting a whiff of industriousness. “We knew that being in Cirencester wouldn’t be easy. Bristol would have been easier,” says Christie. “The culture is massively important. We had to think hard about making it stand out. We wanted to make it small, fun and passionate, a genuinely nice place to work.”
It’s not window dressing when the company offers “ten reasons to work with us” on its website, a list that covers everything from a quarterly bonus and extra days off to an inspiring office environment, no doubt taking a few leaves out of Google’s handbook. There’s beer on Fridays and “guaranteed posh bog roll”. (These things matter in the country.)
Another challenge is building the right relationships to both win business and maximise publicity – both important factors in the lottery-like mobile apps market, in which only about 1 per cent of apps make any decent money for their developers. To adapt the phrase incorrectly ascribed to Willie Sutton, people do business in London because that’s where the money is.
While other developers have moved lock, stock and barrel into the smoke, Neon Play has settled on a compromise, involving frequent commutes for its more publicly-facing executives. “The new intermediary is the App Store,” says Oli. “It’s difficult to stand out against half a million other apps – we’re constantly trying to stand out on Twitter, Facebook, YouTube, you name it.”
Indeed, while it may not be true for all businesses, it is difficult to imagine how a social gaming company could be successful today without an equally social online presence. The upside is, of course, that the provinces are not quite as disconnected as they used to be either, even at developer level. Employees can benefit from lower costs of living and genuinely reduced levels of stress, at the same time as interacting – at least virtually – with their city-bound peers.
Ultimately, the country idyll isn’t going to be suitable for everyone. But everyone has a choice, thinks Oli: “You have to decide where you want to work and start from there.” Neon Play and companies like it are more than bucking the trend: they offer stoneground proof that tech startups do not have to feel restricted to a few higher-rent and lower-air-quality patches of our green and pleasant land.
[First published on The Kernel]
07-29 – RIP Claus Egge
RIP Claus Egge
Claus Egge was one of the first analysts I met, and one of the first I admired. Calm, analytical (it helps), Claus never needed to be highly vocal to impress. When he spoke, he spoke with quiet authority. While I never knew him outside the analyst circuit, I often had the pleasure of his company and, from time to time, the benefit of his advice. RIP, Claus. You will be missed by many.
August 2012
08-24 – Sleeping rough, but not in that way
Sleeping rough, but not in that way

In about 6 weeks’ time I will be sleeping rough, under the banner of Byte Night. I won’t really be sleeping rough, of course. I will take my old, but functional sleeping bag, carry mat and reasonably healthy self to London for the day, bed down in the evening, have an awful night’s sleep (but think of the camaraderie), then head back the next day for a decent shower and that inner warm feeling that I’ve done something useful.
Call me old-fashioned, but that’s not sleeping rough. Sleeping rough means the feeling of nowhere else to go, no money to pay for the fares, no hot shower to look forward to. Sleeping rough could mean sofa surfing on occasion, or being offered a place in a hostel but lacking the wherewithal to respond. As a genuine rough sleeper, chances are I would have psychological challenges, demons in my own machine. I wouldn’t have asked for these; they may be in my DNA. Or, perhaps, I’ve faced, but not faced down, some traumatic experience, such as domestic violence, loss of livelihood or family. I might well have ended up with a drugs problem: that wouldn’t have been in my life plan either.
So, if I’m not really sleeping rough, why am I doing it? First, to help raise awareness of a growing problem. The credit crunch has hit people from all walks of life – since 2010 the amount of homelessness in the UK has continued to rise. According to charity Crisis, in Autumn 2011 the number of rough sleepers in the UK averaged over two thousand every night – an increase of just over a quarter from the year before. A sharp increase in a tip-of-the-iceberg figure which hides the true scale of the challenge.
Sleeping rough isn’t a simple decision, it’s a bottom point on a complex, downward-spiralling journey. A number of organisations are working independently and together to minimise the causes of homelessness, maximise the options for people who want to get out of the rut, and provide any palliative help they can for people sleeping on the street. These organisations need money and resources to do their work, and Byte Night is at its core simply a call for funds for Action for Children, which targets young homeless people - all donations gratefully received.
Finally, I have a lot to learn. Quite recently I became involved in a community interest organisation that is being spun out of homeless charity St Mungo’s. We don’t have a name yet (unless you count ‘Social Inclusion Enterprises’) – but the remit is simple – to employ low-cost technologies to respond to the needs of people at the periphery of society, be they homeless, long-term unemployed or otherwise disadvantaged for whatever reason. Services such as Voicemail4All for example, which can offer a communications lifeline, helping its users find a job or reconnect with family members.
So, no, I will not be sleeping rough on October 5, not in its truest sense. But nor should anyone else, not in this day and age. As long as ‘modern society’ feels it is acceptable to leave individuals curled up on street corners, we still need events like Byte Night to ensure that they, and their needs, are not forgotten.
08-30 – Landlines Set To Disappear
Landlines Set To Disappear
Landlines set to “disappear”? Don’t be silly
The gospel truth – or a staggering misuse of data? You decide.
Four years ago, journalist Nick Davies released a book that would turn the UK media world on its head. Called Flat Earth News, it exposed some of the more unsavoury habits of the UK press, most notably the practice of hacking into voicemail – a revelation which has led directly to the ongoing Leveson Inquiry.
The book also made a point about wholesale duplication of press releases, presenting them as ‘news’. Use of less experienced journalists, a shortage of time to ‘dig out’ stories or check facts, and weaknesses in the editorial process led to chunks of PR being quoted or even used as the headline. Smart companies and their agencies recognised this gaping hole in the armour of the news desk, exploiting it relentlessly.
The message that those days are behind us doesn’t yet seem to have reached some parts of the IT press. “Landline telephones set to disappear from UK offices,” states the press release (LINK: http://www.virginmediabusiness.co.uk/News-and-events/News/Landline-telephones-set-to-disappear-from-UK-offices/) from Virgin Media Business. “Landlines to … disappear/be obsolete/become redundant,” repeat the headlines of a number of popular IT news titles. It’s not worth singling anyone out – you can Google for yourself and read the articles concerned, including their cut-and-paste quoting of Tony Grace, COO of Virgin Media Business.
Which would all be fine, of course, if the CIOs surveyed said any such thing – a fact which cannot be read directly from the 380-word press release. Sure, the headline says “landlines set to disappear” but the lead-in paragraph is more vague – 65% of those surveyed say that the landline will disappear “from everyday use.” Which, if I’m not mistaken, means a completely different thing.
Read a little further and we find the PC is similarly threatened – according to 62% of respondents. And (here’s the killer), “in contrast, smartphones (13 per cent) are seen as the least likely devices to be abandoned.” That’s all we have from the research, but it does beg the question – what precisely were the CIOs asked? If 65% of CIOs feel that landlines are going to become obsolete, do 13% also feel that smartphones will become a thing of the past between now and 2017?
This is unlikely, just as it is unlikely that those surveyed will be ripping out their desk phones. Indeed, from the text we have, nobody actually said that – the key phrase is “everyday use”. Ask me a question about the device that I’m going to use for the majority of calls, and I’ll probably say my mobile. Ask me whether I would dispense with an office landline entirely, given potential issues of coverage, call quality and so on, and I would hedge my bets.
In the alternative universe where journalists didn’t simply quote press releases verbatim, we might have ended up with some more reasoned debate about the role of desk phones versus mobile phones, the relative advantages, the need for integration between the two and so on. But we don’t. And as long as we don’t, we will be fed a diet of hyperbolic statements that bear little resemblance to the world around us.
September 2012
09-14 – Update to Separated Out
Update to Separated Out

Has it really been ten years? The first edition of Marillion/Separated Out (The Complete History) was released in the autumn of 2002. As I had absolutely no idea what I was doing when I started, I wrote the story as I uncovered it, gleaned from interviews, references and anecdotes.
What emerged was a tale of band members making the music they wanted to hear, as a result developing a unique relationship with its audience. Since Marillion was first formed, the band’s music has lifted spirits, offered support through personal difficulties and become otherwise woven into the experiences of fans.
It is appropriate that Separated Out ended with the band riding the wave of affection of the first convention weekend. While the event was highly successful, Pontins offered an unlikely foundation stone for future success. The book’s conclusion was equally non-committal: “Extrapolate a few years more and all the dreams will come true: the hit single, the radio play, the household name across the globe. Or will they?”
We now know the answer. Not only has the band seen an unprecedented resurgence in its fortunes over the past decade, it has also produced some of its best music – from ‘Marbles’, through ‘Somewhere Else’ and ‘Happiness is the Road’ to the just-released and already acclaimed ‘Sounds That Can’t Be Made’. So much water has passed under the bridge, it seems hard to believe that there was any uncertainty.
Quite clearly, an update to the book is long overdue – indeed, nearly two years have passed since Lucy asked when this might be available, finally prompting me into action. When I reviewed the original content, suffice to say I wasn’t particularly happy with it – some sections were notably clunky. So I set about working through the text, tweaking, culling, nipping and tucking as I went - this process is now complete.
The next step is to get the story up to date. I will be looking for fan feedback on their own feelings about the band and its music over the past ten years - the highs, the lows, the touching moments… Please do email me at jonc (at) separatedout (dot) com or leave a comment at www.joncollins.net if you have any ideas or thoughts.
And meanwhile, in the words of the profoundly wise Karin Breiter, I shall “shut up and write.” Thanks for reading, and I hope you will enjoy the new edition.
09-30 – My Desert Island Dozen
My Desert Island Dozen
Every now and then I read a book which makes me feel simple, profound gratitude to the author. For having the gift of being articulate in the first place, but then, spending the time to pull together a work which could give me so much pleasure. As I’m just finishing such a book, I thought I would list the titles that have pushed the ‘profound gratitude’ button in the past, all of which would be great companions in the unlikely event that I should be washed up on a desert island with a crate of books! Here goes…
The Player of Games - Iain M Banks
While not all of Iain Banks’s books are as engaging as his best writing, they share a remarkable breadth and scope. Despite the aliens and artificial intelligences, they are profoundly human tales - of strength and weakness, power and intuitive wisdom. And when he does manage to crack the code, as in The Player of Games, the results are stunning.
Jude The Obscure - Thomas Hardy
My Thomas Hardy phase was an eye-opener, as I discovered not only a fascinating insight into still-recent rural life, but also that I could be interested in ‘the classics’ - indeed, what earned them this term. There is not much to be happy about in this book (though it is almost jolly compared to Tess of the D’Urbervilles), but a moving read nonetheless.
The Baroque Cycle - Neal Stephenson
Simply, wow. This many-thousand-page tale about the origins of currency and the dawn of science, written across three hefty volumes and with its multiple, intertwined plots, left me with the mother of all bittersweet feelings when I finally turned its last page. I very much doubt that Neal Stephenson is for everyone - he writes with a level of detail that could put many people off. I’m not one of them, clearly… more a welcome passenger on a continent-spanning journey with a master narrator. Start with the single-volume, wartime tale Cryptonomicon perhaps, and if that floats your boat, a world of wonder awaits.
The Seven Basic Plots: Why We Tell Stories - Christopher Booker
Mr Booker might have some dodgy views about climate change but from my standpoint he’s nailed this one - a book, researched and written over thirty years, about the underlying archetypes and plotlines within both traditional and modern tales. His conclusions offer a profound insight into the nature of consciousness, following as comprehensive a treatment of the novel as you are ever likely to get. It may not be the only explanation and is open to dispute but it is as good an answer as I have ever read to the question - why do we tell stories?
Coming from quite a sheltered, home counties upbringing, this book deeply affected me, wrenching my eyes open to the realities faced by those who have grown up in parts of the world where politics, prejudice and power plays become more important than the basic rights of individuals. I can’t remember if I saw the film ‘Cry Freedom’ before the book or the other way round, but together they forced me to think about just how lucky I was.
The Belgariad - David Eddings
I probably should have been in lectures when I devoured a near-constant diet of fantasy and science fiction novels. Each volume of The Belgariad (and The Malloreon after it) was eagerly awaited, as several months would pass between volumes. And meanwhile, Stephen Donaldson’s Chronicles of Thomas Covenant filled any time I had left - it’s a wonder I got a degree at all. I re-read the entire set a couple of years ago and they had lost none of their easy-going sparkle.
The Shockwave Rider - John Brunner
Don’t listen to what anyone else tells you. John Brunner invented the Internet, in all of its cyber-criminal, virus-ridden (Brunner called them phages) glory. And he did so in the mid-seventies. This short novel definitely falls into the category of ‘very important books which must be read’. A bit dated now, but still up there.
I hated history at school. Actually that’s not true - I really enjoyed the first two years of study, when we were told stories of battles and kings. Then the teacher changed and we were fed a diet of desiccated facts and figures. This book documents the early history of Britain, written by people who were there. I picked up a copy when I started to get into the Dark Ages, and it remains one of the best historical sources.
The Road Less Travelled - M. Scott Peck
“You must read this book!” is a reasonably common request, usually when someone has had a life-changing experience reading a self-help book which really seems to talk to them. I have two theories about such books - first, they are written by authors post-crisis, and second that they work when they fit with the personalities of those reading them. With Tuesdays with Morrie a close second, The Road Less Travelled is the one that worked for me, back in my early thirties when I thought I’d have my own mid-life crisis early. Nice to get it out of the way.
The Diving Bell and the Butterfly - Jean-Dominique Bauby
Also to be filed under “gratitude for life” is this book. How a man, paralysed by illness, could contemplate ‘writing’ a book using his only remaining physical ability - eye movement - beggars belief. A deep insight into what it means to be human, a quiet struggle against serenely uncaring adversity. Let them take everything away but my mind: I hope I would respond with as much dignity.
The Girl in a Swing - Richard Adams
If I had to pick one modern novel, it would be this one… a tragic tale about an Englishman who finds himself completely out of his depth in a relationship with a troubled heroine. Just the right mix of literary and real, and up there with the best that Faulks and McEwan could conjure. I think it’s a masterpiece; not everyone would agree, but that’s what makes books so, well, personal!
Fugitive Pieces - Anne Michaels
This is a profoundly powerful book, written about Jewish immigrants in Canada who are still coming to terms with their recent past. Beautifully written, deeply emotional yet at the same time gentle, quiet, measured and all the more moving because of it.
In a similar vein but a very different style, Pat Conroy’s book traces the story of a man coming to terms with losing his wife. The story exposes the lives of the people around him, each layer more traumatic than the last, right up to a no-punch-pulling finale. Heavy stuff but no less brilliant because of it.
October 2012
10-05 – A Passage To India
A Passage To India
The weekend before I was due to fly to Mumbai, I was starting to feel quite excited. A whole new continent and a whole new experience - having heard various things (positive and negative) about the place, I was keen to know in which camp I would find myself. And of course, travelling to any new place is always going to be a thrill.
On the Sunday night, twenty-four hours before I was due to fly, I went through some last-minute checks. “I know,” I thought, “I’ll take a look at the Indian Embassy web site to see if there’s anything I need to take.” Sure enough, there was something: a visa. I checked the Embassy opening times - visas were issued 11am to 2pm - not a problem, I decided, I could get the later train and arrive about ten minutes to eleven. Plenty of time.
The next day, well-rested, I caught the expected train and headed into central London, arriving at The Strand at precisely 10.50. The Indian Embassy was at one end of a D-shaped block, about 200 yards in diameter. As I approached, I started to focus on, then tried to suppress an emerging feeling of panic as I realised the queue stretching all the way round the ‘D’ was probably the queue for visas.
For a few minutes I stood near the head of the queue in a state somewhat resembling shock. An exhaled string of semi-expletives drew me to the attention of a tall Rastafarian who was leaning against a litter bin next to me. “What’s up mate?” he asked, and I explained my predicament. “Not a problem,” he said. “See that bloke there, near the head of the queue? That’s my mate, he’s got all our passports, he can take yours as well if you like.” A few seconds was all it took to realise this was the only option I had, so I handed over my passport nervously (he was a total stranger, after all) but with equal relief.
Tens of minutes passed; progress was glacial. After a while, the people at the head of the queue started to disperse. I wondered whether they had just got fed up… but my new companion explained they were just getting tokens. “They’re closing for lunch,” he said. “Only those with tokens can come back after.” My gorge rose, then quickly subsided as I saw his friend walking towards us, beaming and clutching a handful of tokens. Giving one to me, the pair wished me luck and went on their way.
Suppressing a desire to raise my arms to the heavens and shout “Alleluia!” I decided to check I was absolutely prepared - I’d made one mistake and I didn’t want to mess up again. I headed across the road to a branch of Prontaprint, which had wifi (and, as it happened, print services), and logged back onto the Embassy web site. Ah. “Don’t forget to bring a letter of introduction,” it said. No problem I thought, printing off the invitation email I had been sent. I was all set.
An hour later I queued, clutching my token like a schoolboy with a shilling, and before long I was granted entry to the dimly lit, teak-lined room that was the visa issue bureau. It looked like a cross between an old bank branch and a visiting room at a prison. No matter, I was in - I sat on the hard bench and waited my turn. Suddenly, so it seemed, my number was called and I stepped forward to the counter, all forms and paperwork complete.
So I thought. “I’d like a business visa, please,” I said, handing everything over. The man on the other side of the plexiglass had clearly been to international clerical bureaucracy school, that’s the only place I can imagine to learn the nuanced slowness that is the same the world over. Eventually, he paused. “Where is your letter of introduction?” he asked, so I pointed him to the email. “But this isn’t…” he spluttered. I was in trouble. I could feel the muscles in his back starting to knot in unison with mine. “You can’t… there is no way…” I was completely helpless. “What am I to do?” I pleaded, my face a picture of desperation.
He exhaled deeply and his shoulders slumped, then he rose again in his chair (“Here it comes,” I thought…). “I am very sorry, I have no choice,” he said, authoritatively now. “I’m going to have to… (exhale) …I’m going to have to give you a tourist visa.” Bang BANG went his stamp in my passport, which he slid back towards me. “Thank you. Good bye. NEXT!”
I nearly ran out of the place. The sun may have been shining, but if it wasn’t I probably would have displayed an unearthly glow. By now it was about 3pm, ample time to head to Heathrow, check in and make my flight. What about Mumbai? Suffice to say, I loved it - the people, the sounds, the smells. Brilliant.
A few months later, I happened to be talking to someone else who was heading to India. “Not going for a few months yet,” they said. “Off to the doctor’s next week, to get my jabs.”
“Jabs?” I said. “What jabs?”
November 2012
11-07 – Feedback to a budget hotel owner
Feedback to a budget hotel owner
Dear Sir,
We all love a bargain, and it can be quite galling to pay the kinds of rates some hotels see as normal for very basic facilities, so I applaud your pricing strategy. I also very much appreciated the warm welcome on arrival and being asked about my day, though perhaps then asking for an in-depth breakdown of my weekend was a little excessive.
You asked for feedback on how my stay could be improved, so I thought it was only right to offer this, in the spirit of goodwill you already displayed. Here are a few suggestions for future guests.
1. If you are advertising an en-suite double room, the term tends to be used when the bathroom has a toilet as well as washing facilities. Indeed, it more normally applies to hotel rooms which have a separate shower and wash room, rather than a shower cubicle and sink squashed in the corner of the bedroom.
2. Indeed, on the subject of ‘double room’, this normally refers to a room large enough to accommodate two people and not a small room with a three-quarter sized bed squeezed up against the wall leaving little space to access the bathroom facilities.
3. Which reminds me – perhaps you might consider more than one toilet between 14 rooms? While it was nice to get to know the other guests, some were a little agitated and not so inclined to conversation.
4. Many hotel residents, including myself, like to be able to leave a window ajar at night. Sadly this was not possible, for fear of dislodging the toilet roll stuffed in the cracks whose purpose, I can only assume, was to prevent draughts. You might wish to think about the logistics around that one.
5. I need to mention the pillows. I understand you may be responding to customer demand by providing harder pillows, though perhaps you may have tended to one end of the scale, if not beyond. Less attractive to guests might be their infusion with cigarette smoke, though whether this occurred recently or before the smoking ban was difficult to tell.
6. Thank you for the clean sheets and (eventually) hot water, which are ultimately the most important things of all. Thank you also for the sachets of shampoo and the bar of soap. I do wonder whether the latter can really be categorised as ‘gentle’, however, given that it was in fact a hard block of nondescript substance which refused to lather.
7. For reasons of simple flood avoidance, it is useful if shower doors can actually be shut. I did manage to dislodge one of them after a struggle, but the flakes of material this generated left me loath to attempt moving the other. This was a particular issue given the propensity of the shower head to flick off its mounting.
8. It is always a bonus to have coffee and tea facilities. However, given the small size of the room, I would question the merits of having a full-sized fridge, which limited access to the wardrobe. It might also help to have a kettle that can be filled at the sink without resorting to filling it using the tooth mug.
9. On the staircases, I understand that hanging pictures on the walls is intended to add to the ambiance, but their positive impact can be diminished if the pictures are left to slide to the bottom of the frame.
10. In the breakfast room, it can be helpful to label the vessels containing coffee and tea, particularly when they taste quite similar, so that guests can get a better mental image of what they are supposed to be drinking. The label on the “scrambled eggs” was much more helpful, as I wasn’t sure.
11. Finally, please do not leave pairs of underpants on the flat roof above the breakfast room, as this might be off-putting for diners.
I hope these suggestions for small improvements might benefit your establishment, and wish you every success in the future.
Kind regards.
11-22 – I, Technology: When is a spy-pen not a spy-pen? Don't expect a politician to know
I, Technology: When is a spy-pen not a spy-pen? Don’t expect a politician to know
You’ve got to hand it to whoever decided to use ‘spy-pen’ to describe the device that led to the recent resignation of the chairman of a Scottish college.
The term has everything: popular relevance, gadget credibility and just that frisson of edgy uncertainty. Damn right, it suggests. Whoever’s using one should either be saving the nation from evil tyrants or banged up.
The trouble is, the device at the centre of the controversy is no such thing. The Livescribe pen has been around for a good few years. Yes, it can act as a notes and audio-capture device, in conjunction with special sheets of paper. But calling it a spy-pen is tantamount to calling the average tablet device a spy-pad.
Kirk Ramsay, the chairman in question, stepped down after a row with Scottish education secretary Mike Russell, over a recorded conversation. “It’s quite a clunky kind of thing — not the sort of thing you can use without folk knowing,” Ramsay told The Scotsman. “I have had it for three and a half to four years — you can buy it on Amazon.”
The episode is a good indicator of the attitude to technology displayed by our heads of government. Note that no information was leaked, or intended to be. Merely the use of such a device was enough for Ramsay to have to consider his position.
In the worst case, it suggests that Arthur C Clarke’s tenet that “Any sufficiently advanced technology is indistinguishable from magic” holds true even for mainstream device use. While we no longer burn witches at the stake, it appears that practitioners of such magic should still be treated with the kind of distrust usually reserved for travellers and vagrants.
A more generous observer might consider such remarks in the context of the bumbling judge in the 1980s TV series Not The Nine O’Clock News: “Digital watch? What on earth is a digital watch?” he asked, before expressing similar incredulity at a series of innovations that would today only be found in a museum.
What’s particularly disappointing is that the Livescribe is actually useful, particularly for those who need to sit through long-winded meetings, which sometimes ramble off the point. The audio function — which is not enabled by default — can be very handy when, weeks later, notes that meant something at the time cease to make sense.
[Originally published on ZDNet]
December 2012
12-12 – A new kind of loyalty card
A new kind of loyalty card
12-13 – Marillion / Separated Out - Redux
Marillion / Separated Out - Redux
Sloe gin in hand.
Twelve years ago, I had an idea - and we all know how dangerous those can be. Still, I went with it. I was never 100% happy with the result, which is unsurprising given that I had never done anything quite like it before.
The opportunity to revisit Marillion/Separated Out was one that couldn’t be missed. There’s an element of déjà vu; the process wasn’t always straightforward, they never are. But the only thing that really matters is the content, which will still be around long after we’re all gone.
A big thank you to all at Foruli for making this edition possible, and for working so hard behind the scenes on the copy, the artwork, the whole package. To all at Racket of course, for initiating the project in the first place and for their support throughout. And to everyone involved in the text, the fans, the collaborators, and (it goes without saying) most of all Marillion, a band which has created the soundtrack for the lives of so many. To quote the preface: “This is your story: I hope I’ve done it justice.”
For more information: www.foruliclassics.com
2013
Posts from 2013.
January 2013
01-03 – Superdry - a non-technological success story
Superdry - a non-technological success story
Note: I drafted this last year, having had the good fortune to talk to Richy at the Cheltenham Design Festival. For one reason or another it was never published. And while the share price has had its ups and downs, the Superdry brand still holds its own!
Richy Baldwin stands next to a coat rail of t-shirts. Wearing jeans, a checked shirt and baseball cap he could be the bloke who brought the t-shirts in. Or a part-time musician. Or a sign painter. He is, in fact, all of those things.
He’s also Head of Graphics at Cheltenham’s success story, Superdry. Which means he’s responsible for every single logo and design that has appeared across the clothing range since its inception, 8 years ago. “How do you draw them?” I ask. “With paper,” he says. “And a pencil.”
While this may sound like a dialogue from the church of the bleeding obvious, it makes a welcome change from the world of Wacom tablets, airbrushing and CGI that seems to fill the pages of today’s creative magazines. We look at oil cans, made up to look like they’ve been filched from the back of some Grandpa’s garage, complete with bashes and brown stains. “Black coffee and a paint brush,” says Richy.
It’s all such a long way from technology evangelists and their wild claims about how the latest big thing is going to change the world. Superdry’s story is about a couple of blokes and a minor epiphany that inspired a clothing brand, marrying Eastern graphics and lingo with classic fashion.
Richy was one of the first on board, and has seen the company grow from a back room to a global brand. The well-documented story includes how David Beckham’s love of a certain Osaka design undoubtedly catalysed the company’s success.
What the story doesn’t cover so much is how the business continues to be run in much the same way, with the same people. And without the need for all that clever gubbins around social media monitoring or targeted ads. Indeed, the company doesn’t advertise at all - it doesn’t need to, as its customers are its best advocates.
All startups can learn from this. Not that technology isn’t useful - it can be fantastic. And not that the company ignores the potential of the Web - it employs a social media manager, for example. But that when business is done properly, based on an innovative idea executed well, technology takes a supporting role.
A bit like a piece of paper. And a pencil.
01-16 – What does “sustainable income” mean for authors?
What does “sustainable income” mean for authors?
This is a question that has been puzzling me for some time (and I understand that I am not the only one). I started to look into it but couldn’t find an exact answer - so I put the following together for the purposes of discussion.
Link here for a printable PDF. Feedback welcome, in any of the usual places starting with @jonno on Twitter or by emailing jon at this address.
Introduction
A burning question for many prospective authors is – what is a valid, sustainable level of income, and how does that translate into the output of authorship, i.e. the books themselves? Answering this question requires certain assumptions to be made:
- First, that authors – people who make money from writing books – need to eat. We have choices in this – of course, we could leave authorship to the domain of the independently wealthy, which would make for quite a limited group. But, it is assumed, that is not the preferred route.
- Similarly, if there are to be books in the future, printed or electronic, it will have to be worth the authors’ time to write them. Which leads to the second assumption: that people will continue to be prepared to pay for ‘content’ – including books. Writing is a trade; any other way lies madness.
- Third, that there will be a need for supporting services, around which an evolving industry of providers will emerge. Today’s publishers retain an inexplicable level of authority, which is fine for those who succeed. But new services are emerging around self-publishing, coming from traditional houses and elsewhere.
With these assumptions in mind, read on. First question – are we talking writing books as a full-time career?
Full-time vs part-time models of authorship
Authorship can only follow one of three models: full time, part time or hobbyist. Outside the bestseller lists, a good question is exactly how many established authors would qualify as ‘full-time’. Very few, is probably the answer (for reasons which will quickly become apparent).
A spectrum of part-time authorship exists, from people who have written a book in their spare time or to “tell their story” – these one-off authors had little intention at the start to write books as a career. Equally we have struggling writers, who need to supplement the income from their passion with ancillary work. The stereotype is working in a restaurant, but good money can be made from writing; indeed, numerous writers see books as simply one format they can work with.
Meanwhile we have the hobbyists – people who feel the need to scratch the creative itch by writing a book. Nowt wrong with that but the impetus to make money may be less than the desire to write. Or indeed, a subset of such writers may spend money to get a few hundred books published specifically for family and friends – somewhat disparagingly labelled as vanity publishing.
Suffice to say this piece is more targeted at people who are considering the first or second option – that is, who need to see a return on their investment of time and resources.
How much does an author need to earn?
A starting point is to think about what might constitute a “minimum necessary” salary for an author. Considering the full-time model first, this could again divide into two – whether writing is an overriding passion which trumps all other money-making opportunities, or whether it is simply an option – attractive, maybe – to be weighed up.
Let’s consider first the author doing it because they are driven to do so, in which case they will be prepared to compromise on salary. By how much? The minimum wage is £6.19 per hour. So, let’s say working 40-hour weeks as a writer, with 5 weeks’ holiday plus 12 bank holidays – that’s 223 working days, or about £11,000 per year, or (after tax) about £800 a month. It may not be enough to pay all the bills, but that’s what the maths says.
In the second example, we might consider writing as an alternative to other professions which require a certain intellectual bent. It is not unreasonable for a new author to aim at earning – what shall we say – £25,000 per year, which is, roughly speaking, what a teacher could earn as a starting salary. After tax, that equates to about £1,600 per month.
So, while the sky may be the limit, we can see £11,000 as an absolute minimum for someone that has bills to pay, or £25,000 to meet more reasonable expectations. So, how does that translate into book sales?
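For anyone who likes to see the working, here is the sum as a rough Python sketch (purely illustrative, using the figures above; the after-tax monthly figures in the text are approximations and aren’t reproduced here):

# Rough annual income targets for a full-time author (illustrative figures only)
MIN_WAGE_PER_HOUR = 6.19   # UK minimum wage at the time of writing
HOURS_PER_DAY = 8
WORKING_DAYS = 223         # 40-hour weeks, 5 weeks' holiday, 12 bank holidays

minimum_salary = MIN_WAGE_PER_HOUR * HOURS_PER_DAY * WORKING_DAYS
print(round(minimum_salary))   # roughly 11,000 per year

comparable_salary = 25000      # the teacher-equivalent scenario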
Using a publisher vs self-publishing – the financials
To state the obvious, the publishing world has transformed over the past five years. Not just in terms of the different methods available, but also the way they are perceived. Until quite recently, self-publishing was seen as a euphemism for vanity publishing, with all the baggage that carries. These days, it’s an acceptable alternative to publisher deals, and many established authors are following that route.
Looking at the traditional publishing model first, an author royalty equates to between 5% and 15% of net sales, depending on the type of book and the leverage the author can exert. Meanwhile, a printed book can cost between £5 and £15, depending on format. Going for the lowest end of paperbacks, 5% of £5 is 25 pence, so achieving £11,000 would require 44,000 sales: that’s a lot of books.
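Sketching that arithmetic out (again, illustrative low-end figures rather than anything contractual):

# Copies needed under a traditional deal, at the low end of the ranges above
royalty_rate = 0.05     # bottom of the 5-15% range
cover_price = 5.00      # cheapest paperback
income_target = 11000   # the minimum-wage figure

earnings_per_copy = royalty_rate * cover_price   # 25 pence per copy
copies_needed = income_target / earnings_per_copy
print(round(copies_needed))                      # 44,000 copies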
Of course the publishing model is more complicated than that. Special contractual terms for discounted sales, licensing, foreign editions, returns all add to the difficulty of allocating a single figure. The best anyone can say is that the figure will be plus-or-minus a certain percentage (25%?), the probability of which goes up with the success of the release.
There’s also the question of an advance, which once again comes in two kinds. The first, reserved for celebrities and literary big hitters, is the kind which may never be expected to be paid back. When Wayne Rooney received an advance of – well, whatever it was – it wasn’t on the basis that he would sell that many books, more that the publisher wanted to be seen to have him in their catalogue. The second kind of advance is, notionally, a repayable loan at zero-percent interest, paid to cover the bills of an author while writing. The downside is that the first few royalty statements will earn the author nothing.
The self-publishing model requires different maths. Start-up costs are zero (nominally, if you already have a computer), and the royalty model flips around – for example, Amazon and Apple iBooks charge 30%(?) of cover price for each sale. So, if you’re charging £5 for a book, you will see £3.50 of it, which suddenly makes lower sales figures matter less. Back at the minimum wage for example, you’d need to sell just over 3,000 books in a year – if you did all the work yourself, that is.
What work is there? Copy editors can handle about ten pages per hour, at rates of about £25/hr. So that 80,000-word novel, at 300 words per page, would require some 27 hours’ work (or £675). Then layout. If you’re going for an e-book, there’s very little to do to keep up with the major publishers – a “free” tool such as Sigil (the developer accepts donations) is all you need. For print, quotes for layout run at about £300 for a book of this length, and £275 for a cover. You may have artwork, which will add to the costs.
So overall, you’re talking about £1,250 minimum to get your book into a fit state for delivery. You could do these things yourself (though proofing your own work is never advised), or ask favours from friends and family – but seasoned people will do the job faster, with less aggro. A more realistic figure for self-publishing, then, is that you would need to sell a further 350 or so copies of each book to cover its costs. That’s around 3,400 copies in total.
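Put as a rough sketch (all figures are the approximate ones quoted above, and the 30% store commission is an assumption rather than a quoted rate):

# Self-publishing break-even, using the worked figures above
cover_price = 5.00
store_commission = 0.30                                  # assumed cut taken by the platform
per_copy_income = cover_price * (1 - store_commission)   # about 3.50 per copy

copy_edit = 27 * 25                                      # ~27 hours at ~25/hr for 80,000 words
layout, cover = 300, 275
production_costs = copy_edit + layout + cover            # about 1,250

copies_for_min_wage = 11000 / per_copy_income            # just over 3,100
copies_for_costs = production_costs / per_copy_income    # roughly 350-360 more
print(round(copies_for_min_wage + copies_for_costs))     # about 3,500, close to the 3,400 quoted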
Quantifying time and resource
While that still looks like a big number, these figures don’t have to be achieved with a single volume. There is no lower time limit on a book, beyond an absolute minimum of 10 days, which is about how fast a human completely in the zone can bash a typewriter keyboard.
More realistically and to follow Stephen King’s ‘On Writing’ advice, a reasonable writer should be able to churn out at least 1,000 words per day on average. It could be more – but on some days, it may be less, plus there is all that pesky research, structuring and characterisation/angle to be worked out, interviews to be done, places to visit.
All the same, with a target length of 80,000 words (again, there’s no rule here), that makes 80 days’ effort, or 4 months. Which, in principle, leaves the rest of the year for another two books. If following the traditional publishing model, this still means each book needs to sell 15,000 copies just to achieve minimum wage – which is still a bit of a choker when starting out, though not infeasible.
For self-publishers, the situation becomes even more viable. If you were able to write only two books in a single year, selling “just” 2,000 copies of each would start to pay the bills. Which is a good starting point and sounds achievable, though it may still be beyond the reach of many would-be authors.
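The same sums, per book rather than per year (again, only a sketch built on the assumptions above):

# Writing pace versus required sales, per the assumptions above
words_per_day = 1000
book_length = 80000
days_per_book = book_length / words_per_day       # 80 days, roughly 4 months
books_per_year = 3                                # in principle

traditional_per_book = 44000 / books_per_year     # about 15,000 copies of each
self_pub_total = (11000 + 2 * 1250) / 3.50        # two self-published books, two sets of costs
self_pub_per_book = self_pub_total / 2            # a little under 2,000 copies of each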
What of the part-time model?
All of the above has assumed the full-time model, of course. Many authors work part time, with writing supplementing their ‘day job’, or vice versa. Indeed, while it may only be feasible to write 1,000 words per day, the writing itself doesn’t need to take a full working day. And indeed, the required ancillary tasks may not fill the rest of the day either.
The part-time model reduces financial risk and enables books to exist where it simply would not have been possible otherwise. To reference Stephen King again, he pointed an aspiring Neil Gaiman towards writing a page a day – to paraphrase, by the end of the year you look up and you have a book. Spending an hour or two every day for a year may sound onerous, but equally, it is a good test of whether it was sufficiently important in the first place.
Equally, the model guards against the lead times of actually seeing a payment. A book is highly unlikely to deliver a return from Day 1. The traditional publishing model pays royalties only every 6 months, and advances are becoming harder to come by, so relying on book income alone is a high-risk strategy.
Increasing the chances of success – targeting and self-promotion
The final reality check is that nothing is guaranteed, in authorship or anywhere else. While the baseline figure of selling 3,400 self-published copies of one or more books may appear feasible, the fact is that many self-published books fail to sell even a tenth of that number. And meanwhile, following the traditional publishing model requires stamina, not least to cope with frequent rejection.
We can all learn a good lesson from publishers – that they are only prepared to publish what they believe will sell. The same discipline should apply when self-publishing, in that prospective authors owe it to themselves to have a clear view on the saleability of what they are writing. This includes the topic – while books do follow fashion, it remains easier to target an under-exploited niche with a ready, affluent audience.
Much of marketing is not rocket science, but it does require people to do things that feel terribly discourteous. Such as asking people whether they would like to buy something (and if not, why not). Most people don’t like selling, and creative people more than most. But it stands to reason that a self-published book will remain in its carton unless somebody (if not the author) is actively looking to shift it elsewhere.
Which brings us to the other reason to use a publisher – marketing. Publishers may feel like a hard nut to crack, but once they have signed a contract they will act in their own interests to make sure the book achieves its objectives. However, non-bestseller authors who are already active on social networks, who have their own websites and author pages on Amazon, might quite rightly ask what the publisher can add.
Conclusion – start from a realistic base
Over time, successful authors should be able to develop an income stream which can tide over the lean periods and enable more writing to happen. The financials are most difficult at the start of the process – when a prospective author needs to fund their existence for the months researching, writing and then waiting for the first payments to come through.
Most of the calculations here were based on the minimum wage, rather than the equivalent-wage scenario. To achieve the latter under the traditional royalty model, an author would require 100,000 sales in a year. To put this into perspective, Nick Hornby’s About A Boy has sold 800,000 copies, which is only eight times more. Bluntly, while you don’t need to reach the top 100 bestseller list, you do need a bestseller to break even.
In other words, authorship is not for the faint-hearted, and anyone who starts writing a book for commercial reasons needs to understand just how difficult this can be. Through simple economics, which are faced by the biggest publishers as much as the one-man bands, the cards are stacked against authors outside the elite.
This doesn’t make it impossible to break through. Success begets success – not only will a publisher want to back a horse with ‘form’, so will readers. While the old adage does apply, “We all have a book inside us – and for some, it is better kept that way,” a successful book doesn’t have to be a work of creative genius, but merely pique the interest and make someone turn each page to the end.
Ultimately, people who really want to write will get on and do so. From the figures, the best advice whatever model is chosen is “don’t give up the day job” at least until another revenue stream starts to kick in. Becoming an author is fraught with risks – most important is to be realistic about marketability, and to recognise that everything comes at a cost, even if it isn’t paid for. That being said, nothing is stopping anyone putting pen to paper.
References
Referenced with thanks:
http://from.io/UtRWo6 What is the typical royalty rate for an author on book sales? – askville.amazon.com
http://from.io/V8C7Hc How Much Should You Write Every Day? – Write to Done
http://from.io/W7zct4 SfEP suggested minimum freelance rates – Society for Editors and Proofreaders
http://from.io/13EP0Jo FAQs: Using copy-editors and proofreaders – Society for Editors and Proofreaders
http://from.io/Ya5iuv The top 100 bestselling books of all time – Guardian
http://from.io/ZYKWzG Stop the press: half of self-published authors earn less than $500 – Guardian
http://from.io/U1V4bI John Scalzi’s Utterly Useless Writing Advice – whatever.scalzi.com
http://from.io/V5NtrJ And You Thought a Royalty Involved a Crown – editorialass.blogspot.co.uk
http://from.io/V5Nzj5 Publishing Money Myths | Frost Light – jeanienefrost.com
http://from.io/S4PiGJ How Much Money Does An Author Make? – KalebNation.com
01-27 – An Origami mobile phone / smartphone stand
An Origami mobile phone / smartphone stand
Ever been stuck, miles from a gadget shop and needing to prop up your smartphone to watch a video or do some typing? Well, worry no longer - for here is the solution you have been waiting for - you can simply make one in a matter of minutes, no tools, no fuss. Read on to find out how.
Start with a piece of paper - in this case A4, straight from the printer tray.
Fold the paper across…
Then, fold the remainder backwards to make a triangle.
You can then cut/tear off the excess to make the square you need.
Now the real fun begins. Fold both corners in to the centre crease to make a kite shape.
Then fold one end over to make a triangle.
Fold both corners in and the triangle down. Note you will have to open the corners out again for the next stage - it’s just to get the creases!
Now’s the only slightly tricky bit. Crease from the point downwards by folding each side in - then crease from the bottom corners as well. You should end up with what’s shown in the picture.
Fold the corners back in now you know where the creases go, as shown.
Now fold the top corner down about an inch. Ready for the next, even more slightly tricky bit?
First fold the top over to make what looks like an elephant. Well, a little.
Then, take the “trunk” and tuck it into the neck. Make sure the end fits firmly as far as it can go. This is important.
Fold the whole thing flat…
Then, turn over and flatten.
We’re nearly done! Fold the lower edge (with the fingers) up by about a centimetre. You’ll notice that it’s quite hard to fold in the middle because of the end of the “trunk” but this is what gives the whole thing strength.
Then fold about a quarter of that up.
Pull the back out a bit and you’re about done - here’s the finished article!
That’s it - happy watching/typing!
September 2013
09-16 – Reasons why I may have just unfollowed (or followed) you
Reasons why I may have just unfollowed (or followed) you
In no particular order:
- Because I’m not absolutely sure who you are
- Because you haven’t tweeted in over a year
- Because I didn’t recognise you from your twitter handle
- Because you’re famous now and too busy to chat
- Because I know you socially on Facebook
- Because that’s not really you, it’s your news stream
- Because I reached out a couple of times, but… no
- Because you don’t follow me, and I just noticed
- Because you “follow” 100,000+ people – really?
- Because you do come across as a little bit of a knob
- Because we met and we got on, but we don’t share much
- Because my interests have drifted from your topic area
- Because I made a mistake – kick me!
Reasons I will follow you, if I don’t already:
- Because I unfollowed you by accident and just noticed
- Because we have met and I enjoyed your company
- Because you are interested in having a conversation
- Because you have interesting or funny things to say
- Because I value your opinion and want to engage
- Because your areas of interest overlap with mine
- Because I have worked with you, or may do
But most of all, because I see Twitter as a place for conversations, not a listening post, and I want to engage better with the people I do follow.
October 2013
10-03 – Another Morning
Another Morning
It starts just like the rest of them
Clothing shrugged and bleary eyes
Doubting Thomas, slippering awake
Stops and stairs, not yet for the day

Half-made choices mingling, mangling
Teaspoon stirs, marmalade jars
Shrieking gadgets sound the charge
Curtains swish, to let in the day

Flick the latch, hair, crisp air
Coat and keys, shoes now tied
Practiced purpose, best foot forward
Long stride, now set for the day.
December 2013
12-13 – Isn't a Rush-Chemistry paperback the perfect Christmas present?
Isn’t a Rush-Chemistry paperback the perfect Christmas present?
Well, ho, ho, ho, it’s that time of year again! We all know that a rock biography is the first thing on everyone’s Christmas list, so now’s your chance to pick up a paperback copy of Rush-Chemistry for £10 plus £3 postage. Just message me and we can sort out the details.
With only five biography-shopping days to Christmas, you don’t want to leave things in a…
12-31 – Happy New Year! And Baking.
Happy New Year! And Baking.
What a very interesting year this has been. Without going into painstaking detail, it started with me wondering if I could really hack it as an independent soul (I even applied for jobs - quelle horreur!) and ended with the thought that I wouldn’t have it any other way.
For the record I’m now working as an analyst for the illustrious crowd at GigaOm, chairing a fantastic technology incubator not-for-profit called Reconnectus, writing books and working in a variety of consulting, advisory and writing roles. Diary management can be a challenge.
But enough about that. If there’s anyone left on the planet that I haven’t bored rigid, I have been baking. Every weekend since September, in fact. For the record, baking has to be one of the oldest applications of technology conceived by the human race, taking us from mere hunter gatherers to, well, something more.
One of the fascinating things about baking is what it tells us about ourselves, not least our post-industrial-revolution, reductionist, misplaced view that technology can make everything better, and better, and better. To wit, Chorleywood, the village which gave its name to the process now applied to the majority of bread making in the UK.
(he paused, to put bread in the oven)
By the late 1980s only 3,000 independent bakeries existed in the UK - a by-product of our mistaken view that what comes in packets is intrinsically better than what is made by hand. The good news is, this number has now increased to about 7,000, and continues to rise. Why? Because, ultimately, bread tastes nicer when it is not subjected to mass production.
Technology is fantastic, isn’t it? As we live in the middle of some of the most dramatic changes ever experienced by humanity, we cannot but stop and wonder. Earlier today I documented my baker’s dozen (really) of technology predictions, ranging from the groundswell of smart to the orchestration singularity. Coming soon to a blog near you.
At the same time however, it presents a double-edged sword. Snowden and the NSA, hackers and spyware, email overload and online addiction, the challenges faced in numerous sectors not least music, publishing, retail and indeed the technology industry, all with a seemingly complete absence of control as to where it is all going.
I remain optimistic and stoic in equal measure, in the face of this increasingly data-oriented future. We need a new rule book - existing governance and legislation is repeatedly proving itself woefully inadequate, be it for corporations, individuals or indeed governments. Never has the proverb “May you live in interesting times“ been more accurate.
Equally, we can learn from the bread industry that technology sometimes enables us only to rob Peter to pay Paul, sacrificing quality or experience for lower prices or broader reach. It may take us another two hundred years, but we are smartening up - learning that the X Factor’s quick hit is ultimately unsatisfying. Simon Cowell could have come from Chorleywood.
The bread’s nearly done, so it is time to sign off for another year. Of all our most ancient industries, perhaps the second oldest is brewing - it isn’t hard to imagine how early tribes discovered other qualities of yeast, quickly realising that Man cannot live on bread alone.
On which note, the village hall and an evening of wine and song beckons. It just remains for me to wish you a delightfully relaxed New Year’s Eve and a successful, fulfilling 2014.
2014
Posts from 2014.
January 2014
01-01 – A Virtual Bill Of Rights Is Needed To Guard Our Data
A Virtual Bill Of Rights Is Needed To Guard Our Data
Rant: A Virtual Bill of Rights is Needed to Guard Our Data
Whether it’s internet giants or governments, a governance framework to curb information gathering is a must
Jan 24, 2014 10:30 am PST
While the origins of the Magna Carta are subject to some dispute (the original agreement was between the monarch and only two dozen barons, for example) few would doubt the influence the document, or its offspring, has had on the histories of the English-speaking nations. Whatever the scope of the original, the joint principles — that all people are ultimately accountable and that nobody should hold a position of absolute power — form the basis of UK, US and many other democracies.
We are generally comfortable with the idea that society before the Industrial Revolution was more primitive than today, at least in our wonderfully advanced Western economies, so the historical existence of despotic ruling classes that needed to have their wings clipped comes as no surprise. How content we all are to know that modern society is so much more developed, so much less like a fairy story than in days of yore.
Things were simpler then, or at least appear to have been from our technologically advanced, increasingly globalised vantage point. King John was clearly a bad ‘un, ruling in his brother’s absence like the entire country was his plaything. And so, as we combine history and myth, throwing in a Robin Hood here and a Friar Tuck there, we simplify and construct a reality to give ourselves the impression that we are better off.
History tends to repeat itself, however. Containing the First Amendment stating, “Congress shall make no law… abridging the freedom of speech”, the US Constitution and Bill of Rights were formulated in response to the perceived threat of tyranny coming from both outside and within the still-fledgling country. And in living memory (1950), the Council of Europe created the European Convention on Human Rights in response to both the horrors of the Second World War and the looming spectre of the Eastern Bloc.
We shouldn’t be perturbed that more recent events are harder to decipher than those of a thousand years ago. Information may be power but it also creates complexity, nuance, uncertainty… particularly when it keeps on changing. More challenging is that much of the problem is of our own making. Over the past five years, we have seen computer systems become powerful enough to enable more than a billion people to communicate ‘in real time’, to support an array of breakthroughs from identifying new particles to discovering cures to diseases.
Simple extrapolation suggests we haven’t yet even scratched the surface of what we are only beginning to make possible. We really are living in highly interesting, nay fascinating, times, as we swim amid oceans of information, the likes of which has never before been seen. As we splash and dive, however, we remain decidedly, even determinedly ignorant of the dangers that lurk beneath, particularly if information is left in the hands of people who are able to act above, or beyond existing law.
The fact that our governments have been collecting data on just about everything they can is, in hindsight, as inevitable as Facebook or Google’s trawling of our messages for details of personal interests they could sell to advertisers. As, indeed, is the fact that the authorities have proved themselves completely incapable of stemming the outflow of information as to what they have been gathering. Some commentators have been shaking their virtual heads in disbelief at the lack of public response — testament to the pervading sense of passivity.
And what of the whistle-blowers? Edward Snowden, Chelsea Manning and Julian Assange are not mega-minds holding the world’s data mountains hostage, but ordinary folks who, for whatever reason, decided to flick the software equivalent of a switch and who have gained notoriety as a result. They are not alone: a broad range of people, from Mark Zuckerberg to the NSA analysts who have spied on ex-partners, have been pushing at the boundaries of both equality and common decency as they discover what they can do with such a wealth of information.
The fact we need new frameworks for governance is so obvious that it hardly needs saying, and indeed, efforts are underway. However, ongoing initiatives are fragmented and dispersed across sectors, geographies and types of institution. For example the UN’s resolution on “The right to privacy in the digital age” overlaps with proposed amendments to the US ’Do Not Track’ laws, as well as Europe’s proposed ‘right to be forgotten’ (which has already evolved into a ’right to erasure’) rules.
Chances are that all such attempts to legislate will be superseded as new forms of information gathering and analysis develop. One only has to look at the number of cameras being installed on next-generation cars, or the fears around utilities using smart grids to switch off energy without the home-owner’s consent, to appreciate some of the difficulties which lie ahead. The debate becomes even more complex when metadata (data about data, such as phone call records), data aggregation and anonymising are taken into consideration.
Wherever the answer lies, it is unlikely to be found by trying to solve each problem individually. Instead it requires a profound rethink as to how we consider our new abilities to quantify, monitor and capture everything we say, do and touch. The information age has brought an additional dimension to our existence which we would not want to be without. We have moved, over the past 200 years, since the discovery of electricity and the capabilities of semiconductors, from seeing in black-and-white to colour. However, to continue the analogy, current legislative approaches are trying to apply three-dimensional thinking to a four-dimensional space. The UK’s Data Protection Act, laws around cybercrime, even areas such as intellectual property and ’digital rights’ all consider digital information as something separate, adding to, as opposed to augmenting, what has gone before.
Information is indifferent, even oblivious to our attempts to control it as an entity, a fact that the darker elements of our governments and corporations are exploiting, even as they profess the opposite. As are we all, potentially, as we watch Blackberry Messenger become the anti-establishment rioter’s preferred mode of communication, or participate en masse in click-rallies aimed at influencing corporations and governments, or benefit from Freedom of Information requests. As many have said, information wants to be free.
The information revolution has already changed the world, but in many ways we are still acting as though it hasn’t. This disconnect creates an opportunity for all, but more so for the powerful than the average citizen — it was ever thus. Stripping away the silicon-and-polymer trappings of our technologically advanced culture reveals an issue as old as the Magna Carta: that inadequate governance allows for a minority to act with impunity, even as the rights of the masses are abused.
Is there an answer? Yes there is, by no longer thinking about information as a separate element of our existence, but as an immutable part of us. Some technological circles, including gaming, social networks and the Internet of Things, already accept the notion of a virtual representation of something physical — an avatar, for example. Today, our virtual representations are fragmented and of poor quality, but already corporations such as Acxiom are looking to change that. Hackneyed phrases like “if the service is free, you are the product”, bandied around as if saying them often enough makes them acceptable, reflect how corporations already ‘get’ the notion of the virtual, exploitable self.
We all know that our very real lives are being intruded upon — we can feel it in our bones. Just as we would find it unacceptable to be body-searched for no reason, or for a telephone engineer to walk into our front rooms and start flicking through our address books, so are we experiencing the discomfort of having our online lives mined for information, or being ‘followed’ by over-zealous and poorly targeted banner advertising. To dislike feelings of intrusion is the most natural thing in the physical world, and so it should be in the virtual world.
The information revolution is far from over. New opportunities to breach privacy continue to emerge, with ethical consequences that go far beyond the questions of legality - such as the case of direct mail targeting the recently bereaved, or the now-banned ’smart’ bins which track people using their Bluetooth identifiers. We have not yet fully grasped that information about ourselves doesn’t just belong to us - it is us, and as such needs to be considered within the rights that we already hold as fundamental.
Our virtual, digital, quantified selves should be afforded the same rights as our physical selves. Until we get this, and national and international legislation reflects the principle across the board, we shall continue to be beaten down by the feeling that, in information terms, we are giving away more than we are getting. As we add layer upon layer of detail to everything we see and do, the level of discomfort will only increase until such time as we, as a race, reclaim our information-augmented humanity.
February 2014
02-23 – Making Big Big Bread
Making Big Big Bread
Some may have spotted that I have a thing about baking at the moment. As it happens, I also have a soft spot for the music of Big Big Train, and a repeated theme across the past few decades has been, believe it or not, beer.
By happy coincidence, the band members of Big Big Train appear to have a similar taste for the latter, to the extent of licensing a chocolate porter in their name, from the appropriately named Box Steam Brewery.
An obvious next step is to combine all three, isn’t it? To whit, I proffer below a step by step guide to making Big Big Bread. You will need:
800g of strong flour - I used 550g wholemeal and 250g white, to keep it light. If you use more wholemeal, you may have to add a bit (50-100ml) of water.
12g salt - I have some sea salt with seaweed in, which seems to make sense for reasons I cannot fathom.
12g dried yeast - not the “fast action” stuff
If it takes your fancy, a tablespoon or so of light linseeds and, potentially, some raisins.
2 bottles of Big Big Train beer.
For equipment, a variety of bowls - a smaller plastic one for measuring, a larger one for mixing. An accurate scale. A cheap plastic bowl scraper. A thermometer with a metal sensor you can poke in the bread. And the “English Electric: Parts 1 and 2” albums.

First, weigh out the flour and mix with the salt and linseeds. Make a well in the middle, and add the yeast, followed by one bottle of beer. Then push a bit of the flour from the sides into the beer mix - enough to make a liquid mush (the technical term is a sponge).


Cover with plastic, and leave for 20 minutes or so - until there’s been some sponge action - enough to make you think, “Ooh, look at that.“ You should be about finished with ‘Judas Unrepentant’ by now - great track. About an art forger.

Once you’re happy/bored, use one hand to mix the flour and sponge together, you’ll need the other to hold the bowl. Turn the rough dough out onto the surface and start kneading. Don’t worry if it’s a bit sticky; add a little water if it’s a bit tough.

Knead for ten or so minutes (pretty much ‘Summoned by Bells’), folding the dough towards you and turning 90 degrees. The time is less important than the result - you want to be sure the dough is stretchy and supple at the end, so that when you fold it over it is like a taut belly after a good meal.

Put the result back in the bowl and cover with plastic for a couple of hours or so at room temperature, until the dough has reached twice its size. Be careful with airing cupboards, as they can dry the dough. (You could always *gently* warm the beer to 30 degrees or so; I didn’t.)
At this point, sprinkle a bit of flour onto the top of the dough and “knock it back”: punch it down with a fist. Then scrape the dough out (using the oh-so-clever plastic thing) onto a floured surface.

You can cut the dough according to your needs - two thirds would make a standard loaf plus three rolls, or the whole lot could go into a large tin.
Flatten and fold each piece of dough - both sides into the middle, turn and do the same again (like a sheet). Flatten and repeat, leaving you with nicely rounded balls. Oh, stop it.
Now’s the moment where you can use the raisins, if you are making a second loaf or rolls. Roll out the smaller piece of dough with a floured rolling pin, then cover the dough with raisins before rolling it up like a Swiss roll. Roll out again and up again before forming back into a ball.

Then - here’s the clever bit - get the air bubbles out of the dough by stretching the ’skin’ of the dough and tucking it in underneath. Put one (floured) hand either side of the dough and stretch it towards the work surface, tucking and turning the dough at the same time. By now, you will be gasping for a drink. Which is where the second bottle comes in.

Aaaand - we’re back. Upturn the ball of dough onto one hand and pinch together the resulting seams before putting back in a bowl/on a tray and covering with plastic. Again. Leave for half an hour, for a second prove. Now you can shape the dough ready for putting into tins.

Oil the tins with a butter wrapper or an olive-oil-on-kitchen-roll combo. Flatten and shape each ball as before, then have a final shape by rolling it up while pushing in with the thumbs. Fold the ends underneath and pinch any seams before putting in the tin. Dust the tops with flour, then cover in plastic. Again.

Leave for fifteen minutes to rest before putting the oven on. Five minutes later, use scissors or a knife to slash, slice, cut or otherwise live out your vengeance fantasies on the top of each loaf or roll.

And into the oven they go. For about 25 minutes. If you have a thermometer, you want the bread to reach 94 degrees C, at which point it will be done - if wet dough comes out on the sensor, it needs a little longer. If the top starts to burn, turn the oven down and put tin foil on top.
And if there’s more room in the oven, you might as well use it!

And if the time says this, the bread may be very nearly ready.

Take the bread out of the tins and put onto a cooling tray to, er, cool.

Try to resist cutting off a sumptuous crust of freshly baked bread, smothering it in butter and biting into… oh, never mind. You’re done.

August 2014
08-12 – The Pit
The Pit
This much I know. There exists a dark pit in all of us, the blackest of black places, deep enough to feel bottomless. How fortunate the few who never have to fathom its depths, but most will, at some time and without warning.
Some enter never to leave, reluctantly languishing, having lost the energy to fight back. A tragic few lose the battle and pay the ultimate price. Others continue their struggle without a murmur, their despair and anguish visible only to those closest to them.
Those who are able may choose to ignore, or deny its existence, though the pit lurks within them as well. Many try to help to no apparent avail; some simply offer comfort and solace, which is all anyone can really do.
Eventually, after an age (the pit has no notion of time), its hold might loosen allowing light, once again, to shine into the depths. As the longest night gives way to day, for a while it may appear the pit is no longer really there. When it does return however, it is just as deep, just as black, just as indiscriminate.
All anyone can really hope, if lucky enough to emerge, is that moments spent outside the pit will grow and expand: happier hours, days, weeks spent without teetering on its brink or sliding, hopelessly, into its maws. Perhaps such times will extend such that, one day, the whole experience becomes no more than an unhappy memory.
When it returns, as it surely will for so many, may we all have the strength and knowledge that the experience can be but temporary, however permanent it feels at the time, and however disappointing it is to learn that the pit will never be completely vanquished.
Eventually, a fortunate few may nod in wry understanding of what it means to be human, imperfect, with baggage and with certain areas of conscious existence that can never be fully controlled, even if years pass before their deepest reaches manifest themselves again.
2015
Posts from 2015.
January 2015
01-01 – Climbing Aboard The Internet Of Things
Climbing Aboard The Internet Of Things
Climbing aboard the Internet of Things
Plan now for the IoT, or risk missing this fast train
Apr 13, 2015 5:00 am PDT
If the Internet of Things was a train, it would have sensors on every moving part - every piston, every element of rolling stock and coupling gear. Everything would be measured from levels of wear to play, judder, temperature and pressure.
The information would be fed back to systems which analysed it, identified potential faults and arranged for advance delivery of spares. Information about the state of the track, and considerations for future designs, would be passed to partners. The same feed could be picked up by passengers and their families, to confirm location and ETA.
Meanwhile, in the carriages every seat, every floor panel would report on occupancy, broadcasting to conductors and passengers alike. The toilets would quietly report blockages, the taps and soap dispensers would declare when their respective tanks were near empty.
Every light fitting, every pane of glass, every plug socket, every door and side panel would self-assess for reliability and safety. The driver’s health would be constantly monitored, even as passengers’ personal health devices sent reminders to stand up and walk around once every half hour.
The very fact that our train systems don’t yet incorporate all such capabilities is indicative of just how much potential the Internet of Things offers. Indeed, it is difficult to imagine a future where the above scenario doesn’t become commonplace.
As Kalman Tiboldi, Chief Business Innovation Officer at Belgium-headquartered spares and service company TVH, reports: “It’s not a question of whether, but when.” Equally, for many organisations the range of options appears so overwhelming, it can be difficult to know where to even start.
To resolve this conundrum, I was lucky enough to sit down with Kalman for a panel session at last week’s Cloud Expo conference in London, as well as with Adam Dunkels, founder and CEO of Swedish IoT integration company Thingsquare. Together we came up with a five-point plan:
First off, don’t be afraid to start small. Some of the best examples of IoT innovation are where a simple idea has far-reaching impact - such as the use of sensors in farm gates to detect whether they are left open, and send a message - for example via a simple SMS - if so. Network bandwidth clearly does not have to be a constraint: “We still see high demand for GPRS,” says Kalman.
Second is to exploit the power of brainstorming. “We can learn a lot from sites like Indiegogo,” says Adam. However, companies don’t have to leave all the zany ideas to the startups, particularly as the costs of IoT - sensors, hardware and software - continue to drop.
Third, the corollary to this point is to look for the highest value opportunities. Given that sensors and measures can be attached to just about anything, that doesn’t mean organisations need to attach them to everything. The output of any brainstorm should be a shortlist of options which can be given a hard-nosed assessment.
Fourth, with every IoT opportunity comes security risk. Not only can the IoT be hacked to cause disruption - and it will be - but the data it creates also has intrinsic, raw value. “More than ever, organisations need to recognise the value of the data they are creating,” says Adam.
Fifth and finally, start to think about how the IoT needs to be managed. Says Kalman, “Managing IoT devices will require new management tools, very different to mobile device management.” Some experience now will help organisations get ahead of the game.
These are early days: over coming years, new sensor types, new software platforms and ways of managing devices and data will emerge. As the IoT train starts to pull away from the platform and gather speed, there is no time like the present to start understanding what benefits it can bring.
01-01 – Data Deluge Poses Ethical Conundrums On Privacy
Data Deluge Poses Ethical Conundrums On Privacy
Data deluge poses ethical conundrums on privacy
We don’t yet know how the ability to capture and inspect more information will affect us all
Jul 16, 2015 7:30 am PDT
June was a good month for Data Protection in Europe, what with the approval of the draft EC law. With luck it may be ratified by the end of this year, to much applause from the ramparts of Brussels and relief from the general populace.
We will all be able to sleep easier in our beds in the knowledge that our privacy rights are protected. Armed with this sense of comfort, some of us will confirm our newly established sleep patterns by checking our Fitbits or Jawbones, uploading data via mobile apps to servers somewhere in the cloud.
Indeed, in our delight, we’ll share our stats via social media — probably using a Facebook login to avoid all that rigmarole involved in remembering usernames and passwords. At which point, of course, we have entirely given over any rights we might have had on the data, or conclusions that could be drawn from it.
Of course, what is there to be read from sleep patterns? It’s not as if we’re talking about drinking habits or driving skills, is it?
Don’t get me wrong, the law is very good as far as it goes. Its associated handbook is well thought through and considered, based on real-world cases, tested in court, which go to the nub of issues such as balancing protection with personal privacy. Such as the suicide attempt thwarted through the use of CCTV (a good thing), but then the footage being released to the media (a bad thing).
Indeed, the law is pretty comprehensive as far as it goes, with a fair amount of recourse should things go wrong - the right to be forgotten, for example, i.e. for data to be removed from particular databases. So, what’s wrong with it?
This is, absolutely, the information revolution and as such nobody has much of an idea what is going to come next. In such a fast-changing environment, framing the broader issues of data protection is hugely complicated; the complicity of data subjects, their friends and colleagues is only one aspect of our current journey into the unknown.
But do so we must. Where to start? A greater challenge could be considered in terms of aggregation, a.k.a. the ability to draw together data from multiple sources and reach a certain conclusion. Numerous demonstrations exist of how seemingly innocuous data sets have been used to identify specific individuals.
But even this doesn’t really tell the whole story, and neither could it. We are accumulating so much information — none of it is being thrown away — about that many topics, that the issue becomes less and less about our own digital footprints, however carelessly left. Looming without shape and form — yet — are the digital shadows cast by the analysis of such vast pools of data.
Profiling and other analysis techniques are being used by marketers and governments as well as in health, economic and demographic research fields. The point is we don’t yet know what insights these may bring, nor whether they might be fantastically good for the race or downright scary.
Examples are difficult to pin down — this is a journey into the unknown, after all — but in essence reflect the question, “What would you do if you found you only had five months to live?” In this context, more important would be, what would your insurer do? Or your housing association? Or your travel agent?
The Act does provide for cases with legally negative ramifications (which makes sense, it’s a legal document) but it doesn’t take into account situations operating within existing laws which nonetheless erode personal rights. A seemingly innocuous data set might be quite revelatory — we know that soil data can be used as an indicator of vine disease, for example. But what if it revealed your smoking habits?
While you might be able to ask for your own data to be removed from a data set, you couldn’t ask the same about data relating to the soil in the field next to your garden. This is the real danger caused by aggregation - that it is possible to operate entirely in the shadows cast by the context of human behaviour, without treading on the toes of anyone’s ‘personal’ information.
Equally, the draft law is structured on the basis of an exclusionary, “if in doubt, take it out” model — this doesn’t resolve the potential for prejudice caused by the absence of a necessary piece of data, or even an entire data set. We may need a “right to be remembered” in some cases, with an inclusive response to an inaccurate ‘insight’.
I am wary of appearing like Chicken Lickin’ here — I don’t believe the sky is going to fall in, and I don’t want to stand in the way of innovation. However I do believe that our current push to create larger and larger data sets will have consequences, both better and worse, and data protection is only one of the tools we will need in the legal tool shed.
One such tool is an increasing requirement for metadata. It should not be enough to know that I was moving at 100 miles per hour, having consumed five units of alcohol, if indeed I was on a train rather than in a car. A little extra contextual information is vital. As the number of sensors around us flourish, they should be fingerprinting their own data so that it can be traced to the source.
Data needs to know its own provenance and if it cannot, it should potentially be discarded — this could be considered the missing 8th principle of Privacy By Design, which implies that designers can’t be held responsible for subsequent use of data.
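By way of illustration only, such fingerprinting might look something like the following minimal sketch, assuming a simple JSON payload and a shared per-device key (the field names and key are hypothetical, not any real standard):

# Minimal sketch: a sensor reading that carries its own provenance
import hashlib, hmac, json, time

DEVICE_KEY = b"not-a-real-key"   # assumed per-device secret, provisioned at manufacture

def fingerprint_reading(device_id, value):
    payload = {
        "device": device_id,
        "value": value,
        "timestamp": time.time(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return payload   # downstream systems can verify when and where the data originated

reading = fingerprint_reading("speed-sensor-01", 100.0)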
A second tool is, to coin a horrible yet fitting phrase, data-driven legal agility. The Information Age is in a brainstorming stage, as businesses try to combine data and services in new and interesting ways and see what insights emerge. The business mantra is “be agile” — as Edison once noted, it’s not the 10,000 failures that matter, it’s the one success.
That one flash of brilliance might create a hitherto unknown, completely legal, public, non-specific, yet damaging stereotype, such as “cat owners are dangerous drivers”. Once such an insight has been discovered the damage may already be done. As we become better at data analysis such micro-prejudicial examples will become the norm, rather than the exception.
As a result, if businesses need to recruit data scientists, so do our judiciaries and our lawmakers. Our legal systems need to operate in as agile a manner as our businesses and startups, quickly considering the consequences of the retrospective application of an unexpected discovery.
Perhaps the biggest beef I have about the data protection law is that it still treats data in a one-dot-zero way — it is the perfect protection against the challenges we all faced 10 years ago. Over the coming decades however, we will discover things about ourselves and our environments that will beggar belief, and which will have an unimaginably profound impact on our existence.
Like water, data will engulf everything that we do - it cannot be held back. Like fire, it will spread uncontrollably, however much we penalise those who drop the occasional lighted match. Like the air on a cold day, we will breathe it, and it will retain an image of our breath which can be captured, with or without our knowledge.
Against this background we have some fundamental questions to consider — accountability and responsibility, exploitation and recourse, personal and public protection. However, too many elements of existing law are based on a balance of past probabilities and, in the absence of hard data, an underlying acceptance as to what constitutes right and wrong.
This model is crumbling to dust in front of our eyes. The analysis threshold is lowering, opening the door to a data economy that trades in shadows, and which will continue to grow. Protecting data is not simply “not enough” — in a world where anything can be known about anyone and anything, we need to focus attention away from the data itself and towards the implications of living in an age of transparency.
01-01 – Why Do We Bother With Technology?
Why Do We Bother With Technology?
Why do we bother with technology?
Everything is becoming digital but the non-digital world gives us our humanity
Jun 16, 2015 5:00 am PDT
Give us a smile. Not a big, beaming smile but… let’s say, a slightly embarrassed one, like you remembered doing something you shouldn’t. Or an eyes-widening, still-can’t-quite-believe it smile, about a remembered piece of unexpected good news. Or a wistful shrug of a smile, as if you knew it was too good to be true.
So many smiles, so subtle, so different, but each one distinctly recognisable. The human face has some 43 muscles, from which we can represent a plethora of emotions to others, and indeed to ourselves.
The wonks who study such things have distilled our expressions into six categories — happiness, surprise, sadness, anger, fear, and disgust — but research a year ago acknowledged that at least 20 distinct facial states exist, distinguishable by computer at least. According to the report, computer analysis revealed compound emotions - for example, happy surprise was different to angry surprise.
Which, it has to be said, will come as no surprise to any human being. A kind of technological hubris exists around all such studies, which suggest that only compute-proven features of any complex system are real, or at least, worth thinking about. While such an attitude has its place, its nemesis is the law of diminishing returns — that is, any over-simplistic response will eventually prove inadequate.
And so, to digital. That overloaded, over-used word which seems to have taken the tech-business world by surprise. Not that long ago I was in a room full of journalists from a variety of countries, speaking to a software company that had ‘digital’ written through its marketing materials like a stick of rock. One journo, a Russian, asked whether it was appropriate, “After all, isn’t it all digital?” resulting in a “Well, yeah” explanation from the otherwise well-briefed spokesman.
Digital is simply the latest attempt to define the world in technological terms. Sure, mobile and social technologies have fundamentally changed the way we interact; Moore’s Law and dark fibre have created a massively scalable processing bedrock upon which we can perform phenomenal numbers of calculations. It’s an absolute thrill-ride for anyone involved in tech; a genuine, global revolution.
But this does not mean that the old, ‘analogue’ ways are done with, far from it. For one reason, the world is infinitely more complex than the processor power available to model it. While we are only scratching the surface of tech’s potential, we are also still only scratching the surface of the complexity we are dealing with. But even more fundamental is that often-ignored question — why are we bothering with technology at all, in terms of the real benefits it brings?
One way we can look at tech is that it gives us super-powers — the ability to communicate across long distances or leap over tall buildings — capabilities that should not be ignored. In business terms they are directly and repeatedly leading to disintermediation, as some unassuming startup recognises that it can do something, supply a product, reach a customer base, in a way previously ignored by the incumbents. We’ve seen it from Amazon to, well, Amazon.
But what the information revolution has not yet done (and so far has not shown any potential to do so) is change what it means to be human, nor catalysed any desire among members of our august race to do away with the largely non-digital world we inhabit. A cup of espresso, served on a veranda on a balmy day in southern Italy, still gives more pleasure than watching the same in a YouTube video. The chances are, it always will.
Indeed, when things become too digital, people are just as likely to respond with a backlash. Consider the resurgence of vinyl records, for example. Retail analysts report that the high street is becoming a social space, where people like to meet and communicate, and that shops which offer both keen pricing and good service are more likely to thrive (a great UK example is John Lewis, with its JLAB initiative). Perish the thought that the shops of the future will be the ones that balance the power of digital with traditional mechanisms for product and service delivery!
As I write this, in a cafe on Denmark Street in London, a man walks past with a notebook in his hand. Around the notebook is an elastic band, perfectly tensioned using the laws of physics to hold the pages together without being too taut to remove. In much the same way, when the digital dust settles (and assuming we don’t mess up and turn the whole planet into grey goo) we will all get on with being human, without thinking too much about the technologies that surround us. Who knows, if we are still around, some of us might smile, wryly, in the knowledge that computers can finally recognise the subtlety.
01-01 – With IoT, We Are Only Just Brainstorming
With IoT, We Are Only Just Brainstorming
With IoT, we are only just brainstorming - what comes next?
The Internet of Things is seeing frenetic activity - some of it useful and some silly
Feb 23, 2015 6:00 am PST
Two announcements caught my eye recently, and not just because they both came from French companies. The first, for comedy value alone, is ‘Belty’ — a ‘smart’ belt that can sense when its wearer is being a bit too sedentary, or even when it needs to loosen during a particularly hefty meal. While I found the idea initially intriguing (who hasn’t considered unbuckling a notch) I then realised I have one of these already — it’s based on the elastic principles of, well, elastic. The more tension that is applied, the more it releases. Crazy stuff.
Perhaps more useful though less probable (in that it doesn’t yet exist) is ‘Cicret’, a bracelet-based projector which can shine your Android screen onto your wrist. Various reports have pointed out the more obvious weaknesses in this model, not least that it needs a perfectly sculpted forearm and ideal lighting conditions. All the same, there might still be something in it, even if your wrist is not the perfect rendering surface for your screen estate. After all, the icon-based approach was designed to make the most of a limited-size, flat display. Chances are we’d think of something else when projecting onto a more flexible, less reflective surface.
In both cases one has to ask about the contexts within which these ideas are being created. The crucible of innovation, it would appear, is an environment warm enough to have bare forearms, where people spend a lot of their time sitting around and eating. Or perhaps I am just projecting! Whatever - more important is how someone thought these use cases were sufficiently compelling to be a product idea, and that others saw them as viable enough to add them to the general buzz.
The fact is, anyone can now pick up a few bits of technology and ‘invent’ a whole new product category. We are right in the middle of the Internet of Things brainstorming phase and, as everybody knows, there is no such thing as a bad idea in a brainstorm.
“The amazing thing is that we haven’t invented anything new,” remarked Cicret founder Guillaume Pommier. “We just combined two existing technologies to create something really special.”
It is impossible to keep up with all the combinations. A few years ago I remember an article about how innovations have come from taking two disparate ideas and linking them together, such as the microwave oven. Today everything in the physical world can be equipped with sensors, interconnected and remotely controlled, making such examples legion — when I ‘predicted’ the smart plant pot a while back, I didn’t expect it to be presented at this year’s CES. As advocates of Geoffrey Moore might add to their marketing, “Unlike other smart plant pots…”
Equally clearly, it doesn’t make sense to talk in terms of an industry or market. People are trying to do so, of course. Analysts are setting out their stalls, talking about market sizes and so on. History suggests that most predictions will be wide of the mark or too specific to be useful. They continue because it is a human trait to need to show we have thought about things such as Things. Just as we tried to size the nebulous market for Cloud, or the overwhelming, though vague, tendency for Data to be Big.
All the same, it is possible to gauge where such trends are taking us. The most important criteria are economic — the spin-off impact of Moore’s Law on price points, a.k.a. the level at which something becomes affordable. For example GPS-based pet monitors have been available for a while, albeit bulky and expensive models. Now that they are reaching tens of pounds, they make sense.
Also at the ‘tens of’ mark is the cost of a multitasking compute platform, as illustrated by the Raspberry Pi. Here the threshold is about familiarity - while many could program a lower-level device such as an Arduino, it may be seen as a step too far. Better to give someone a windows-icon-mouse-pointer interface, some familiar open source (therefore free as in both speech and beer, as far as developers are concerned) software, online tutorials and community support, and away they can go.
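To make that threshold concrete, here is a minimal sketch of the kind of first program a Raspberry Pi newcomer might write, assuming a Pi with the gpiozero library installed and an LED wired to GPIO pin 17 (the pin and wiring are illustrative assumptions, not instructions):

from gpiozero import LED
from time import sleep

led = LED(17)  # assumes an LED connected to GPIO pin 17

# Blink forever: a complete, working 'Thing' in under ten lines.
while True:
    led.on()
    sleep(1)
    led.off()
    sleep(1)

The Arduino equivalent is hardly longer, but getting there involves a separate toolchain and a C-like dialect with no desktop to lean on, which is exactly the familiarity gap described above.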
So we are currently in a brainstorming phase, now that familiar technology has reached an accessible price. But what can we say about two years’ time? First that the costs will have dropped another order of magnitude, opening the floodgates to yet more innovation on the one hand, or techno-tat on the other. Yes, it really will become quite difficult to lose your keys, or indeed your child.
In parallel, we can expect a level of consolidation. Current Internet of Things examples tend to be proprietary, closed-community or application-specific, built on open platforms and protocols but not sharing common management building blocks beyond connectivity. By management, think data management, configuration management, event management, service management, security management, dashboards and rules — the range of capabilities we need to enable the device swarm to be husbanded.
This absence of a common management framework is not through lack of trying — numerous providers such as Thingworx and Xively are positioning themselves as offering the backbone for the smart device revolution. No doubt attention will swing to a handful of such platforms which will then be adopted wholesale by the majority, before being acquired by the major vendors and, likely, almost immediately being open-sourced.
The first effect will be that we will all know more about ourselves, our stuff and our environments, for better or worse. The technology to determine whether an aged parent has had a fall will be misused in the accelerometer-based version of happy slapping — you heard it here first. Our brand-obsessed consumer culture will create, for our benefit, the cyber equivalent of the Jack Russell, snapping and yapping at our heels with product suggestions.
Such knowledge brings power: the bad guys will continue to do bad things, even as the good guys get new capabilities for dealing with them. It also threatens our abilities to structure and deal with the world around us - from current fears of internet addiction we shall move to the psychological ramifications of sensory overload, even as our offspring learn to live in a world where everything is measurable.
Should we worry? As humans, most of us seem to have a propensity to run with such changes. Indeed, given what we have dealt with over the past few decades, it is unlikely that another order of magnitude will make any difference. It is as if we can think logarithmically, or indeed in terms of octaves — the song will remain the same, even if it is playing at a higher frequency. In the meantime, we can only hang on, enjoy the ride and see what they think of next.
August 2015
08-05 – How to choose wine
How to choose wine
For anyone that was confused about this complex subject, I’ve prepared a handy guide.

08-12 – Digital Tech Lets Musicians Play On
Digital Tech Lets Musicians Play On
Music is not dead!
Why digital technology may be the best thing that happened to the music business
On the surface, it’s not hard to see why there should be consternation about the demise of modern music. It is an established fact for example (click for a chart) that US recorded music revenues are down by more than two thirds since their turn-of-the-millennium peak of $71 per capita, to $21.50.
The fault has been squarely placed at the door of digital technology, first in terms of CD ripping, then file sharing and torrenting, and most recently streaming. But just how much of this view is hype, and what is reality?
Myth 1: Technology is killing music
Reality 1: Technology is one of many factors changing the music landscape
The impact of filesharing has often been cited as overstated — “Empirical work suggests that in music, no more than 20% of the recent decline in sales is due to sharing,” says a paper from Felix Oberholzer-Gee and Koleman Strumpf, for example. According to music economist Peter Tschmuck, delve into the figures and a more complex picture emerges, one rooted in how music supply was shaped in the 1980s and 1990s.
First came marketing. Music sales were stagnating — “Perhaps records, as a mass medium, have now reached the saturation point,” wrote Pekka Gronow in 1983. Not coincidentally, use of tape recorders was on the rise, first to capture singles from the radio and then to record whole albums.
In response, as the music biz became better at understanding demand, it increased its focus on a reduced set of genres and indeed, artists. Each label had no more than half a dozen performers that it would put all of its weight behind, and many less profitable bands and musicians were removed from the rosters.
A second, equally business-smart move was to shift production away from singles towards more profitable albums, with singles selectively released as a way of testing the waters for demand. As a consequence, albums were often seen as being “more filler than killer”; meanwhile, sales of compilations soared.
A bonus was that technology — in the shape of the CD — helped catalyse a massive upswing in music industry fortunes and cemented the role of the album. “The CD-boom was mainly fueled by the re-release of repertoire still existing on vinyl. The superstar-orientation as well as the CD format ensured that the album became the main source of sales in the industry,” remarks Tschmuck. “The ‘single market’-thesis contributes a much better explanation for the declining sales in the recording industry than the ‘filesharing’ thesis.”
CDs became a temporary injection of cash into an already broken system. This influx of capital, coupled with controlled restrictions on demand, remained unchallenged until 2004, when digital formats once again enabled consumers to access music they actually wanted to listen to. By 2008 single sales had once again overtaken album sales, and the industry was in freefall.
So we return to the two-thirds drop in recording industry revenues. Which, of course, would result in a two thirds drop in artist incomes. Wouldn’t it?
Myth 2: Music had a golden age
Reality 2: Making music has never been the easy path
Establishing musician incomes is a recognised challenge, due to a lack of industry transparency around what artists actually earn. Says Imogen Heap, “For years I’ve been so frustrated with the deep opacity of the music industry.” What we can do is look to census information, such as that reported in the 1996 Artists in the Work Force study from the US National Endowment for the Arts, comparing earnings across the decades.
Median earnings increased from a paltry $2,958 in 1969, to $5,561 in 1979 and $9,900 in 1989 — still not enough for a mansion, but certainly on the rise. Variations in mean earnings — looking at how earnings distribute across the group — were particularly telling, however. To quote the report:
“Mean earnings were higher than median earnings, which showed some individuals’ earnings were much higher than the rest of their group’s earnings. Musicians’ and composers’ mean earnings were $16,233 in 1989, an increase of 105 percent over their mean earnings of $7,923 in 1979. This growing spread indicates that the earnings of the highest paid members of the professions, perhaps “superstars,” increased faster than earnings of the profession as a whole.”
In other words, a smaller number of artists were pocketing a larger proportion of the income, corroborating industry constraints on supply to the detriment of the larger pool of artists.
And how big was this pool? In 1970, there were 99,533 earning musicians and composers in the USA — equating to 68% of all performing artists. In 1980, the number had jumped to 140,556 but in 1990, it had only increased to 148,020, which was 53% of all performing artists — a drop of 15 percentage points. So, while America’s creative economy may have been growing, musicians were being stymied.
In 2000 the figure was 170,015; it then dropped in 2005 to 169,647 (based on an updated AWF report) before a subsequent, perhaps significant, rise in 2010 to 189,510 according to the more recent US census. In 2005 median income had increased to $22,600, still not much to be writing home about.
Life was tough for anyone that wanted to make it in music. A 1980 artists employment survey found that, “of those with second jobs in 1980… over a fourth of musicians were in sales, clerical or service jobs — jobs with a history of low pay and benefits.”
Earnings came from a range of musical sources, only a small proportion of which has ever come from recordings. According to a recent Future of Music survey, 5.8% of revenues for musicians as a whole comes from recording - a figure which is decreasing. Meanwhile 28% comes from playing live music. To relate the two sets of figures: 5.8% of the 2005 median income of $22,600 is roughly $1,300, so a musician earning $2,500 from recordings that year would already have been above the average.
Understanding these figures starts to get us somewhere. Overall, while musicians were never paid that much, people who log their role as a musician are earning more than they did in the past. As we shall see however, this group is only part of the story.
Myth 3: Musicians have never had it so bad
Reality 3: The musical genie is out of the bottle, and artist revenues are up
So, the 80s and 90s were all about the industry trying to limit access to artists. The model worked very well - by focusing on (literally) half a dozen artists, a label could get more bang for both its recording and marketing bucks. The advent of digital came like a scene from The Dam Busters — now that the constraint has been removed, we are seeing a massive diversification, an explosion of culture.
To understand just how massive we can look at YouTube’s international reach, bringing artists as diverse as Korea’s Psy and Morocco’s Hala Turk to almost-immediate international attention. Indeed, in this day and age it is difficult to know whether a song a kid makes in their apartment is any different in consequence from a professionally recorded one. Perhaps it doesn’t actually matter.
To appreciate the positive impact of this, we can look at collections societies and Professional Rights Organisations (PROs). Again, figures are not immediately available but the American Society of Composers, Authors and Publishers (ASCAP) has seen its membership increase exponentially, from 45,000 members in 1999 to 172,000 members in 2009, then 460,000 members in 2013.
Today ASCAP has over 540,000 members, and meanwhile, competing PRO Broadcast Music, Inc (BMI) represents 600,000 members. Recalling that the total number of musicians in the US was 189,510, quite clearly the rate of growth of rights holders far surpasses that of ‘professional’ musicians.
Despite taking a recession-based hit, royalty payments from BMI have been increasing year on year since 2000. In 2014, the organisation distributed some $850 million, up 3.2% on the year before. Meanwhile, ASCAP distributed $883 million in 2014, having collected over a billion dollars in revenues.
While the pot may be increasing, so is the pool of people drinking from it, creating a challenge for anyone who wants to make money from music. Is this actually a bad thing? The capability to create music, while a great gift, is also an inherent part of humanity — it was Darwin himself that suggested our musical abilities, shared with other animals, came before our linguistic abilities.
As an inevitable consequence, the ‘market’ for music will by its nature be over-supplied; equally the demand side of the equation will set a low satisfaction bar, to the potential disappointment of more talented and better trained musical professionals. We shall return to this point.
These factors have a direct impact on how musicians can make money, as Barry Dallmann illustrates at the PlayJazz Blog. “If a band turns up and makes a mess of playing ‘twinkle twinkle little star’ 30 times and they fill the place, they’re going to get booked again. If a band turns up and plays the most incredible jazz set ever performed anywhere in the history of the world and they bring 5 people through the door, they won’t be getting a repeat booking.”
The same is now true online. Like it or not, the genie is out of the bottle — the constraint has been removed on who can record and deliver music. But will people pay for it?
Myth 4: People want music for free
Reality 4: People are paying for music in all its forms
About $109 was spent per capita on music in 2014, 50% of which was on live music. That puts total US spending on music at over $30 billion — interestingly, according to the previous year’s study, 75% of this came from 40% of the population. And there could be more where that came from, the study reports, if the right incentives are there.
In the US, discretionary spending in 2011 — classed as “purchases of items such as tobacco, alcohol, education, reading, personal care, apparel, dining out, donations, household furniture and numerous forms of entertainment” — was calculated at $12,800. Despite a reported sluggishness of recovery of discretionary spend since the recent recession, it’s fair to suggest that the amount spent on entertainment will remain around 15-17%, just over $2,000.
The debate around free music has moved from illegal file sharing to the still-growing (and evolving) market for streaming, so what of it? A similar lack of transparency exists on streaming revenues as to musical artists’ overall revenues, so much of what we have to go on are the sometimes hilarious royalty statements coming from Spotify et al., measured in pennies.
A useful comparison is between digital music streaming and the older model it bears most similarities to — terrestrial radio. Comparisons of “spins versus streams” are hard to find, but David Touve, Assistant Professor at the University of Virginia, has had a fair stab in the UK and the US. The bottom line is that in the UK, costs per listener are of the same order of magnitude, at 0.04 pence per spin and 0.05 pence per stream, though artist Zoe Keating’s figures suggest the per-stream rate is higher, at 0.3 pence.
In the US, payments per spin are roughly half while payments per stream are about the same, but there’s an added, quite astonishing factor: while songwriters and publishers are paid, recording artists have never — that’s never — been paid for their songs being played on terrestrial radio, due to a long-standing loophole in legislation which sees a play as a ‘performance’.
A bill was introduced in April of this year to change this - unsurprisingly, industry lobbyists have fought against previous attempts to change the situation, arguing that they would put smaller radio stations out of business, and that “they simply could not afford to pay performers.” In no other industry would this be seen as remotely acceptable.
While streaming is being seen as the industry’s current nemesis, artists seem to be making less of the streaming issue than corporate players. Comments Zoe Keating, “the dominant story in the press on artist earnings did not reflect my reality, nor that of musical friends I talked to. None of us were concerned about file sharing/piracy, we seemed to sell plenty of music directly to listeners via pay-what-you-want services while at the same time earn very little from streaming.”
As the figures suggest, there is clearly money to be made from music as a whole. Live music is seen by many as a counterbalance to falling recorded music sales — revenues for the former overtook the latter in 2010 and have increased steadily ever since.
It’s worth returning to that $2,000 figure, which is the amount of money a US consumer is prepared to spend per year on things that make them feel good. Every month, the American wallet has $166 that could be spent on musical outputs. In other words, music has the opportunity to duke it out with other forms of entertainment — and, as studies indicate, it stands a good chance of winning.
Myth 5: We are in music’s end of days
Reality 5: Music has never had it so good
Are we facing the demise of music? Quite the opposite, as technology’s power to dis-intermediate is levelling the playing field.
Over the 1980s and 1990s, the music and media industries worked together to restrict music supply on a massive scale, delivering a series of carefully prepared superstars to specific markets in order to maximise revenues. While not directly corrupt, the creation of an artificial bottleneck essentially removed the oxygen from western musical culture, to the detriment of the broader pool of artists and music as a whole.
Those musicians not selected for stardom have borne the brunt in the form of lost revenues, but the arrival of digital technologies has removed this bottleneck. As a consequence (and to paraphrase Paul McCartney, you’d have to be blindfolded riding a camel backwards across the desert not to have noticed), the recorded music industry is in trouble.
Of course there are a lot of people involved in the music industry, and they have been doing a lot of jobs. But that doesn’t mean they are the right jobs today, if they ever were — the fact that artists saw less than 10% of the revenues even in the heyday of the biz is all anyone should need to know.
More importantly, many previously well-off artists are seeing their revenues fall; at the same time, others are looking for new ways to monetise their art. “While I would love to get paid more for records I, long ago, gave up chasing that particular ghost and have looked for money elsewhere,” said one rock guitarist.
Many new and aspiring musicians are also fearful of the change, though how much of this is due to the maintained hope of a ‘recording contract’ is difficult to say. Any history of the recorded music industry reads like a series of broken promises and scams. Even the artist-friendly BMI was set up in response to ASCAP’s perceived overcharging of artists, and in no other industry do you need forums such as “Stop Working for Free”.
Making music was never the easy path. Remarked New York jazz drummer Craig Holiday Haynes, “There’s a joke that asks, ‘What’s a musician without a girlfriend?’ The answer: Homeless.” For a select few, the recorded music industry offered a golden ticket to fame and fortune but the majority had to soldier on.
All the same, music artists have a genuine, unprecedented opportunity. Making money from music needs either a contract with someone who will pay, or it needs direct access to the source of income — the consumers and fans. The former model suggests stability and longevity, but the long-term trade-off is a greater sacrifice than many artists realise at the outset. “Like every band, we would have signed for nothing, given the opportunity,” says Pink Floyd’s Nick Mason.
The latter suggests a harder, but ultimately more sustainable path and, what is more, digital technologies provide the tools for engaging with, and growing, a fan base. Notes jazz musician Barry Dallmann, “The internet gives us an unprecedented opportunity to go straight to those potential fans and put the music in front of them.”
Right now, a new industry land grab is putting musicians last — again. We are seeing some new players, and there are some technological and business models which clearly don’t act in the artists’ interests. But others do and innovation will continue: for example the use of bitcoin-based models to create a financial conduit directly from consumers to artists.
Such models offer hope, but they will doubtless be fought against by the industry. As comments George Howard, “Bitcoin can’t save the music industry because, the music industry will resist the transparency it might bring.” Elaborates Ashton Motes at Dropbox, “Even indie labels – it’s not clear that they’d be willing to disclose who makes what, and what people sell. The whole industry is driven on smoke and mirrors.”
While musicians can benefit from financial advances, recording facilities, distribution mechanisms and so on, the music itself was never meant to have middlemen. It is a part of our culture, embedded deep in the most primitive parts of our brain, and profoundly important to our existence — no wonder people worked out that controlling musical supply would yield significant returns.
With digital technologies however, those days may finally be over. While many are saying that technology is damaging the music business, it may well be the best thing that ever happened to the business of making, delivering and even monetising humanity’s musical talents.
08-16 – Big Big Train, Kings Place 15 August 2015
Big Big Train, Kings Place 15 August 2015
Big Big Train, the band with the dumb name, playing nostalgia-laden music, seemingly derived from some forgotten glory years.
But.
Big Big Train, a symbol of determination, of uncompromising musicianship, of continuity.
A decades-old band which has stuck to its guns and aspirations, with paltry obvious reward.
An international collective of musicians, brought together by fate, the Internet their studio.
A spark, fanned to a flame, air drawn through tightly formed forums and online communities.
A reluctance to perform, not through lack of talent but the financial risk of delivering without compromise.
A spotlight shining from a benign corner of the media, whispering encouragement.
A venue found, a weekend fixed. Arrangements made, rehearsals undertaken.
Big Big Train, live on stage, once, twice, three times a masterclass in musical prowess, a lesson in humility.
And a reminder.
Fame and fortune glint like diamonds in the depths, tantalisingly beyond reach, but for a few.
For others the journey is longer but its rewards timeless, new strands weaving into our shared history.
Last night’s gig was a quiet triumph. Uncle Jack would be proud.
08-21 – The nature of connectedness
The nature of connectedness
It was my son, Ben, who first alerted me to the works of Temple Grandin, an autistic agricultural equipment designer who has written extensively about both her own experiences, and our general understanding, of the autistic spectrum.
One area she highlighted was that the human brain is in a constant state of redevelopment. Where neural pathways are underfunctioning or otherwise blocked, other connections get made. These biological adjustments to the circuitry of the brain enable signals to be re-routed, directly affecting a person’s cognitive abilities.
It isn’t just autism where synaptic re-routing takes place. In a recent conversation, a neighbour who works in rehabilitating drug addicts explained new research showing that addiction has physical consequences — essentially, new pathways are created to reflect the ‘normality’ of drug use. Once made, pathways cannot be told to cease operating, which starts to explain why addicts can’t “just stop”.
The parallels between addiction and autism don’t stop there. Temple Grandin has highlighted the importance of learning social skills for people on the autistic spectrum, even if this means operating outside their comfort zone, as such skills enable people to interact and function in society.
“In the 1950s, social skills were taught in a much more rigid way so kids who were mildly autistic were forced to learn them. It hurts the autistic much more than it does the normal kids to not have these skills formally taught,” she remarked.
Meanwhile UK journalist Johann Hari, himself on a journey out of a pit of his own creation, was researching drug addiction. His findings surprised him — that more isolated people were more likely to become, and stay addicts. “What I learned is that the opposite of addiction is not sobriety,” he commented. “The opposite of addiction is human connection.”
Given how the technology revolution has made us more connected, and yet more isolated than ever, both Temple Grandin and Johann Hari’s observations may be of profound importance. External connections are vital to our well-being, for sure, and these may well be reflected in synaptic connections which, once created, cannot simply be un-knotted.
Perhaps we shall find that our belief systems also exist as neural pathways, which could explain why the stances we take are so difficult to change — our views may, quite literally, be hard-wired into our brains. Changing minds may also require modifying the cell structures upon which they depend, which cannot take place on demand.
If we do learn that we are what we think, it will have profound consequences for how we view such difficult topics as addiction and indoctrination. On the upside, perhaps it will also give us a better understanding of humanity’s all-too-frequent inability to act rationally.
Just a thought — or is it?
08-21 – Toot
Toot
In the pocket of my dog-walking coat was a small Lego trumpet. I vaguely recall how it had got there: it had been glinting on the road, I had reached down to pick it up, as much to see what it was as anything. Unthinkingly, I had pocketed it. There it had stayed, buried in fluffy detritus.
The road trip to Edinburgh had been long, but worth it. We stayed with friends who had just moved North, their cottage overlooking the sea, a microclimate keeping the storms at bay. Happy days seeing the sights and walking the dogs.
We had planned the way back to be punctuated with places we had always wanted to visit and people we would never otherwise see. The first stop was Lindisfarne, the long drive along the spit windswept and bleak.
We parked and walked, backs hunched against the weather. The air was damp, a light, misty rain wisping back and forth. We turned a corner and started towards the abbey, now a ruin, its forlorn structure silhouetted against the darkening sky.
There in front of us, some players were unpacking their instruments. They had travelled from Germany, we were told later, a church band on tour, drawn to play on the island. And play they did, the soaring notes of brass weaving in the wind, man and nature intertwined.
As we watched, mesmerised, I put my hands in my pockets. My fingertips fell upon something; for a moment I wondered what it was, then I remembered — that plastic trumpet. Smiling, I put it to my mouth and said, “toot.”
And in that moment, all was right with the world.
October 2015
10-20 – What’s a blockchain? And is it heading for prime time?
What’s a blockchain? And is it heading for prime time?
If, like me, you have been feeling slightly bamboozled by all the noise around blockchain, I thought it might be worth putting together a brief primer on what blockchains are and how they might be used.
Blockchain started out as a “system of record” for Bitcoin. Simply put, if I give you a Bitcoin, how can the transaction be verified as having taken place? The answer is for a third party to oversee the creation of a block — a package of data containing not only this, but multiple other transactions and some ‘other stuff’ to enable the authenticity of the block to be proved.
As it happens, transactions don’t have to involve the transfer of bitcoins, they could represent any event — say (and why not), a declaration of undying love. Once the transaction has been added to the system of record, it is duplicated across every computer storing a copy of the blockchain.
Indeed, any piece of information can be captured and stored as a blockchain event: once created, the record will exist for as long as the concept of the blockchain exists.
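For readers who like to see the moving parts, here is a minimal, hypothetical sketch in Python of that chaining idea: each block records the hash of the block before it, so tampering with any earlier record breaks every later one. Real blockchains add proof-of-work, digital signatures and distributed consensus on top, none of which is shown here.

import hashlib
import json
import time

def make_block(transactions, previous_hash):
    # Bundle the transactions with a timestamp and the previous
    # block's hash, then fingerprint the whole bundle with SHA-256.
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A toy chain of two blocks, including a non-monetary 'event'.
genesis = make_block(["a declaration of undying love"], previous_hash="0" * 64)
latest = make_block(["Jon sends 1 bitcoin to a friend"], previous_hash=genesis["hash"])
print(latest["previous_hash"] == genesis["hash"])  # True: the blocks are linked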
Equally, the virtual world has room for more than one blockchain. Bitcoin has its own, and other crypto-currencies have theirs. Some blockchains (such as Ethereum) were founded to store information about both transactions and what has been termed ‘smart contracts’ — that is, programmable code that defines when a transaction should take place, or links to where and when an item was created.
As a result of both their indelibility and programmability, blockchains have been seen as a way of managing a whole range of situations and transactions: money transfers, of course (crypto- or traditional currency), but also cases such as preventing forgery and pharmaceutical fraud, where an identifier associated with an item (a painting or a drug) can be proven to be correct.
A particular area of interest is the arts, as blockchain mechanisms enable transactions to take place directly between artists, consumers and other stakeholders. As cellist Zoë Keating describes: “I can imagine instant, frictionless micropayments and the ability to pay collaborators and investors in future earnings without it being an accounting nightmare, and without having to divert money through blackbox entities like ASCAP or the AFM.”
In other words, if you want to listen to a song you have a mechanism which enables me to directly and automatically pay the artist, and enables the artist to set the price, then directly and automatically pay other people involved. When used in this way the whole process, and resulting transactions, can become completely transparent and verifiable.
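As an illustration only (the names, percentages and price below are invented, and a real smart contract would encode this logic on-chain rather than in application code), the rule being described amounts to an automatic split of each payment between everyone involved:

def split_payment(amount_pence, shares):
    # Divide one listener payment between the artist and collaborators
    # according to pre-agreed shares; shares are assumed to sum to 1.
    return {payee: round(amount_pence * share, 2)
            for payee, share in shares.items()}

# Hypothetical example: a 50p pay-what-you-like purchase, split with
# no intermediary taking a cut.
shares = {"artist": 0.70, "producer": 0.20, "guest_player": 0.10}
print(split_payment(50, shares))  # {'artist': 35.0, 'producer': 10.0, 'guest_player': 5.0}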
Such mechanisms have the potential to deliver a new era of fairness in terms of how artists are recompensed, thinks Imogen Heap, whose new single was released last Friday with a Bitcoin pay-what-you-like mechanism. “I dream of a kind of Fair Trade for music environment with a simple one-stop-shop-portal to upload my freshly recorded music, verified and stamped, into the world, with the confidence I’m getting the best deal out there,” says the artist.
Numerous challenges lie ahead, not least that current blockchain mechanisms were not designed to handle the volumes of transactions, nor resulting sizes of records, that could result from mass adoption in such a wide variety of domains. Equally, blockchain tools can just as easily be used by incumbent organisations and intermediaries as altruism-oriented startups; it is by no means clear, for example, that music consumers will default to more artist-friendly models.
These are very early days. It is neither obvious what platform or tool to deploy to what end, nor are blockchain facilities straightforward for end-users to access — yet. Skills in blockchain design and integration into services are in very short supply, as is experience in writing smart contracts that make sense.
At the same time, the high level of interest across such a variety of industries suggests that blockchain-based capabilities will have a considerable role to play in the near future. For better or worse, blockchains are here to stay.
10-22 – The Internet Of Wine
The Internet Of Wine
The Internet of … Wine?
Jon Collins
Wine growers in Europe have long memories. Only 150 years ago the aphid-like Phylloxera bug, imported from America in plant specimens, devastated vineyards across France and elsewhere in Europe, leaving centuries-old practices in tatters. Growers had no choice but to look to pest-resistant American vines, first trying hybrid breeds before settling on grafting European vines onto American rootstocks.
Today, the European Union produces some 175 million hectolitres per year, equating to 65% of global production. Another disease like Phylloxera would wipe over €30 billion from Europe’s revenues, according to 2010 figures. While general fears of a similarly cataclysmic event may have subsided, coping with disease is a major part of the modern wine maker’s job.
As attention has turned to maximising yields over recent decades, a lack of data on the ground (quite literally) has led to a spray-it-all approach. According to the 2010 report, viticulture uses double the fungicides of other types of crop, and about the same amount of pesticides. “The higher consumption of fungicides in viticulture is due to the fact, that Vitis vinifera has no resistance to introduced fungus diseases and requires chemical protection,” it states.
At least part of the answer, it appears, can come from sensors that can ‘read’ the qualities of the soil. Not only can the resulting analysis determine where and when to apply nutrients (thus saving money and avoiding over-fertilising), it can also identify the onset of disease by watching for symptomatic changes to the environment. If vines are being infected, they can be sprayed, isolated or even ripped out before the damage spreads.
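As a purely illustrative sketch (the sensor fields and thresholds below are placeholders of my own, not eVineyard’s model or agronomic advice), the underlying decision logic can be as simple as flagging a block of vines for inspection when readings match conditions that favour fungal disease:

def disease_risk(temperature_c, humidity_pct, leaf_wetness_hours):
    # Flag a vineyard block when warmth, high humidity and prolonged
    # leaf wetness coincide. Thresholds are illustrative placeholders.
    return (temperature_c >= 13
            and humidity_pct >= 85
            and leaf_wetness_hours >= 4)

# Readings per block: (temperature in C, relative humidity %, hours of leaf wetness)
readings = {"block_a": (18.5, 92, 6), "block_b": (21.0, 60, 1)}
at_risk = [name for name, r in readings.items() if disease_risk(*r)]
print(at_risk)  # ['block_a']: worth spraying, isolating or inspecting early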
A pioneer in this space is Slovenian technologist Matic Šerc, whose company Elmitel is looking at the role of sensors in wine growing and who is currently engaged in an accelerator programme in Bordeaux to develop the eVineyard app and service. Part of the challenge is working with such a traditional industry, he says. “In certain areas, 30% of growers don’t have mobile phones, never mind smartphones,” he says. “It’s not realistic to expect this to skyrocket!”
Having said this, wine growers are not closed to new ideas. Drones are already starting to appear in Bordeaux as a way of checking ripeness, vine damage and indeed disease, and growers are watching each other to see where technology can make a difference. “Compared to Slovenia, they are more open,” says Matic, who grew up in an environment where the general populace is roped in to help with the grape harvest once a year (“There’s usually some kind of a ‘party’,” he says).
One stimulus to the adoption of technology in viticulture is that wine processes themselves are evolving, through the changing climate as well as economic factors. “Seasons are changing, weather patterns are different, so working practices are also changing. In addition, organic growing is rising as a trend, which goes back towards reaching the natural balance.” Indeed, pesticides are themselves a relatively modern invention — if sensors can help bring back more traditional management practices, what’s not to like?
Matic and his team are quickly learning that more goes on in wine growing than engineering can solve, however. “It’s a triangle of the soil, the weather and the vine. When you manage vineyards you can manage the soil and you can manage the canopy, to an extent. But you cannot completely switch the soil, or change weather conditions, with technology.”
So, where and how can tech be of benefit? The key lies in how it can help growers make better informed decisions, helping reduce both costs and risks. “Wine growing has practices that are very old, but the data helps you manage more efficiently, more precisely,” says Matic. “You have to keep in mind what you want from the grapes — the grapes then go through the process of wine making but there are factors that you cannot influence.”
Ultimately wine is more than a product, it is a consequence of everything that takes place to turn the first opening leaves of Spring into the dark reds and crisp whites, the infusion of flavours and textures that bring so much pleasure to so many. “Wine has a story, a personality,” says Matic. “Many things can be monitored, it helps you get where you want to be but you need more than technology.”
Better data may bring precision and more informed decisions, but it is in wine that we find truth.
November 2015
11-10 – Medium: Athens Marathon and the elusive sub-four
Medium: Athens Marathon and the elusive sub-four
5 a.m. and I have been awake for some time, that annoying habit I have of outpacing the alarm. I check my phone and Stuart has already texted. “Are we all up?” he asks. “Reporting for duty,” responds Alun. I get up, try for a shower but the water is cold. Not the most auspicious start. Cup of tea and congealing instant porridge, the latter a forlorn attempt at recreating what is already instant in its usual form. Won’t bother next time, I think. Oh no, I remember. There won’t be a next time.
This will be my fifth marathon. The first ended in disaster as I broke the only rule you need to know – stick with what you have trained for. Having set off at 9.30 pace and maintained it for 15 miles (deciding en route I was clearly untroubled by such mortal considerations), the pain started to set in at about mile 18. By mile 23 it felt like someone had inserted boiling skewers into my legs. I finished it – in 4.43 – but it took me several weeks to recover.
I’d put it to bed in Paris, taking my time and running a steady 11 minute mile pace to come in at a perfectly reasonable 4.42. Then Mark had suggested London, which I managed in 4.55 with only 11 weeks’ training. I didn’t want to leave it there so, heck, why not Rome, which I prepared well for and came in at 4.36. Hmm, I thought. What if I really, really tried? Perhaps one last marathon? If so it would have to be a classic race… what about the classic race, from Marathon to Athens? A Friday night conversation later and the die was cast.
I knew I’d been running faster. A visit to an osteopath with an interest in sports performance and a bit of acupressure magic had released my diaphragm, which had given me a significant boost to my pace. As any marathon runner knows, the mathematically arbitrary threshold of four hours is held in some esteem. To achieve that was beyond my wildest dreams, a barrier only real athletes could cross. Impossible or not, it was the only goal in town.
So I trained. I enlisted the help of a personal trainer. I read the books, I ate the food and drank the shakes for a year. 20 weeks before the day I started a schedule aimed at 4-hour runners. Then the bad news – a cursory review of the route told us it had at least 800 feet of ascent, over a distance of ten miles.
We took it on the chin and carried on, the Cotswold Hills more than adequate as a training ground. We found that my wife Liz’s cousin’s daughter, Virginia, was also running and we exchanged notes. And, to my surprise, our longest training run of 21 miles took place in 3 hours exactly. It looked like the fabled sub-four could be in our grasp.
Despite the usual minor injuries (including, for me, a bollard-based face plant on the final taper) and bugs, we all made it to Athens. It was only as we arrived – marathon running has always been a family trip – that I realised my long training journey was finally over; Stu and Al had picked up my registration so I had nothing to do but attempt sleep. Game on, gloopy porridge notwithstanding.
5.40 and I met Stuart and Alun in the lobby of their hotel, wearing traditional Greek slippers with pride. Everywhere were runners, Chinese and other nationalities. All bore the international expression of quiet expectation. The three of us headed to Syntagma Square, meeting Virginia and her impressively awake family on the way. By 6.15 am we were on the bus, hardly pausing thanks to the well-organised queues.
45 minutes later we had arrived at the small training ground at Marathonas, leaving plenty of time for morning routines, stretching and warming up. The rest was familiar – being herded into pens like cattle (leaving Virginia to a different pen), being blasted by poor quality, though upbeat music, multicoloured lycra and menthol bombarding the senses. And then the klaxon, the walk forward, the sensor across the road, the break into a jog surrounded by a horde of others.
Setting off was typically crowded but before long we were able to settle into a decent pace. The sky was a piercing blue and the sun already hot, and spectators were few. All the same, the first miles went past relatively quickly, the main sound being that of a thousand people running in near step. A few miles in we turned left and headed towards Pheidippides’ tomb, itself disappointing as it was somewhere in the middle of a circular olive grove. Taking my cue from others I grabbed an olive branch and wove it into my belt.
After 6 miles, flat gave way to undulation. There were as many welcome downs as there were ups, none of which were too daunting, though the sun was getting hotter. We passed through a conurbation and saw some more crowds, the first having been round the tomb loop. Pace still strong: 8:25s. Keep it steady, keep it slow, we were telling each other as the hill started to kick in some more. 10 miles down by now, we gritted our teeth for what we knew would be a big ask.
Mile 11, mile 12, mile 13. We were half way. The road went from town to country and back, with people at junctions. We passed a group of dancers accompanied by traditional Greek music. I joined in, briefly, even as my inner voice said I should be conserving energy. The sun grew hotter – a pharmacy sign said 23 degrees, and we were only half way up the hill. A drop in height gave us a fast mile and a welcome respite, then we were back on it.
Strangely, and in hindsight crazily, the route followed the right hand side of a tree-lined dual carriageway, from East to West. Which meant that the left hand side was shaded. But we were in full sun. The race pack had included a peaked cap which Alun and I were both wearing. All the same we were starting to feel the heat. Drinks stations, which were frequent, could still not come fast enough.
Mile 16 and exhaustion was starting to set in. I had set my sights on 18 miles, beyond which we only had another 2-mile push before the uphill would stop and we would start down into Athens. We stopped at a water station and regrouped before carrying on. We did the same at mile 18. Two miles to go, I said. On the horizon was an apartment block, which I fixed as the apex. Alun had been dropping behind, then Stuart was as well. I chose to keep the pace as the opportunity for a sub-four was rapidly fading.
Sure enough, the 20-mile mark signalled the top of the hill, itself marked by an underpass which meant a dipping, then a steep rise of the road. I can only guess the heat by now but it was hotter than before, 25 at least, the underpass offering a brief respite. Thinking of various advice I walked up the steep section, looking behind me but Stu and Al were nowhere in sight. Best press on, I thought. Then I was over the rise, the road stretching gloriously down and away from me.
With an hour to cover 6 miles according to my watch, I picked up the pace and was back at an 8.25. A minute in the bag, I thought. I knew I was running out of steam so I started to think tactically. 10 minute miles all the way and the sub-four was mine. I took a few walks in the following miles, through exhaustion but also in the knowledge I was within my time. Mile 24 passed. Two miles to go, that’s round the barn and back I thought. And all downhill, pretty much. Another underpass, another walk.
Half a mile later, a familiar voice. “Mr Collins, how the devil are you!” said Stuart, coming past me. He had got a second wind. “Just keep going, one foot in front of another!” I let him go, stopping again to catch my breath. I knew I had a mile to go, and ten minutes in which to do it. It was tight, too tight, but I was nearly there.
I rounded a corner, going under yet another inflatable arch, each teasing with a potential finish. Another, wooden arch was displaying numbers. Am I there? I wondered, a tiny ray of hope growing within me only to be quashed by the next sign I saw. “3 km” it said. I realised I was still more than a mile from the goal and I had less than ten minutes in the bag – I had been following my watch rather than the signs, so had misjudged the timings. The potential for a sub-four was lost, gone forever.
I was broken but had no choice but to carry on. A left hand bend led into a lush, wooded avenue, all downhill. It was beautiful but it stretched on for ever. I had already seen a man’s legs give way from underneath him, the crowd rushing to his aid. Would it happen to me, I wondered. Just how long was this road. And where was the bloody stadium. The road gave way to another, then perhaps another, I can’t remember. But then I saw another left bend, and I knew I was there.
I turned on to a plaza, a steep ramp appearing like a ten foot wall in front of me. I climbed it, then another, arriving on the track of the stadium full of people. Somewhere was my family, were Stuart and Alun’s families, were Virginia’s family perhaps. I looked but could see nothing but a sea of faces. And in front, so, so far away, the finish, looking like they had put it as far as possible away to create one, last, insurmountable challenge.
And there was Stuart’s wife Sharon and Alun’s wife Annabel. And there was my wife Liz, shouting my name. And there was the finish, looming up before dissolving in front of me, giving way to a cluster of runners slowing, falling to their knees, holding on to barriers or just walking in a daze, as if the concept of stopping had been lost to them.
The line of the track curved round some medical tents, which I followed. “Jon!” There was Stuart, looking as knackered as I felt. His watch battery had run out so he didn’t know his time. My watch said 4:03. I knew I should feel elated – it was 33 minutes off my previous best time – but I felt… nothing. Too tired. We sat in the shade by the edge of the track for a few minutes, saying nothing. Then Alun appeared and we shouted his name, hoarsely, eventually pulling ourselves to our feet.
“That is the hardest thing I have ever done,” said Stuart, a sub-four runner. “In all my years in the Army, we never did anything as hard as that,” concurred Alun. I could only nod. Not knowing whether to stop or move, we carried ourselves forward to get our medals, lining up together like Olympians. In the Olympic stadium. In Athens, the end point of the original, classic, authentic Marathon. And we laughed. And we swore never to do it again.
A half hour later we collected our bags, ate bananas, met our families (and shed some tears) and headed back to the hotel ready for what would be copious amounts of alcohol. The debrief was, simply, that we had done it – achieved some great times in challenging conditions. The race itself, half uphill along one side of a dual carriageway for 26 miles, was not the most attractive, nor would it be first choice for crowd participation. But it was most certainly one for the runner’s bucket list.
That evening, with a Metaxa seven star brandy and a cigar I could barely smoke, I looked up my official time – 4:02:57. Over the days that followed I smiled and laughed about how 3 minutes didn’t matter one iota, how it was only due to a series of arbitrary measures, how it was still an amazing achievement and nothing else mattered.
And I knew, in my heart: this ain’t over.
11-15 – Evil is real
Evil is real
Evil is real. It has existed throughout history, it pervades every culture, religion and ideology. It exists today. While it defies precise definition, it is clearly recognisable by all of us. We know it because we know ourselves, and we know what we, or the people around us, have the potential to become. We are weak; we draw strength from each other; we are easily swayed by charisma, by strong leadership and by aspirational ideas. We know our own histories, in which ordinary folks have been turned against each other on the basis of an idea which even they do not fully understand. It has always been so and will continue to be so, unless we somehow change the basis of what it means to be human.
We do not yet understand what causes someone to become evil. A loss of empathy, a lack of care for fellow human beings, the broader environment and even oneself is a part of it, but even this is not enough. Indoctrination, an unconscionable abuse of our inherent and important ability to embrace and incorporate stories into our own psychologies, this also plays its part. Our desires and aspirations, so necessary for survival, can become a lust for power, a drive to exploit. And the wish for familiarity has a dark side, when it means we ignore the suffering of others or even wish to harm those who have other ideas than our own.
These are human traits, themselves survival skills honed through millennia. They are why we are here. For anyone to say otherwise is to deny their own humanity. To advise that we should behave as though this were not the case will, at best, result in short-term consequences which exacerbate rather than resolve the problem. But to act like it is true does not mean to appease, to be ‘moderate’, to be woolly-minded. For evil is real. It acts like a cancer across the organism we call society, and it will continue to grow unless it is tackled.
As we look to respond to evil however, we need to recognise its causes. While it may be possible for people to be born evil, in general the contexts within which people grow offer a clearer explanation of their behaviours. Those in more prosperous, inclusive environments, in which they have more freedom to act and be themselves, in which they are accepted rather than disenfranchised, are less likely to conduct school shootings or suicide bombings. “Why did they,” we ask, without waiting to hear the answers, preferring our own, generalised perspectives and agendas. “Because America,” we say. “Because Islam.”
And yes, we need to tackle these causes. Not to appease or to show weakness, but because they are the causes. It is going to take tens, maybe hundreds of years — we are still at the beginning of the process of creating a globally fair and just society, and there remains an unfathomable amount of work still to be done. But this is the journey we are on, away from disease and child mortality, away from poverty and towards peace and acknowledgement of basic human rights. We will not get there overnight but there is plenty of reason to be positive and optimistic.
Meanwhile, we also have to tackle the symptoms. When evil emerges, whatever its complex web of causes, it needs to be dealt with. We cannot hope to get healthier as a society if we allow hate, wanton destruction and murder to go unchecked. We owe it to the victims, to their families that we do not stand by and say, “Sorry, there was nothing we could do.” We can, and should condemn evil, wherever it manifests itself; we can, and should protect innocents against evil actions, whatever rhetorical framework is used to justify the motives. And we should hold the perpetrators to account, unstintingly and without compromise.
But in doing so, we must also remember that every human is both consequence and cause. Hate begets hate; anger begets anger; resentment and powerlessness cause a righteous hunger for power which corrupts, which has the potential to recreate the exact conditions, only for them to be imposed on others. By slaying the monster we risk becoming monsters ourselves, as so many of our stories tell us, not to distract and entertain but to repeat an ancient lesson that otherwise we find too easy to forget and ignore.
We all have a choice. Love is not the answer, not by itself. But a decision taken without love plays into the hands of those who, whatever their backgrounds and whatever their justification, would see the destruction of all that we hold dear. From love we find strength, we find understanding, we find community, we find acceptance of difference, we find similarities that bind us more tightly than any ideology. A society built on love is not weak; rather it is strong, forthright, able to respond to harrowing circumstances, united against the only thing any of us really have to fear. The ultimate enemy is not ‘them’, it lies within each and every one of us. We all know this to be true. And we should do everything in our power to overcome it.
11-16 – Social Media Owners Need To Stop Running Uncontrolled Playgrounds
Social Media Owners Need To Stop Running Uncontrolled Playgrounds
For social media, being kind is a business choice
Just how nasty can social media get? Twitter in particular has found itself under the cosh, due to its historical complacency. “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years,” the company’s previous CEO Dick Costolo wrote back in April.
When Jon Ronson wrote a book about the dangers of ’shaming’ on social media it was to Twitter that he was largely referring. While more extreme cases have resulted in criminal convictions, the majority of tweets fail to cross any significant threshold individually but can build a picture that, as Ronson points out, is more akin to the stocks than rational debate.
The fact is, it is simply too easy to throw in verbal insults online. For example, whatever people felt about the politics of Andrew Lloyd Webber’s vote against the Tax Credits bill in the UK House of Lords, many remarks dwelt on his looks rather than his decision.
And now with a falling share price, Twitter is stumbling into action, re-engaging founder Jack Dorsey as CEO and offering new capabilities that might prevent user numbers from flatlining. Among the features is Moments, a.k.a. curated streams of tweets around current events. Hold on to that word - ‘curated’.
Twitter isn’t alone in finding that the dream of user-generated content is more Lord of the Flies than Paradise Island. Equally notorious for its crude, and sometimes cripplingly harsh comment streams has been YouTube. And indeed, across the Web we have platforms that have been sources of highly offensive, even abusive content.
Facebook, for example, remains a significant source of cyberbullying as teenagers use the service to display behaviours previously limited to the offline world. And Snapchat has been linked to sharing inappropriate images, with consent or otherwise.
It is interesting to compare the models of different sites. Whereas on Twitter most messages are shared publicly, on Facebook they tend to be shared with (so-called) friends. To state the obvious, this makes the former more of a platform for public shaming, and the latter for bullying within close communities.
It seems that every time somebody comes up with a way of sharing information, it invariably becomes misused. So is there any hope? Interestingly, we can draw some inspiration and hope from a site that appears to tend towards the chaotic.
Reddit, that love child of Usenet forums and social media, enables the creation of individual spaces (’sub-reddits’), each of which is curated by its creator to a greater or lesser extent. While parts of Reddit can get a bit hairy (in the same way as Twitter), at some level, humans remain in control.
The notion of curation — that is, keeping responsible people and the community involved — does seem to hold the key. For curation to be possible it requires the right tools.
The importance of the down-vote to Reddit cannot be over-stated, as it creates a generally accessible balancing factor. Right at the other end of the scale Quora (which also has both an up-vote and a down-vote) delivers on safe, wholesome, curated Q&A.
It is this additional level of responsibility that should set the scene for the future. With a caveat: nobody wants the Web to get “all schoolmarmish on your ass”; indeed, even if it could, it would doubtless cause people to run to the nearest virtual exit.
At the same time, we can see a future where we move from the uncontrolled playgrounds of social media (with the occasional, knee-jerk reaction from their respective authorities) to a place in which we take more personal responsibility for our actions.
The alternative is irresponsibility, either on the part of the individuals creating messages or the companies allowing it to happen. It’s like the old English adage, “The problem isn’t stealing, the problem is getting caught.”
Simply put, our online culture needs checks and balances at all levels, not to restrict general behaviour but to prevent the excesses we exhibit if no restrictions are in place. It is no different in the virtual than the real.
While platform providers may not see it as their place to act as judge and jury — itself a point of debate — they should nonetheless provide the tools necessary to ensure people can congregate without fear for their online safety.
Not to do so is irresponsible, but more than this: as we develop and mature online, we will inevitably gravitate towards platforms that, by their nature, offer some basic protections against abuse. Many Twitter users are moving on; and it is surely no coincidence that Facebook is less popular with younger users.
Even as social media companies look to provide more ‘exciting’ ways to interact, they ignore such basic, very human needs — such as existing without fear — at their peril.
11-17 – We are road kill
We are road kill
We are road kill Consequences Cast aside On a bloody road To a non-existent heaven
11-24 – We are less and less in this together
We are less and less in this together
I’ll stop with the philosophising soon, but bear with me on this one. Perhaps this is blindingly obvious to everyone else but I’ve only recently worked this out — that people in the UK and perhaps elsewhere are being alienated, across the board.
Not only “them people out there” or “over there” but people all over the country, from all backgrounds. Local people. People in cities. You, me, all of us. People of all classes and all cultures, whose views, fears and concerns are not being taken into account, not by the ‘leaders’ of this country, and increasingly, not by each other.
It starts at the top. Ignore the crises for a moment and there is little from the government that goes beyond helping a rich minority or scoring political points. Austerity is a broken approach. And the “all in this together” mantra has long ceased to be even slightly funny. They never were “all in”, so why should anyone else be?
I also see little from any party that offers a realistic plan to help the general populace today. Nor people with the necessary charisma to carry it off — we have toffs, shiny suits and schoolteachers according to various representations, none of whom have presented a sustainable vision that includes, engages and inspires anyone outside their own focus groups or party faithful.
No wonder people wonder why they should bother to vote. The country teeters on the brink of recession, and people lack much to get excited about. When they complain, they are ignored. It’s more than tragic. It’s dangerous.
It also creates a situation that’s easy to exploit. It’s why people of all political persuasions flocked to Corbyn (temporarily, it appears) but also to UKIP, because people felt they were being heard, their views acknowledged.
It’s also being exploited by the media — whose first concern will always be commercial. News organisations are creating echo chambers, telling people what they want to hear for no other reason than to sell more papers. Even when it plays into the hands of fear mongers, for the media does not trouble itself with a conscience.
The overall consequence is more alienation, as people finally start to think they are listened to but as a result, they become more isolated from views they might disagree with. Which makes the press more forthright, more likely to make outrageous statements. It’s a dubious cycle.
All this would be true without the migrant and refugee crises, without the existence of psychopathic nutters and indoctrinated youths. Add these pieces and we have, in US parlance, a real situation. We see it enacted on social media every day. People are (rightly, in my mind) troubled by the amount of divisive speech that has surfaced online, particularly since the Paris attacks.
But who is presenting a rallying call to make things otherwise? We have a distant government, currently calling for airstrikes that nobody knows will be effective, nor even right. Nowhere is there a clarion call for communities to start building bridges and closing divides. Why?
Because it isn’t politically useful. Leading the country is less of a priority than influencing the converted.
In consequence, society is dividing against itself — not just against minority groups, but with increasing intolerance for other political views or even for the expression of individual compassion. We, the populace, are nailing our own colours to a variety of masts. And in doing so we are fighting, arguing. Tensions are running high.
People are looking for answers. To tell them their thoughts are invalid is alienating.
People are defending others. To tell them they should not is alienating.
People are afraid. To tell them they are being stupid, or that they should care more, is alienating.
People are angry and frustrated. To tell them they should not be is alienating, particularly as their fears, anger and frustration have not just happened overnight.
And, of course, innocent bystanders are being judged because they happen to look like, or come from the same place as those we are afraid of. That one goes without saying.
In other words, we are alienating each other. We can blame the terrorists for this, but in reality they are profiting from and building upon a situation that already existed.
We have so much going for us as a nation, in all its celt/anglo/saxon/dane/norman/european/indian/sub-saharan/moslem/etcetera glory, to be proud of. Our United Kingdom of islands and nations, our fantastic mix of cultures, our sense of fair play, our love of tea and knitting, our spirit of a country that has always punched above its weight, our innovation, our complete inability to be cowed, our good humour, our commitment to each other and to our communities.
It’s all brilliant, it will sustain.
We should be applauding each other, smiling as we say bog off to anyone, within or without, who thinks they can in some way take that away from us. It is notable that the most “shared” people stating such views have been TV commentators, not our political leadership.
We need leaders that can inspire, that seek to unite rather than exploit divisions, that actually want to lead our great nation in all its multifaceted glory, to have it stand tall, as a beacon of light in the world. We need leaders that listen to the concerns of ordinary people of all creeds and persuasions, rather than trotting out terms like “hard working people” as euphemisms for “people that might vote for us”.
Only by standing together can we move forward without fear, and we need leaders that recognise it. We’ve seen it in other troubled periods of our history; we saw it most recently in the 2012 Olympics, and we need it now more than ever.
11-30 – Should We Worry About DNA Testing
Should We Worry About DNA Testing
Should we worry about DNA testing? The answer lies in the terms and conditions.
First, a bit of background. When the Human Genome Project was first initiated in 1990, its budget was set at a staggering $3 Billion and the resulting analysis took over four years. In 1998, a new initiative was launched at one tenth of the cost – $300 Million. Just over a decade later, a device costing just $50,000 was used, aptly, to sequence Gordon Moore’s DNA in a matter of hours. And today, costs have dropped to under a thousand dollars for a full sequence, and even less if only a subset of the data is analysed.
A consequence of this falling cost threshold is an explosion in different research studies, not least in research involving mitochondrial DNA to trace our collective history through the maternal line (leading to hyped-up announcements about our origins such as this one). Healthcare research quite clearly stands to benefit a great deal. And meanwhile out in consumer land, different types of study (usually based on Autosomal testing) are used to demonstrate paternity, report on congenital health risks and establish family origins.
It’s exciting stuff but, as with any topic which involves finding things out, it comes with risks attached. One might argue that knowing something is better than not knowing something, and this will frequently be true — not least in healthcare where a diagnosis can lead more quickly to treatment. Equally however, knowledge can open doors that might better be left shut. And the question of privacy looms large — while you might want to know about your characteristics or health conditions, you might not want certain others, or indeed your employer or government, to know.
An exacerbating factor concerns the accuracy of such tests. Just two years ago, the Google-funded DNA testing service 23andMe was investigated by the US Food and Drug Administration, which accused the company of basing its results on poor science. “After… many interactions with 23andMe, we still do not have any assurance that the firm has analytically or clinically validated” its technology, stated the FDA. For a time the company stopped providing information on health risks but has now recommenced.
Concerns have also been raised about inconsistencies between tests. “The discrepancies were striking,” wrote Kira Peikoff, who sent her saliva to three different labs for genetic testing (again in 2013). The American College of Obstetricians and Gynaecologists reached the conclusion that such profiling was “not ready for prime time.” Similarly, ancestry tests have been pooh-poohed as offering little more than “genetic astrology.” Of course, things may be much better by now; but while costs have fallen still further, there is little indication of any breakthrough in the reliability and consistency of testing.
But does it really matter, for example for an armchair ancestry hobbyist or for someone who has a specific concern that no official is prepared to address? The answer to this question lies in the fact that even if tests are still inconsistent and unreliable today, this may not always be the case. Any assessment therefore needs to take into account the privacy of the information concerned — or rather, of three kinds of information: the genetic sample (usually saliva); the raw test data; and the resulting analysis and interpretation.
As it turns out, there is even less consistency in the terms and conditions of the different DNA testing service providers than has been reported about the tests themselves. These range from highly restrictive policies that minimise any possible risk to the customer, to terms that place far fewer restrictions on what the provider can do with the data.
While terms and conditions give no indication of the quality of the actual service, noteworthy is Slovenian-headquartered Gene Planet, whose sales office is in Dublin and which uses independent labs in Italy and the US. Keeping it simple, the company’s privacy statement says: “The genetic data provided by you and generated during the course of your relationship with us is regarded as sensitive personal data under Irish data protection law on the basis it is health data. We only process this data in connection with our genetic testing service. We will keep this data confidential and will only use it to the extent necessary to provide you with the specific results you have requested.”
More specific is Scotland-headquartered BritainsDNA, which states: “Your genetic information will be held confidentially by BritainsDNA and not shared with others without your permission. Your name, address and other identifiable details will be held separately from your ID code which will be attached to your genetic information.” And it goes on to say, “The laboratory will not analyse your saliva for any biological or chemical components, markers or agents other than your DNA. The laboratory will not have access to your name or your other personal information, the sample will only be known by an ID number and unique bar code.”
At the other end of the scale is AncestryDNA, a subsidiary of what started as US genealogy publisher Ancestry.com. A number of clauses in the organisation’s UK privacy statement are worth pulling out:
- “By providing AncestryDNA with personal information, you specifically consent to the transfer and storage of personal information to and in the United States.”
- “All samples are stored either at the testing laboratory or other storage facilities in an anonymised format and may be kept by us unless or until circumstances require us to destroy the saliva sample, which you can request at any time by contacting us.”
- “In addition, if you voluntarily agreed to the Research Project Informed Consent we may use the Results and other information for the purposes of collaborative research and publication and in accordance with the Informed Consent.”
- “Third party service providers: Under the protection of appropriate agreements, we may disclose personal information to third party service providers we use to perform various tasks for us, including for the purposes of data storage, consolidation, retrieval, analysis, or other processing.”
In other words (and unlike other providers including 23andMe, which states, “Unless you choose to store your sample… your saliva samples and DNA are destroyed after the laboratory completes its work, unless the laboratory’s legal and regulatory requirements require it to maintain physical samples.”), the customer has to specifically ask for source materials to be destroyed. In addition, phrases such as “appropriate agreements”, “various tasks” and “other processing” mean that the organisation could pretty much do what they like with any results (again unlike 23andMe, which requires “explicit consent”).
Why does any of this matter, if after all you were just looking at your heritage? As already noted, genetic profiling is a work in progress — as it gets better, it will be possible to find out more. Which brings us back to the point about need to know. As BritainsDNA’s own policies mention, “Once you get any part of your genetic information, it cannot be taken back.” If you find something out which could jeopardise your health insurance, for example, and then do not declare it, you could be accused of fraud. BritainsDNA goes on to specify a “hold harmless” clause, exonerating itself in the case of such discoveries.
It is perhaps not just the CIA, but ourselves, that should see the benefit of plausible deniability. And speaking of US authorities, one of the spin-off advantages of having vast computer resources is that it is possible to store increasing amounts of information about individuals. The downside, as the Web We Want campaign notes, is that “sometimes, innocent people are swept up in systems for tracking criminals.” Jurisdictions are all too important here: it would appear highly inadvisable to allow your most personal of information — your DNA profile — to leave your national boundary, and thus be subject to different and possibly contradictory laws, many of which (such as the US genetic discrimination act) are still works in progress.
Perhaps even more important than third-party access to your data is what you, and your peers, might find out about yourself. Perhaps based on experience, BritainsDNA goes in pretty hard on this point. “You may learn information about yourself that you do not anticipate. This information may evoke strong emotions and have the potential to alter your life and worldview. You may discover things about yourself that trouble you and that you may not have the ability to control or change (e.g., surprising facts related to your ancestry).”
So what, say the optimists. So, respond the DNA test providers’ own privacy agreements: consider the example where you discover that the person you have known for 30 years as your father turns out not to be. It’s not hard to imagine the potential for anguish, nor indeed the possibility of being cut out of a will or causing a marital break-up.
Perhaps the point is not to say “don’t do it”; rather, to recognise the potential risks that knowledge can bring, and to ensure that they are mitigated before they cause problems (as afterwards would be too late). An incomplete, but still important list of things to think about is as follows:
- First, think about the consequences of each possible outcome. Perhaps, like Julie M. Green, you would rather know that you had a risk of a degenerative condition. Or perhaps not, like Alasdair Palmer, so be sure in advance.
- Recognise that the results may be inaccurate. Despite the advances, these remain early days for genetic testing. Have particular concern for the ‘nocebo effect’ — the propensity to start displaying symptoms of a disease you think you might have.
- Ensure you minimise the scope of the research to what you are comfortable with. If all you are after is a bit of ancestry fun, do that — but make sure that is all you are getting.
- Decide what to do with your raw data. Your choices are to request it, download it and store it safely, or to request that it be destroyed. If in doubt, take it out — you could always have the test done again, and probably more cheaply.
- Control what others can do with your data. After reading up, you might decide you are happy to have yourself used as a live sample for any and all further genetic research, whatever the outcome. Or you may prefer to know who is using your data, anonymously or otherwise, and to what end.
- Choose an organisation which complies with your own national laws. If your DNA crosses a national boundary, then you have pretty much said goodbye to any controls that could be placed on it, particularly as many laws are only applicable to their own citizens.
- Consider looking for a completely anonymous service (Cygene claims to offer one, in the US), albeit recognising that absolute anonymity is very hard to achieve.
- Above all, read and be comfortable with the T’s and C’s, privacy policy or participation agreement. Ensure that your sample is only going to be used for the purposes you intend, and that you are not ‘volunteering’ to share your data or participate in research projects you know nothing about.
These are exciting times for science in general, and for genetics in particular. In five to ten years’ time we will know much more, and we will also have international legal frameworks that may have caught up with how we deal with such knowledge. In the meantime perhaps consumers can dabble with genetic testing as a bit of fun, but let’s remember it was our own curiosity that killed Schrödinger’s cat.
December 2015
12-08 – On writing Rush-Chemistry, and Egg Nog Gate
On writing Rush-Chemistry, and Egg Nog Gate
“What’s next?” said Sean. “You can’t just leave it there.” Sean was the self-styled big cheese at Helter Skelter, the publishing firm that took a risk on the first edition of ‘Separated Out’, a book that became the authorised biography of Marillion. Sean was also a true gent, a person who cared deeply about music and the cast of thousands who made it all possible. His shop was on Denmark Street in London, an eclectic treasure trove of musical literature, itself surrounded by guitar emporia and just a stone’s throw from the proudly independent book store, Foyles.
Sean was right, but I had not given any thought to the subject. “I don’t know,” I replied. We’d met in Foyles’ jazz cafe, its uncomfortable, squashed together benches an apparently deliberate statement: this is art, you’re supposed to get a sore arse. Sean sat cross-legged, the only way he could squeeze his lanky form into the only, cramped space available.
“What about Rush?” I suggested. The band seemed to fulfil the necessary criteria: despite having a strong following (and therefore, readership), they were not mainstream and therefore not particularly written about. After a period of hiatus due to some deeply tragic circumstances, the band were back together, recording and touring. On top of that, the power trio had delivered the soundtrack to my school and college years.
“Let’s do it,” he said. I paraphrase — the conversation filled a good hour — but the gist is there. Rush seemed a logical choice, I was hankering to do more writing, and above all, I believed I had the process nailed. Start writing (thanks Karin), get a draft done, send it to the band’s management, sort some interviews and, well, job done. How hard could it be?
A year or so later, we were back in Foyles. “This is your difficult second book, isn’t it,” laughed Sean, his eyes sparkling. I nodded, shrugging. Things hadn’t quite panned out as planned. The process worked, after a fashion; in addition I had benefitted from my day job to travel to the US, and therefore Canada, which was a boon — I had been able to meet a number of wonderful people en route, producers and engineers, and pick up what I thought were great insights.
At the same time, given the distance I was less able to rely on serendipity for meetings. A fortunate coincidence led me to speak directly to a personal friend of a band member; an unfortunate misinterpretation meant a message went back that some bloke was writing an authorised biography of the band. I never said it, but that is what was heard.
Two things all budding biographers should know about the nature of music writing: first that it is, in many cases, parasitic. It is possible to have a situation in which the subject gains as much from the relationship as the writer, but equally frequently, this will not be the case. And second (as I have learned through anecdote and example) a known technique for obsessive fans to gain close proximity to their heroes, is to claim to be writing a biography.
So the mis-hearing of the term ‘authorised’ (I had used it, to describe the Marillion book) was more than a setback; it captured the worst fears of a band who had become rightly, and fiercely, protective of their own privacy. I’m sure it didn’t help that, having written some 80,000 words on the band by this point, I had become a little obsessive myself.
But the biggest challenge was neither of those things; it was, in fact, timescales. The people at Rush’s management company SRO-Anthem had been nothing but helpful, putting me in touch with close collaborators and giving me good references. Despite the setback, I had been told that I might still be able to talk to the band at some point, but it could take time (lesson 3, biographers: you are dealing with human beings, not facades). If only I had such time, but back in the UK Sean was gently cajoling me to get the thing finished, otherwise I would miss a publishing cycle.
There was some excellent news. I had spoken to Hugh Syme, Rush’s talented and long-standing partner for album covers and indeed, musical contributions. Hugh had suggested he did the cover for the book, an idea that I put to SRO, who asked Neil Peart, Rush drummer, who also had most to do with artwork within the band. Neil said yes, and Hugh immediately created a fantastic cover based around a Chemistry theme.
As the now-immovable deadline loomed, I had one chance left to engage in person, on a December trip to San Jose with the return journey via a cold, wet Toronto. A tentative morning appointment with SRO-Anthem was quashed due to the last-minute needs of the office Christmas party that afternoon. “I’m sorry, but I’ll be making egg nog,” I was told, a remark which will for evermore be referred to as “Egg Nog Gate” in our household. Lost for what to do, I booked the next flight out.
The book was finished shortly after, and we went to print with a truly spiffing cover. Happy indeed is the person whose book is stocked by Foyles itself. As for the content, I was, and remain, disappointed for a number of reasons — not just the lack of direct band insight, nor the fact that a decade later, I know I could do a much better job. But in addition, Sean’s interest seemed to have waned, taking with it his attention to detail. At the time I accepted this situation gladly (he usually wielded the red pen like a stiletto) but the prose is weaker for his reduced editorial input.
Little did anyone know that Sean was displaying the first signs of leukaemia, a bitterly nasty condition that would take him from us — but not before he had recommended me to Mike Oldfield, a gesture for which I remain profoundly grateful. The logistical downside was that Sean left a music book publishing company which never really managed to get back on its feet; several years later I chose to take back the rights, as the only financial purpose the book was serving was to pay accountants.
Sometimes things don’t work out how you hope, but there is a positive twist. The year after publication, I received an email from a good friend of Neil’s, who wanted a copy of the book to give to the drummer for Christmas. I sent one off with much delight, from one budding writer to another, more established author. As Neil announces his retirement from the band with which he has played for the past 40 years, and which has brought so much pleasure to so many, I would like to add my own gratitude to the man who gave me his support when all else seemed dark. Thank you Rush, and thank you, Neil.
12-11 – Twitter map of the world - follower projection
Twitter map of the world - follower projection
I have realised I am in that nether “working it” land…
2016
Posts from 2016.
January 2016
01-05 – 2016 Prediction Get Ready For The Mobile API Explosion
2016 Prediction Get Ready For The Mobile API Explosion
2016 prediction: get ready for the mobile API explosion
Jon Collins, January 2016
This is the API economy, so we are told. It is fair to say that the facilities modern Internet sites and services provide are revolutionary: anyone with a modicum of Web programming knowledge can grab an API key and, within a matter of minutes, be using information from the site in their own programs.
Information is sent as structured text across standard Web protocols such as HTTPS; meanwhile scripting languages such as PHP incorporate tools to process information thus received, turning it into searchable arrays which can be easily interrogated and manipulated.
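As an illustration of just how little code this pattern takes, here is a minimal sketch in Python; the endpoint, API key and field names are hypothetical placeholders rather than any real service’s API.

```python
# Minimal sketch: fetch structured text (JSON) over HTTPS and turn it into a
# searchable structure. The URL, key and field names below are hypothetical.
import json
import urllib.request

API_KEY = "your-api-key"  # hypothetical key issued when you sign up for the service
URL = "https://api.example.com/v1/activities?key=" + API_KEY  # placeholder endpoint

with urllib.request.urlopen(URL) as response:
    activities = json.loads(response.read())  # structured text becomes a list of dicts

# Once parsed, the data can be interrogated and manipulated like any native collection.
long_rides = [a for a in activities if a.get("distance_km", 0) > 50]
print(len(long_rides), "activities over 50 km")
```

The specifics matter less than the point: a dozen lines suffice to pull one service’s information into another program.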
And furthermore, such a model has been incorporated in just about every Web-based software offering. Quicken and Sage for accounting, for example; Strava and Garmin for sport; Trello and Toggl for task management. No startup worth its salt would consider launching a service without offering an API.
The consequence, as many have written, is an explosion in innovation as all such sites have learned the benefits of integrating with each other. APIs are the Web version of ’co-opetition’, that fatherless child of co-operation and competition, in which organisations have to collaborate with their rivals in order to stay in the game.
Unsurprising, then, that APIs are having a business impact on companies that depend on the Web. The API economy requires organisations to define strategy around what they integrate with versus what they build themselves, how they can build on third party platforms versus how they can add their own value.
In some cases, Web-based companies can succeed or fail based on the quality of their API. As AppDynamics founder Jyoti Bansal says, “APIs themselves are becoming the product or the service companies deliver.” And Salesforce.com, a pioneer in the space, generates half its revenue from APIs today.
All this is true, with a caveat. Even if APIs have catalysed a good part of the online service revolution, the mobile world has not benefited. Sure, mobile apps can fire off HTTP requests and read the results. But the way in which apps are developed still reflects the old ways of software development.
Whereas a Web page can pull and display information from another site through a handful of lines of script code, a mobile app requires considerably more effort. Mobile apps are (by and large) compiled rather than interpreted, they require desktop-based development environments and above all, attention to a broad range of details from multitasking criteria to network availability.
The mobile programming model is not particularly wrong; however by its nature, it cannot deliver the same benefits as the Web. There cannot currently be a mobile-based API economy, for example. Mobile development projects will generally be measured in weeks, rather than days, constricting the ability to test new ideas.
Where facilities exist to simplify mobile development, they are frequently proprietary or in limited use. This partially comes from the inevitable Android vs iOS platform choice — cross-platform development packages do exist, but (as yet) there is no ‘one tool to rule them all’, which (in the form of open standards) has been a major enabler of the API economy.
So what, the discerning reader might say. So, the mobile development environment is maturing.
In October for example, Amazon launched its Mobile Hub, a toolset to enable the auto-generation of mobile apps (at least in part) that make use of Amazon Web Services. And Intel announced what it termed its Multi-OS Engine (MOE) to enable Java apps to run on both Android and iOS, creating a non-proprietary bridge between the two.
While neither offers a final answer, both contain elements of what might be termed symptoms of the coming rapture — that is, the moment when mobile developers no longer have to think about what is going on under the bonnet of what they are building.
When (not if) this happens, we can expect a similar level of innovation and diversification in the mobile world to that we have seen in the Web world. Or, put more simply, a moment when any organisation, large or small, can create a simple, effective app which builds upon the power of the Web.
The API economy may have been seen as ‘fuelling the second Web boom’ (to quote Deloitte) but it is only a matter of time before we see a second mobile boom, driven by the removal of shackles on mobile apps.
When this comes, and it will, expect to see the mobile platform become a business differentiator not in terms of whether or not a corporate mobile app exists, but how fast such apps can evolve to meet the fast-moving needs of their users.
01-05 – Social media owners need to stop running uncontrolled playgrounds
Social media owners need to stop running uncontrolled playgrounds
Just how nasty can social media get? Twitter in particular has found itself under the cosh, due to its historical complacency. “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years,” the company’s previous CEO Dick Costolo wrote back in April.
When Jon Ronson wrote a book about the dangers of ’shaming’ on social media it was to Twitter that he was largely referring. While more extreme cases have resulted in criminal convictions, the majority of tweets fail to cross any significant threshold individually but can build a picture that, as Ronson points out, is more akin to the medieval punishment of stocks than rational debate.
The fact is, it is simply too easy to throw in verbal insults online. For example, whatever people felt about the politics of Andrew Lloyd Webber’s vote against the Tax Credits bill in the UK House of Lords, many remarks dwelt on his appearance rather than his decision.
And now with a falling share price, Twitter is stumbling into action, re-engaging founder Jack Dorsey as CEO and offering new capabilities that might prevent user numbers from flat-lining. Among the features is Moments, a.k.a. curated streams of tweets around current events. Hold on to that word - ‘curated’.
Twitter isn’t alone in finding that the dream of user-generated content is more Lord of the Flies than Paradise Island. Equally notorious for its crude, and sometimes cripplingly harsh comment streams has been YouTube. And indeed, across the Web we have platforms that have been sources of highly offensive, even abusive content.
Facebook, for example, remains a significant source of cyberbullying as teenagers use the service to display behaviours previously limited to the offline world. And Snapchat has been linked to sharing inappropriate images, with consent or otherwise.
It is interesting to compare the models of different sites. Whereas on Twitter most messages are shared publicly, on Facebook they tend to be shared with (so-called) friends. To state the obvious, this makes the former more of a platform for public shaming, and the latter for bullying within close communities.
It seems that every time somebody comes up with a way of sharing information, it invariably becomes misused. So is there any hope? Interestingly, we can draw some inspiration and hope from a site that appears to tend towards the chaotic.
Reddit, that lovechild of Usenet forums and social media, enables the creation of individual spaces (’sub-reddits’), each of which is curated by its creator to a greater or lesser extent. While parts of Reddit can get a bit hairy (in the same way as Twitter), at some level, humans remain in control.
The notion of curation — that is, keeping responsible people and the community involved — does seem to hold the key. For curation to be possible it requires the right tools.
The importance of the down-vote to Reddit cannot be over-stated, as it creates a generally accessible balancing factor. Right at the other end of the scale Quora (which also has both an up-vote and a down-vote) delivers on safe, wholesome, curated Q&A.
It is this additional level of responsibility that should set the scene for the future. With a caveat: nobody wants the Web to get “all schoolmarmish on your ass”; indeed, even if it could, it would doubtless cause people to run to the nearest virtual exit.
At the same time, we can see a future where we move from the uncontrolled playgrounds of social media (with the occasional, knee-jerk reaction from their respective authorities) to a place in which we take more personal responsibility for our actions.
The alternative is irresponsibility, either on the part of the individuals creating messages or the companies allowing it to happen. It’s like the old English adage, “The problem isn’t stealing, the problem is getting caught.”
Simply put, our online culture needs checks and balances at all levels, not to restrict general behaviour but to prevent the excesses we exhibit if no restrictions are in place. It is no different in the virtual than the real.
While platform providers may not see it as their place to act as judge and jury — itself a point of debate — they should nonetheless provide the tools necessary to ensure people can congregate without fear for their online safety.
Not to do so is irresponsible, but more than this: as we develop and mature online, we will inevitably gravitate towards platforms that, by their nature, offer some basic protections against abuse. Many Twitter users are moving on; and it is surely no coincidence that Facebook is less popular for the youth.
Even as social media companies look to provide more ‘exciting’ ways to interact, they ignore such basic, very human needs — such as existing without fear — at their peril.
01-07 – I have seen the future, and it doesn’t mention Uber
I have seen the future, and it doesn’t mention Uber
As we all know, some of the best ideas come when they are least expected. For me, the epiphany came just as I turned into my driveway; how fortunate that I was moving very slowly at that point, as it was a bobby dazzler of a thought.
Insight often comes through one of three routes: aggregation, extrapolation or conversation (for the record the last is often the most useful but the most risky, due to its anecdotal nature). Aggregation comes from quantitative research; extrapolation from analysing trends and, in the parlance, looking at where the hockey puck is headed.
In my experience, two kinds of extrapolation exist. The first is shorter-term, based on seemingly sudden changes in the business or demographic landscape. Right now, for example, everyone is going digital, mobile, social and so on. And the sharing economy is in full swing, based on the valuations of Uber, AirBnB and the like.
Such waves tend to overlap with, and overtake each other, each wave revealing new winners and sending others crashing into the rocks. Behind such shorter-term changes however, are relatively glacial forces of technology evolution, each continuing on its well-trodden path.
For example Moore’s and Metcalfe’s laws, driving miniaturisation and the network effect, each of which has had such a global impact over the past few decades. While such principles may eventually reach their logical ends, they have a way to run before their impact recedes – the Internet of Things is the most recent consequence.
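For a back-of-the-envelope sense of how these forces compound, here is a rough Python sketch; the starting figures and doubling period are illustrative assumptions, not measurements.

```python
# Rough illustration of the two 'glacial forces': Moore's law modelled as a
# doubling roughly every two years, Metcalfe's law as network value growing
# with the square of the number of connected users. Figures are for scale only.
def moores_law(count_now: float, years: float, doubling_period: float = 2.0) -> float:
    return count_now * 2 ** (years / doubling_period)

def metcalfe_value(users: float) -> float:
    return users ** 2  # proportional to the number of possible connections

print(f"Transistors after a decade: {moores_law(1e9, 10):,.0f}")                 # roughly 32x
print(f"Network value, 2M vs 1M users: {metcalfe_value(2e6) / metcalfe_value(1e6):.0f}x")  # 4x
```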
As they continue, it is they that create and destroy whole businesses. The sharing economy has emerged due to cloud-based information sharing capabilities, creating a cookie cutter of opportunities to link suppliers with consumers. The resulting businesses are symptoms of where technology is at today.
But technology continues, regardless, becoming easier and more open. Inevitably, the ability to link trusted parties will commoditise, becoming a feature of the platforms upon which we increasingly depend. “Sharing” is something we will just do, as it becomes as straightforward to borrow a book or a shovel as to offer a lift.
Any new technology startup has an opportunity to benefit from a massive pool of potential difference as it short circuits traditional business models — simply put, making things cheaper for people while operating at lower cost than traditional businesses. Once such opportunities are plugged however, they become part of the landscape.
This means, however successful a young company might be (I’m looking at you, Uber) it has limited time with which to establish itself. This means either becoming a new platform (move over, Amazon, Google and Facebook) or being subsumed, potentially at massive profit to the original founders — in which case, job done.
But it would be a mistake to see any such business as the shape of things to come. I have seen the future and it is us, going about our daily lives with the tools we need to do so. Sure, we will be sharing and serving each other’s needs. But we will not need multiple third parties to do so, however successful they may appear in the short term.
01-13 – Retailers Online And Offline Need To Be Problem Solvers
Retailers Online And Offline Need To Be Problem Solvers
Retailers online and offline need to be problem solvers to succeed
With the future of retail, of course it’s personal – and it’s not just about the business model
Jon Collins, 13 January 2016
The retail industry is transforming, we are told. With such a wealth of digital technologies to choose from, and with online brands stealing the lunches from traditional store owners, the element of panic being felt across retailers is, perhaps, unsurprising.
But even while purveyors of goods and services struggle to adapt, they are perhaps missing a fundamental point: that this isn’t about them. Retail shopping — buying stuff — should not be so hard, but right now it is.
At the base of any consumer transaction lies the notion of a value exchange. This isn’t just about the money — consumers will pay more in return for greater convenience, reduced risk or a feeling of affiliation, among other criteria.
Add these up and consumers will generally then choose whatever option yields the shortest distance between a perceived need and having that need being met. Advertisers and brands are fully aware of their role in driving that perception.
But too frequently it seems to be forgotten by retailers large and small, from major supermarket chains who find themselves being outflanked by low-cost outlets, to owners of high street establishments, franchised and independent, that sit at their tills wondering why nobody comes in.
The greatest impact has been due to the onslaught of online sellers, which has led to the demise of so many well-known names. As we know however, there was no inevitable reason why Blockbuster, Woolworths or any other retailer had to fail.
Other organisations have survived, if not thrived — in the UK John Lewis continues to perform and even Waterstones (a bookseller, of all things) recently turned a modest profit. And I have watched with interest as a local lunch bar, Stacked, has gone from strength to strength.
As so frequently in technology, the problem with retail is defined in terms of what’s going wrong with business models, rather than what needs to go right. The buzzword is ‘omnichannel’, suggesting that retailers need to reach consumers across the variety of mechanisms now available.
For us consumers, this is already so last year. We exhibit a form of Orwellian doublethink, going from a “wow” state about something new (such as buying goods via a smartphone, or using click-and-collect) to a “well, of course I’d want to do it that way.”
Keeping up with such fickle behaviours is challenging, but is much easier to understand from the point of view of the consumer. The irony is that, even as retail strategists plan their moves, at the weekends they become the very people they are trying to engage.
In the omnichannel world, sales can just as easily be lost as won. Consider the aforementioned click-and-collect service, which (based on personal experience and that of a colleague) would perhaps be better named click-and-wish-you-hadn’t.
On one occasion the product was delivered to a store ready for collection, then returned before it was picked up; on the other occasion it wasn’t actually available but the order was accepted anyway. In both cases the order was cancelled without asking the customer.
If you were the punter, in both cases the chances are you wouldn’t rush to use the same service again. Of course, commentators need to be careful not to treat their personal experiences as fact. But this is retail. Of course it’s personal.
So here’s a thought — rather than looking out of a headquarters window (or even from that upstairs room in the back office of a store), how about looking at the real customer journey? Not from prospect to buyer, but from home, to couch, to shop, to bus, to phone. And back.
In other words, this is not about omnichannel – that’s a bit like defining DIY as ‘multitool’. Rather it is about recognising and handling the actual customer state, from needs analysis through to those needs being met.
Retailers are problem solvers, be it dealing with the weekly shop or handling an urgent request for a washing machine part. The ways in which the problem is communicated are mere points on the journey, not the journey itself. Retailers fail to understand this at their peril.
Retailers that recognise this will continue to grow and do well (as long, that is, as people have the inclination and ability to buy what is offered). Those that don’t, however, will continue to miss both their targets and the point.
February 2016
02-03 – Machine Learning Myths, Science Fiction And The Singularity
Machine Learning Myths, Science Fiction And The Singularity
Machine Learning: Myths, science fiction and the Singularity
‘Machine learning’ is the latest term used to embody our obsession with thinking computers
“And then Hephaestus, Olympian god of fire, created a massive automaton out of bronze.” Known as Talos, the giant creation was to protect the island of Crete. “On seeing strangers approach, he enveloped himself in fire and engulfed them.” Or so the myth goes.
Our fascination with machines that can think is rooted in our collective histories. Perhaps we have a deep psychological need to engage with beings greater than ourselves, super-intelligent, transcendent parent figures that absolve us of reliance on our own senses.
“[AI is] an ancient wish to forge the gods,” stated Pamela McCorduck in her 1979 book Machines Who Think. And unsurprisingly, fiction before and since has offered a continued feed of examples of how we have sought to recreate the idea of a higher power in a technological image.
When the techno-gods are good they are very good, like those portrayed in Iain M. Banks’ Culture; and when they are bad, they become Skynet, providing as they do their evil deeds a suitable backdrop upon which to express our own humanity.
Right now, we are told, we are on the brink of making such an intelligence real. The advent of what we call Artificial Intelligence has progressed in line with that of computer processing, unsurprising as we see computer algorithms as primitive versions of our own abilities to make calculated decisions.
Marvin Minsky (may he rest in peace) built the first ‘neural’ network in 1951, and every generation of computing since has been coupled with a wave of interest in computers-that-think — from AI itself in the 70’s, to expert systems in the 80’s, data analytics in the 90’s and so on. Most recently we have been seeing a rise in popularity of machine learning, which we will doubtless move on from.
This repeated cycle is not a bad thing, as it reflects our increased trust in what computer algorithms can do on our behalf. As Nick Bostrom explained a decade ago, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”
A good illustration is Google: as Larry Page himself has said, AI is core to how the search engine works, albeit not as outwardly ‘human’ intelligence. (He also mentioned how OKCupid uses algorithms to match people likely to be romantically compatible, suggesting that AI is acting like the fabled demigod of love).
As computers become more powerful, so their abilities to compute are progressing. Indeed, they are already capable of many processing tasks that are way beyond the ken of humans. Even some metrics of human intelligence — such as the IQ test — are being surpassed.
But thus far computers remain unable to act without instruction. The latest resurgence of AI has been captured by the term machine learning, itself coming hot on the heels of Big Data analytics.
In both cases, the need for human intervention has been emphasised, either through recognition of the dearth of data scientists, or by providing user interfaces designed for business experts rather than technologists.
“We’re looking to have machines do what they do well, and humans to do what they do well — to enable the machine and human to have a dialogue,” says Augustin Huret, founder of ‘augmented intelligence’ platform MondoBrain.
Will this situation change? The moment that it does has been called the ‘Singularity’ — a distinct moment where, as with Mary Shelley’s Frankenstein, super-beings spring into life. While it is still seen as decades away, this big bang of computing has pretty much been taken as read.
It may never happen, for the simple reason that complexity is exponential and computer power, despite its stunning growth in the past 60 years, is ultimately linear. We will indeed arrive at a point where computers can emulate human communications, but the former will always remain a logical entity, driven by maths, whereas we are most certainly not.
This is no bad thing, and neither is our aspiration to strive to create heavenly beings, which is driving a great deal of innovation. I have already written about an orchestration singularity, in which computers will eventually become smart enough to manage themselves; meanwhile driverless cars are an eminently logical progression.
No doubt we will achieve capabilities that are currently the stuff of science fiction. But will computers emerge that can act on a whim, or in any way irrationally? To do so would be illogical: either we’d have to program in such a facility, or the algorithms would need to decide that such a thing was necessary to their development.
Both of which are unlikely: even Singularity advocate Ray Kurzweil argues that ‘human’ isn’t necessarily the goal. Machine intelligence, if and when it comes, will continue to develop like a flywheel that spins through accelerating cycles of self-improvement, taking on board increasingly broad problem spaces.
Pretty much as now, in fact — in many ways computers are already there. And so are we, if we think about it hard enough.
02-12 – Hipsterism isn’t a fashion, it’s a reaction...
Hipsterism isn’t a fashion, it’s a reaction…
…against Perspex and neon, against soft edged fonts and loud noises and being told what to think, say and do. It’s a statement that says, I like the old stuff! I like the things I was brought up with, that my parents and grandparents grew up with. I like old vellum and charred wood, frayed edges and hair. Hair! I like hair, long hair, beards, moustaches, all these things that are being denied in the size zero, androgynous modernity we inhabit. I like dolly mixture and sunsets, camp fires and long walks and vinyl records, not because I want to be a part of some exclusive club of cool kids but because I reject the alternative, that clean cut, cold hearted and clinical climate of collective conformity called modern life. Unable to react to it, to respond to it, to reject it outright, I find myself in a passive aggressive no man’s land of simple pleasures, of checked shirts and hard boiled eggs, of pencil sharpeners and pot plants. And they know it, and they don’t like it. Try as they might they cannot capture it, market it, not beyond replicating sun bleached and burnished themes in their publicity shots, in using similarly serifed fonts and whimsical quotes in their throwaway tag lines and ironically unironic marketing copy. But however hard they try, they cannot be it, because it implies an absence of precisely what they are trying to achieve. By looking to recreate nostalgia, they merely reinforce the reason we look for nostalgia in the first place. Meanwhile the real irony, that the point is lost on them, is lost on them. Hipsterism isn’t a fashion, it’s a reaction. And it will pervade.
02-18 – Utilities Need To Heed Lessons From Google
Utilities Need To Heed Lessons From Google
Worried about Google owning the data, utilities? That’s the least of their worries
Jon Collins, February 2016
When considering the impact technology is having on the utilities industry, the most obvious topic is data. Not only do power and water companies create a great deal of data, but they also have a reputation of not being very good at managing it — to the extent that a few years ago, some wag called utilities companies self-regulating due to their inability to do anything untoward, or indeed anything at all, with the information they held about their customers.
Perhaps rightly therefore, utilities are worried about losing control of their data to cloud-based behemoths such as Google/Alphabet, who do appear to have the necessary platforms to do something with all the bits and bytes being generated by, well, generators and indeed, from across the grid. “Right before our eyes, the search giant is weaving a web of services and applications aimed at collecting more and more data about everyone and every activity,” wrote industry observer Frederic Filloux. “Google will soon be in the best position to provide powerful predictive models that aggregate and connect many layers of information.”
To add insult to injury, such upstart start-ups have had the audacity to step onto utility companies’ core turf — that of power generation. Last year Google made a major investment into wind and solar plants, ostensibly to power its own data centres. And meanwhile, car manufacturer (or is that what the company is?) Tesla recently announced a battery pack for the home, which can charge from the grid while energy is cheaper, then discharge during peak times. As companies like renewable power company Good Energy illustrate, the nature of the power grid itself is changing.
But neither data management, nor a changing grid, are the big whammy. A power-grab is taking place right under the noses of the traditional resource providers, and they are either too busy or distracted to do anything about it. One perpetrator is, indeed, Alphabet: when the company formerly known as Google acquired smart thermostat vendor Nest, this was seen as a foray into the world of smart devices and, indeed, the data they generate.
Read between the lines however, and it becomes clear that Nest’s aspirations are far bigger than that: the goal is to position the Nest brand itself as the preferred platform for the connected home. Or, as Greg Hu, who is running the Works with Nest program puts it, the ‘conscious’ home. Let’s consider the evidence: for a start, the brand. Apart from the “We’ve been bought for $3.2B” blog, there’s no mention of either Google or Alphabet on the Nest site.
Second, we can consider the Works with Nest program itself. This now involves over 12,000 developers from a wide variety of companies, both new startups and established brands including Yale, Whirlpool and ADT. To coin a phrase, if “it’s the ecosystem, stupid,” then Nest is looking to position itself at its plumb centre. The spin-off benefit is that the program gives new companies something to plug into, and older companies something new to offer.
Third, we can consider Google’s role as a platform provider. The company’s attempts to compete against social sites Facebook and Twitter may have had sub-optimal results, but they have given the company an infrastructure that can cope with multiple updates from, well, anything. The fact that Nest and Google currently have independent efforts to create an IoT platform is largely irrelevant; more important is that the drive to create such platforms is relentless.
Quietly but quite intentionally, this is a brand-grab, a first move into brand familiarisation such that the term ‘Nest’ becomes synonymous with the social network of stuff we are creating in our daily lives, and thus becoming the platform du choix for the smart home. The Internet of Things may have barely got off the ground, but savvy strategists know that the real value lies not in devices, nor in the data per se, but in the relationship between a trusted brand and what the data represents.
It’s not just Google/Alphabet doing it, either. Look at Xively, http://www.zatar.com or Samsung’s SmartThings, all of whom would give their right arm to take a default position in the connected homes of the near future. The telcos are on to it — in January, O2 announced it would be launching AT&T’s ‘digital life’ service in the UK, for example. But utilities companies, who arguably are already well established in the home, are lagging well behind the curve.
Utilities companies are, indeed, getting smarter about using data, but data ownership is only one facet of what they need to worry about. If anything, the data is a symptom of the trust relationship between brand and consumer. By not targeting the home automation market directly, utilities companies are consigning themselves to be resource providers only, with business models driven by efficiency rather than any particular value-add. While we will always need people to run our power and pumping stations, these are unlikely to be the greatest sources of future ROI.
(Additional research by Ben Collins)
02-23 – My big TV moment - Tetbury Woolsack and Record Breakers
My big TV moment - Tetbury Woolsack and Record Breakers
A few years ago I was on the Tetbury Woolsack Committee, which organises the “internationally famous and world record breaking” Woolsack Races. As befits an event of such esteem, we generally managed to get a local celebrity along – that particular year we’d somehow got the presenter of the BBC’s Record Breakers TV programme to come.
That year also, I figured I should run the race itself, which involved carrying a 60-pound sack of wool a hundred yards up a pretty steep hill. I wasn’t brilliantly in shape, but you know, what the heck. The morning of the event, us organisers were up at 5am as usual, putting up signs, planning stall locations and so on. In a quiet moment I happened to be standing by the painted faces stand. “Come on, let’s paint your face,” I was told. “Sure,” I said, promptly forgetting immediately afterwards.
Everything went pretty smoothly – barriers arranged, PA speakers installed, tents put up and so on. Mid-morning the Record Breakers team arrived with presenter, Jez Edwards, doing a round of interviews. We spent a good few minutes talking about the history of the town, the event and so on. Wow, I thought, I’m going to be on the telly.
The races started shortly after, first the kids and then the more serious racing. I was in the third heat: just before it began, Jez Edwards decided he would also have a crack! So there I was, racing against the man from the Beeb. He quickly left me for dust; I plodded up, making it to the top somehow.
A few months later, the Record Breakers Woolsack feature was due to be aired on the TV. Of course, we all gathered round the box. The sequence started with the usual – some sheep, an archetypal Cotswold pastoral scene and so on. Then it cut straight to Jez Edwards, talking at the bottom of the hill.
“So here I am, at Tetbury Woolsack Races,” he said. “It’s been such an exciting day, I’ve decided to join in!” As he did so I knew, somehow, that my big TV moment was imminent. And sure enough it was, but not quite as I expected. “And guess what,” he continued, “I’m even racing against the local clown!”
Cue shot of me in face paint, looking startled. Fade to race view of Jez sprinting up the hill. Cut.
02-26 – Bitcoin Anonymity Advocates Ignore A Darker Truth
Bitcoin Anonymity Advocates Ignore A Darker Truth
Bitcoin anonymity advocates ignore a darker truth
Jon Collins, February 2016
Buried in the rhetoric of any security conversation is a dark truth. Be it about hacking, fraud, international terrorism or any “bad guys” related subject, security is usually presented as, “Here’s a problem, so what’s the solution?” In this particular case, the potential for cryptocurrencies to be misused has led to proposals that they should be better controlled.
While it might appear quite a leap to go from questions about the anonymity risks of bitcoin to the deepest recesses of our collective psyche, bear with me. First, let’s be clear — Bitcoin was never designed to be anonymous. “Bitcoin is probably the most transparent payment network in the world,” explains Bitcoin.org.
To explain briefly, each Bitcoin transaction references an address — a 25-byte identifier derived by hashing its instigator’s public key — and is therefore directly traceable to that address. It has to be, otherwise it wouldn’t be provably correct. For those wanting to cover their virtual tracks, the general advice is to create a new address for every transaction. While this can prevent a series of transactions being linked to a single source, each one is still public.
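By way of illustration, and purely as a sketch of my own rather than code from any Bitcoin library, here is how such a 25-byte address can be assembled in Python from a 20-byte public-key hash: one version byte, the hash itself, then a four-byte checksum, the whole thing Base58-encoded. The function names and the random stand-in for a real key hash are mine.

```python
import hashlib
import os

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(payload: bytes) -> str:
    """Encode bytes using Bitcoin's Base58 alphabet (no 0, O, I or l)."""
    n = int.from_bytes(payload, "big")
    encoded = ""
    while n > 0:
        n, rem = divmod(n, 58)
        encoded = BASE58_ALPHABET[rem] + encoded
    for byte in payload:          # each leading zero byte becomes a leading '1'
        if byte == 0:
            encoded = "1" + encoded
        else:
            break
    return encoded

def address_from_pubkey_hash(pubkey_hash20: bytes) -> str:
    """Build a 25-byte mainnet address: version byte + 20-byte hash + 4-byte checksum."""
    versioned = b"\x00" + pubkey_hash20                                    # 21 bytes
    checksum = hashlib.sha256(hashlib.sha256(versioned).digest()).digest()[:4]
    return base58_encode(versioned + checksum)                             # 25 bytes, encoded

# Illustration only: a random stand-in for RIPEMD160(SHA256(public_key))
fake_pubkey_hash = os.urandom(20)
print(address_from_pubkey_hash(fake_pubkey_hash))
```

Minting a fresh address is no more than hashing a fresh key pair, which is why the ‘new address every time’ advice costs nothing; it makes the linking harder, but it takes nothing off the public record.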
Once independently verified, each transaction is then stored in what is, to all intents and purposes, a globally, publicly accessible database — the Blockchain. While it might take some digging, if we all used Bitcoin, it would be like having access to each other’s account details.
So what precisely do the authorities mean when they say they want to make it less anonymous? The issue is whether transactions can go under the radar, as a recent EU report on the topic puts it: “Highly versatile criminals are quick to switch to new channels if existing ones become too risky.” Just as water finds its way through rock, so does criminality seek the easiest route.
Equally, we probably have laws already in place to deal with it. For example, Bitcoin ’tumbling’ services (essentially virtual coin-for-coin currency exchanges) could potentially be used for money laundering or fraud, both of which are already illegal.
Don’t get me wrong, I understand the concerns, given that cryptocurrencies are themselves evolving. Bitcoin is one of many potential cryptocurrencies, and indeed, the Blockchain model is one among many. While a truly untraceable transaction does not yet exist, it remains technologically possible.
The dark truth, however, is that humans have an equal propensity for good and evil, an eternal battle which has been fought throughout our histories, across our stories and indeed, within our daily lives. We are all corruptible, should sufficient opportunity come our way, or circumstance drive our behaviour.
Trying to suppress this reality plays to the dark side, inadvertently creating more problems. We can see it in the ongoing encryption debate, which pitches civil liberties and personal privacy against the institutional desire to keep bad things from happening. And in the case of Bitcoin, attempts to control a transparent currency mechanism will drive less transparency, not more.
Thinking more broadly, currency itself is already under the cosh. While it has served a useful purpose as a mechanism to simplify trading goats for timber, there’s no philosophical reason why near-frictionless data transfers shouldn’t enable us to dispense with such intermediate steps. Of course, practicalities make this harder, but again, criminality seeks the easiest route.
Bitcoin and its alternatives may create challenges but will come to be seen as the latest development in the eternal game of cat-and-mouse. If the authorities want to protect their citizens against the sometimes downright nasty vagaries of human behaviour, they need to think way beyond what will become a mere blip in our technologically augmented evolution.
Additional research by Ben Collins.
May 2016
05-25 – Ode to W H Smith
Ode to W H Smith
Oh W H Smith, how I miss
The rows of sellotape
So carefully arranged between staplers
And ring reinforcements
The magazines on shelves of steel
That reached so high and stretched
Away, away from once young hands
Oh W H Smith, times have changed
The world can only wonder where
Your thunder, once so strong, has gone
The lights seem somehow dimmer now
The pennies scrimped and saved
To shore up marginal gains
Pained relics, remnants of retail nostalgia
Oh W H Smith, dare I stand
Amid the shelves, once elegant
But now awry, ramshackle
Labelled with dog-eared cardboard
Corners cut, your former joy
To serve shut down, shutters stuck
In some half open, slack-jawed position
Oh W H Smith, where are you now
My memories, still so sweet, though tarnished
By today’s shambolic tactics.
Even as you betray our friendship
I hear your words and see
My own mortality:
“Can I interest you
In any of
Our special offers?”
June 2016
06-06 – On Brexit, elephants and pythons: it’s a jungle out there
On Brexit, elephants and pythons: it’s a jungle out there
Anyone struggling to see the wood for the trees in the Brexit debate? I know I have been. I bloody hate the term, for the record. Taking this complex and important series of questions down to such an inane play on words is an insult to British sensibilities and the national psyche. The same inanity is reflected in the referendum itself, which offers two choices alone: ‘in’ or ‘out’. What a dumb, stupid idea.
British citizens are faced with one of the most important decisions in their lifetimes, which will affect generations to come. The fact this is being decided on such a fundamentally binary perspective beggars belief — there is no third choice, for example, “spend the next five years planning a sound economic future and then go for that” is not on the table. Nor, as far as I can tell, any other way of gauging national opinion beyond “make your mind up, you uncultured peasants.”
The UK population is, by the nature of this incompetently presented, badly managed and opportunistically twisted ‘decision’, being right royally done over; whatever the result, the consequences will be felt for years to come. So, what is anyone to do when faced with this situation? The answer, as we have been given no other choice, is to vote. But which way? Having read a crapload of writings from both sides of the debate, I’ve distilled out a number of factors.
The global banking collapse, and the subsequent UK government focus on ‘austerity’, is having a profound effect on our national psychology. Economies remain in recession across the world. Many banks are still struggling as organisations; but meanwhile an increasing proportion of wealth is in the hands of a decreasing number of people. Such folks care little for national boundaries, nor for the continued financial challenges of the many.
The collapse is generally accepted to be the fault of the banks and the regulators, the former who took advantage of a lack of financial oversight (including hiding risk) and the latter who failed to provide regulations to prevent this. The creation of the Euro led to a financial expansion in Europe; when the financial collapse happened, many countries in the Eurozone became vulnerable — Ireland, Greece and Spain among others.
UK immigration has increased considerably over the past 15 years, to the discomfort of many whose opinions on the matter were ignored. Until 1982 net migration (people in minus people out) was negative; non-British citizens were ‘coming in’ in the tens of thousands. In 1996 this increased to over 100,000 and in 2002, the number of non-Brits in the UK increased by 350,000, the figure staying above 250,000 per year since.
In terms of the economic future, the general consensus among economists is that Brexit would be bad for Britain, if not the rest of the world. “Acrimony and rancour surrounded debates around austerity and joining the euro, but analysis from the Bank of England to the OECD to academia has all concluded that Brexit would make us economically worse off,” says a blog from the London School of Economics.
European membership costs. Figures of £350 million per week (£50 million per day) have been confirmed as dishonest, but a figure of £23 million per day (according to fullfact.org) still sounds like a lot, even if it’s less than 50p per day per head of population. The question of whether such money could be better targeted is a good one; we generally rely on economists to help us through the detail. See above point on economics.
The political shenanigans have been hugely, astonishingly inappropriate. Only recently has it become clear that members of the Conservative party are essentially treating Brexit as an internal power play. Both sides have been fear-mongering; meanwhile other voices in the UK political debate are delivering little, through either not saying much or being seen as a sideshow by the media. The overall result is a callous misuse of the UK democratic process to ply individual agendas.
So where does this leave us? Ultimately, we are comparing apples with, well, something completely different: let’s go for elephants. Immigration is absolutely the elephant in the room, as a significant proportion of the UK population are now saying ‘enough’ and mainstream political parties (from both sides) are finally listening. But confidence in politicians is at an all-time low.
As a consequence I am not at all surprised that people are wanting to leave Europe — this may appear to be a sledgehammer to crack a nut but for many, who feel they have not been listened to for over a decade, it offers a course of action which may result in being able to control immigration better. Even if the economic consequences are awful, people feel this is a risk worth taking or, simply, have given up believing what they are told.
This dynamic has been jumped upon by a small number of opportunistic politicians, who are quite comfortable with manipulating the population into thinking there is a place they will be better off. Do I believe the NHS will be better after Brexit? No, and nor does the Eurosceptic William Hague, nor John Major, who likened the NHS in the hands of Johnson, Gove and Duncan Smith to “a pet hamster in the presence of a hungry python.”
Which brings me directly to my conclusion: I don’t believe the people wanting power care too much about what might happen post-Brexit. The ultimate debating positions are about either dealing with known challenges, or taking a step into an ultra-high-risk unknown — with the risk including that the elephant, immigration, cannot be curtailed. However dirty the bathwater, we risk throwing out our future as a nation with it.
All due to a shitty, over-simplistic question on a shitty piece of paper. Damn right I am angry, and how I wish things could be different, but they aren’t. I don’t think the really powerful care too much. US firms will move their headquarters — English-speaking Ireland is no doubt rubbing its hands together, as are other European centres. Mathematically and inevitably, some will shift.
The question becomes not whether things will be worse, but how bad they will be and whether it is a price worth paying. If the debate is about costs, then quite clearly any additional cost will be more expensive than necessary. If the debate is really about immigration, then let’s tackle it head on, rather than just saying “it’s all going to be OK, don’t worry your pretty heads about it”, which seems to have been the approach in the past.
As a deeply proud Englishman and internationalist, I want to see a strong country continuing to play on the world stage. At the moment, this is true. In the future, if we remain in Europe, it will continue to be, without a shadow of a doubt; if we leave, it might be but our future will be less certain. This is not a risk I am prepared to take, to gamble the futures of my descendants on a whim, manipulated by politicians whose main interest is to gain power at any cost.
A final shout-out to the clearly charismatic but ultimately psychopathic Boris Johnson. Boris is many things, but he is above all a historian, and this is his one big chance to make a splash. He will stop — no, he is stopping at nothing to achieve it. And I will not be his puppet.
July 2016
07-01 – Can Technology Save A Post Brexit Britain?
Can Technology Save A Post Brexit Britain?
Can technology save a post-Brexit Britain?
Whatever the rights and wrongs of Britain’s choice to leave the EU, whatever the causes and whoever said what to whom, few would disagree that the immediate aftermath has been one of abject disaster. Britain currently lacks leadership and direction, even as the UK’s financial position is significantly undermined. Despite a rallying stock market, a weak pound and a lowered credit rating have led to a postponement of any recovery predictions. But what’s the way forward? Please bear with me as I set out some context, first on my own position on the issue.
For the record, I voted to stay in the EU. Not because leaving was necessarily a bad idea – I wouldn’t have chosen it unless every other option had been ruled out, but I was grudgingly open to being convinced otherwise. As it happens, such debates never happened. The arguments used to promote leaving had less to do with creating a more sustainable position for growth based on any kind of structured plan, and more to do with playing to baser human instincts such as immigration fears and loss of control. While these concerns are founded on real issues, their relentless focus above other topics resulted in a skewed, divisive and sometimes hate-breeding campaign which has pitched the nation against itself.
The country has, to all intents and purposes, been torn in two. But perhaps it is only in Britain that a civil war could unfold on quite so civil terms. We have much to be thankful for, not least that even given the state we are now in, we have not resorted to the kinds of pan-national aggression that have scarred so many other nations. Following the death of MP Jo Cox, Nigel Farage was rightly vilified for claiming the country had won its independence without a bullet being fired. But having recently returned from Croatia, where houses still show damage from bullets and mortar shells, and village shops sell votive candles so locals can keep a light alive on mass graves, I recognise that worse states exist than our current one.
In terms of what the future holds, nobody knows. Some are taking each announced loss of investment or warning message as a sign that the whole thing’s a disaster; others are picking on every positive sign as unmitigated proof that the nation made the right choice. Such “I told you so” posturing is inevitable and needs to be worked through, as people come to terms with what they believe they have lost or justify why they made the right choice. What Britain cannot do, however, is wallow, nor snipe. Neither, as the melting pot that it already is, can Britain stand by and allow racial hatred to gain ground. The clue’s in the name United Kingdom.
So, what to do? The first point to note is the bigger picture beyond the UK and Europe. Economic globalisation and its consequences have been referenced by both camps as reasons why we should stay ‘in’ or get ‘out’, both suggesting we will be in a stronger position. Receiving less interest is the impact that technology is already having on how we relate to each other internationally. Like it or not, the stage is both individual and global: to put it bluntly, ‘they’ don’t have to come over ‘here’ to steal anyone’s jobs.
Technology has not finished its relentless progress, and neither will it slow. I have written before that we have not yet seen IT achieve its full potential; far from it, as we are on the brink of the next wave of progress. The Internet of Things, big data analytics and machine learning, robotics and 3D printing, virtual and augmented reality, mobile devices and giant displays, autonomous vehicles and nanomachines are on a convergence path, the consequences of which will be profound in terms of how we function as individuals, in business and as a society. While I have written elsewhere that this doesn’t mean the end of employment, it does bring a raft of opportunities and threats, in equal measure.
Against this background and in order to progress, we should recognise that nations have rebuilt themselves from far weaker positions, and continue to do so. I have done my share of berating our leaders for a lack of a plan, but I would suggest even this isn’t the most significant need right now. Rather, Britain needs a vision, in which we build on our strengths and meet the whole world on terms that we set out. Impossible, I hear some say. Yes, trade agreements are hard (so I understand), but once again, that’s not the starting point. First off, we have to be very clear what we have to offer. The answer is not to try to sell our current portfolio of goods and services at a lower cost than other nations: that way lies a race to the bottom, one in which we already lag behind.
Rather than setting ourselves up to fight a losing battle on cost, it is value-based goods and services that hold the keys to the UK’s future success. In practice this means playing to the nation’s core strengths. The reason is simple. Through technology, a large part of the global economy has become platform-based – that is to say, successful companies are built on the basis of shared technology platforms and networked ecosystems of suppliers, rather than trying to do all the hard work themselves. If you want to see the maths, note that 40 years ago 85% of a company’s balance sheet was in ‘tangible’ assets, i.e. stuff the company owned, whereas today that figure is closer to 15%.
Successful companies, and by extension, successful nations, will be those who can make best use of the wealth of technology now available to them. Of course it’s not just about technology by itself – but just as everything is becoming enabled by technology, from people to power plants, so this means understanding what this enablement means, and knowing how to drive out benefits. This boils down to three priority areas: data centricity, resource management and user experience.
The first, data centricity. We are in the digital age, which translates as splashing around in lakes and oceans of data generated by everything we do. Harnessing the power of this data is a challenge and requires substantial changes from how we have traditionally worked. But organisations and our society as a whole can get ahead of international competition by simply being better at it. The answer is to invest in data science and the technologies that support it. As an aside, there was little evidence of either Remain or Leave camps making strategic use of data in their campaigns, either through inexperience or because the timescales were too short to do much about it.
Resource management is about applying that data to make processes both efficient (cheaper) and effective (deliver more). Its application is broad – from retail supply chains and engineering spares management, to allocation of people to jobs and reducing travel times, and indeed the sharing economy characterised by AirBnB and Uber. Algorithmically, resource management is not without its challenges but with the Internet of Things incorporating sensors in the mix, we have a lot more data to play with. There’s even a place for manufacturing, as an automated decision could be made weighing the cost of manufacturing (increasingly, 3D printing) an item in one country and shipping it, against the transit times and other costs of creating the item locally.
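To make that last point concrete, here is a deliberately toy sketch of such a make-or-ship decision. Every figure, and the idea of pricing delay at a flat daily rate, is an assumption of mine for illustration only, not a recommendation.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    unit_cost: float                  # cost to make one item, in £
    shipping_cost: float              # transport cost per item, in £
    lead_time_days: float             # days until the item is usable
    delay_cost_per_day: float = 40.0  # assumed cost of waiting, £/day

    def total_cost(self) -> float:
        """Crude total: production + shipping + a penalty for waiting."""
        return (self.unit_cost + self.shipping_cost
                + self.lead_time_days * self.delay_cost_per_day)

# Illustrative figures only
options = [
    Option("Manufacture overseas and ship", unit_cost=120, shipping_cost=35, lead_time_days=14),
    Option("3D print locally", unit_cost=310, shipping_cost=0, lead_time_days=1),
]

best = min(options, key=Option.total_cost)
print(f"Cheapest overall: {best.name} at £{best.total_cost():.0f}")
```

The interesting part is not the arithmetic but that, fed with live sensor and logistics data, a decision like this could be taken automatically, item by item.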
The third area is user experience. Web designers consider this to be about navigation around a site but it is becoming increasingly important due to the recognition that if a person can’t immediately engage with a technology interface, the result is inefficiency, cost and risk (consider the user experience aspects of a surgeon undertaking a remote operation, for example).
So, what’s all this got to do with the UK? Of course, any nation could respond to some of these needs, and indeed, many nations are doing so. While we can indeed benefit from an increased focus on technology, the answer is not to simply “do more of that stuff” but to look at Great Britain’s strengths and see what it can bring to the party as an independent nation in a technology-oriented world.
Of primary importance is its creativity. The UK is recognised as one of the most creative countries in the world, leading in terms of marketing and design, music and the arts, and indeed, science and technology. Such skills generate some of our finest exports, not least the financial sector, which is essentially a hub for smart people doing clever things with software. Other nations may be as creative in certain aspects, but few have had such a broad creative impact. Now is the time to invest in new companies, to educate people in the skills they need to be creative in the platform economy.
Next, we have a diligent and determined workforce, which has in the past fifty years been largely ignored or treated as an annoyance. When the pits were closed, thousands of people were left without a future; when companies moved call centres offshore, the lack of future was sustained (and is doubtless a factor in the nothing-to-lose attitude driving the waves of anti-immigration feeling we are currently experiencing). The fact that foreign firms are able to run manufacturing plants in the UK suggests that the problem lies in a failure of local management, not staff. As the next wave of technology-enabled manufacturing arrives at the shore, let us create organisations that put the interests of our island front and centre, working with international partners to the benefit of all.
While saying anything about ‘putting British interests first’ may have been tarnished by the bad apples in Granny Smith’s barrel, let’s be clear. Innovation requires diversity of people, of classes, of genders and of cultures. I have never understood why diversity is seen as something to be avoided, or forced onto an unwilling environment. In the new national order, we need every kind of brain working to a collective vision. Yes, we need debate and we need disagreement, because innovation and progress is never a tidy business. Yes, people may feel threatened and their fears need to be addressed. But above all we need to embrace the diverse portfolio of people and skills we have amassed. Doing so makes the many stronger, not weaker.
Finally, we should also consider our strengths as an independent sovereign nation, particularly in terms of how it could offer a safe haven for data, not for the less salubrious elements of our global society but to protect the privacy of its own, and other nations’, citizens. I have written before about how the new European data protection law was already behind the curve in terms of where data is going. Keeping up with such changes requires a faster legal system than European regulatory speed enables.
So, there we have it. In summary, our island nation may feel like it’s been cast adrift in the Atlantic, and in many ways it has, but we are far from a position of abject collapse. Perhaps the biggest risk we now face is recession, which is driven by uncertainty and a loss of confidence, so even if we have already shot ourselves in one foot, let’s not do the same in the other one. Keep in mind that the media will continue to make hay out of pointing out the bad, as this is what sells papers, but by joining in we damage nobody but ourselves. Of course, we can choose to bicker and crow about how right we are, or how wrong everyone else is. But consider, every breath we take doing so is starving energy from the nation’s future.
So let’s get out of the blame game for the past and present, and start taking shared responsibility. Britain will only become stronger if the nation rallies around a shared vision for economic and societal success. Of course, the above is only a partial view, coming from the standpoint of technology. Of course it will not be correct in every detail, that will take time to hammer out. But in summary we need to focus on where we add value in the digitally enabled world, we need to educate all of our people to deliver on that vision, and we need to invest in, and strive towards making it happen. As a diverse, multi-faceted and highly capable nation. Together.
October 2016
10-28 – My first ever information management strategy
My first ever information management strategy
I have to admit I was quite excited to have been asked. After all, it was about a topic that sounded very grand, and it had the word ‘Strategy’ in it. It was 1996 and I was 30 years old — with enough technical experience to have a stab at most things without breaking something — but not yet versed in the ethereal world of high-level management consulting. The, frankly, older consultants exhibited such calm, such… gravitas. And now was my chance to take a seat at the top table, to listen, perchance to dine with some of the smartest people I was likely to meet.
And so, I laboured. I interviewed everyone I could find. I filled a wall with sticky notes, I drew mind maps. I held brainstorming sessions and read everything I could find about the topic. I drafted a document, I went through the company standard first, second and third level review. The higher thinkers read what I had written, and they found it good. I was overwhelmed with joy, my proudest career moment since that meeting in Berlin when I had held the room in the palm of my hand.
Not long after, I left the company so I never found out if it was implemented. I doubt it — shortly after that, the firm merged with another, and was acquired by a third. No doubt my efforts were lost in the noise, after all it was custom built for the original company.
Or was it, indeed, custom built? A few years later I was reminiscing about my great achievement. I had defined an information architecture, with meta-data enabling the company’s information assets to be structured and organised. I had defined an indexing mechanism, ensuring that any such assets could be catalogued and found. And I had defined a management process, with the roles and responsibilities required to keep everything ticking over.
In other words, I had ‘invented’… the library, as used from the ancient Egyptians to the present day, and no doubt beyond. As this realisation dawned I felt initially horrified, then a strange peace descended. What lessons? That despite ending up with what might be seen as an ‘architectural pattern’, it was important to have worked through the process.
Oh, and that sometimes there are no lessons, only experience.
November 2016
11-01 – Today's Tech Can't Beat My Stupid Email Response
Today’s Tech Can’t Beat My Stupid Email Response
Today’s tech can’t beat my stupid email response
We’ve all seen the film. The highly experienced, perhaps a bit older person, usually a guy (though that could be my selective memory), gets through eighty percent of the action with barely a scratch. A short time before the grand finale, however, he lets his guard down for a brief moment and the knife goes in, the sniper picks him off, he falls to his doom, leaving the hero or heroine to finish whatever they set out to do.
Which is exactly how I felt when I actually responded to one of those ‘let us know what you thought’ emails. Not like a hero or indeed, heroine, but the hapless bloke. I can’t believe I just did that, I thought to myself. And now I’m dealing with the consequences.
It seemed innocuous enough. I’d just got off a flight with a national carrier — British Airways, though it makes no difference — when I received an email asking me about the experience, which had been very good. “Win some airline points, travel the world,” it said, or words to that effect. I was tempted. What have I to lose, I thought.
So, I filled in the five or so questions and hit ‘send’. Then ‘the thing’ happened, the clever bit, the moment that we should all be wary of. This time, I wasn’t — my guard was well and truly down. “To be entered into the competition, please let us know which partner offers you would be interested in,” I was asked, or words to that effect. I unchecked some but, stupidly, left some checked.
Hitting ‘send’ again was that same moment where the guy in the film put his head above the parapet, realised he’d left his notebook in the alien-filled lab, reached to find he’d run out of arrows. I only had myself to blame for the onslaught of email which then followed. Whatever ‘opt-out’ settings I’d chosen, religiously, on past T&C pages were wiped away in an instant, offering my details to the general market.
Since then, I’ve been beset by ‘offers’. It might not be so bad, were the companies buying my details actual purveyors of real products, but this does not prove to be the case. I’ve been offered iPhones galore, and some people are desperately trying to get hold of me about a string of jobs. They’re all scams.
No doubt my details were put onto a list, which was then sold to an aggregator of lists, which then sold to whoever was interested at a very low overall cost. Email is virtually free to send, and the model doesn’t care about people ignoring the messages — it’s the handful that actually ‘click’ on the offers that make it all worthwhile.
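A quick back-of-the-envelope sum shows why the model works even when almost everyone ignores the message. The figures below are my own illustrative assumptions, not measured data.

```python
# Illustrative spam economics: every figure is an assumption
emails_sent = 1_000_000
cost_per_email = 0.00001      # near-zero sending cost, in £
click_rate = 0.0005           # one recipient in 2,000 clicks
revenue_per_click = 2.50      # average value extracted per click, in £

cost = emails_sent * cost_per_email
revenue = emails_sent * click_rate * revenue_per_click
print(f"Cost £{cost:.2f}, revenue £{revenue:.2f}, profit £{revenue - cost:.2f}")
# Cost £10.00, revenue £1250.00, profit £1240.00
```

At margins like that, my ignoring 99.95% of the messages is simply the cost of doing business.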
Note: clicking on ‘unsubscribe’ doesn’t de-authorise the use of the address, it simply indicates there is a real person at the other end of it. Unsubscribe and expect the volume of requests to double. Fortunately, in this digital world the virtual representation of myself (as represented by that particular email) can be killed off, albeit painfully as I re-connect to the hundreds of services that use it.
So, yes, I can’t believe I was so dumb. But it does beg the question about the kind of environment we have created for ourselves, through choice or simply because this is how things have turned out. Black Mirror may be doing a good job of highlighting future dystopias but we have created one of our own already and, in the absence of any kind of code of conduct, the big corporations are active participants.
What’s the alternative? In my own local experience, I have helped the less technically savvy of my village deal with being scammed, or with finding their computer has become a node in a botnet, on more than one occasion. But, as stories like this indicate, we can’t function in a way that requires us to operate a ‘virtual clean room’ in which everything is kept 100% clean, or it is all tarnished.
Like it or not, this is the binary choice we currently face and which, sadly, new legislation such as GDPR does little to protect against. Initiatives such as the Web We Want are a start but even they struggle to come up with any way to overcome this simple, yet frequent scenario: “What if the person actually agrees to let the bad stuff in, then realises what a stupid error it was?”
The answer may lie in a combination of better use of metadata, 2-factor authentication or indeed a general move to a Public Key Infrastructure. It will require corporations to accept they can’t treat our data as their own, which is a challenge, but by far the bigger issue is how to get the population of the world to do the sensible thing rather than the stupid one.
While all is not lost, I hold myself up as a case in point and fear that we may be stuck with what we have already created. Now, excuse me, I have another thirty emails to delete…
11-04 – On “monetisation of data” and other phrases that should be banned
On “monetisation of data” and other phrases that should be banned
I’ve been thinking a lot about monetisation of data. Horrible word I know, but it keeps appearing in corporate circles, generally in the context of what companies want to achieve with all that data they are producing, or which is now available to them from external sources — social media and so on. At a conference I attended recently, the conclusion frequently reached was that monetisation of data — that is, making money from it — is becoming a high-priority goal for any business.
Why ‘monetisation’? Is it just that it is a nicer word than ‘profiteering’? Or possibly because it sounds slightly like the ultimate goal presented by Maslow’s pyramid of needs, ‘self-actualisation’? Although the former deserves scrutiny (which will be the subject for another day), I’m mostly with the latter. The fact is that organisations know they can do better if they use information better, and doing better primarily means either making more or losing less money.
So, yes, monetisation it is. For some industries, turning data into so-called ‘business value’ is nothing new — anything to do with finance, for a start. Accounting, for all its complexity, is ultimately about managing tables of figures and their relationships; banks are purveyors of mathematical transactions. Healthcare, engineering and other scientific disciplines also have a substantial data element. Retail supply chains have long been driven by data, as are manufacturing systems and utilities plants.
What’s changed the game for all companies (which is why all businesses are ‘going digital’) is that customer-related data has engulfed business strategy, even as all other data sources proliferate. Marketing used to be a relatively isolated set of activities, feeding the other parts of the business on a regular basis with information and potentially sales leads. Today, however, such information has become incredibly accessible, or so it appears, with customer expectations changing equally quickly.
And meanwhile, in principle at least, you can now know exactly how a business is functioning to the n’th degree, second by second. Through a myriad of sensors, via a multitude of sources, you can check everything from the humidity levels in a container crossing the ocean to the stress levels on a paving slab. Nobody really knows how much data is enough, so the response can be to keep adding to the pile of sensors. From one client I have heard of a tendency to plan the right number of sensors in advance, but this is still emerging best practice.
The result of all this data, both external and internal, is that many companies have ended up paralysed. Even senior corporate strategists flounder like dwarves in Smaug’s cave, arms already laden with the gold nuggets of information they see before them but unable to do much more than grab handfuls of it at a time. The opportunity is tantalising; meanwhile other, slicker organisations simply wade in with buckets and take what they want. What an opportunity, what audacity!
Is it any wonder, then, that consulting firms and computer companies are promoting solutions to this challenge? In honesty, it is not that they really have a solution to the challenge as a whole. For all the machine learning and information management frameworks, the open APIs, agile delivery and DevOps strategies, nobody has a magic sieve that can separate useful data from the shrapnel. Instead, such advice usually turns to scoping — what beneficial goal are you trying to achieve, and what specific information do you need to do so?
Monetisation is therefore a shorthand term for not wanting the opposite, i.e. not wanting to remain in a state of uncertainty where everyone and their dog seem to be having an easier time of making sense of it all, and potentially cashing in, than you are. The valuations of some of the digitally enabled startups may be vastly inflated, but who wouldn’t want some of that? Find me a company that says, “No, sir, please don’t give me that vastly inflated valuation.” But just as not all singers are destined to be pop stars, the majority of companies need to set more realistic goals.
There’s a further irony. Money is itself just a form of data, a (frequently poor) measure of value of anything, derived to simplify exchanges of goods and services. The fact we have worked out how to exchange the measure itself (as illustrated by currency) is an indicator of how complicated this can get. In other words, even if an organisation successfully achieves ‘monetisation’, it will have managed to convert a partial representation of reality into an arbitrary estimate of what that might be worth.
If that sounds vague, it’s because it is. The real trouble with the monetisation of data isn’t that the idea is grubby. It’s that it takes companies away from the things they have been able to understand, their products and services, and heads them en masse towards a cave where all that glitters is not gold. It may well be a necessary step for companies to consider how they can get more value out of their data, just as they are looking at services (another horrible word — ‘serviceisation’). But in doing so they should be careful not to lose touch with what they were setting out to do in the first place.
11-16 – How to be smart in a post-truth world
How to be smart in a post-truth world
What the heck just happened? Events over recent months in the UK and USA, two nations divided by a common language, have led to a swathe of commentary expressing disbelief and anger alongside the jubilation and enactment. “They were all stupid,” says one side; “Stop moaning and get over it, losers,” says the other. No doubt the name calling will continue.
But what does any of this mean for an industry whose central purpose is to collate, process and deliver information? In the midst of it all lies a common theme, that of seeing data and facts as only one element of the debate, and an apparently low-priority element to boot. “People have had enough of experts,” said Brexit campaigner Michael Gove MP, even as his side laid out its own, fact-light agenda.
“Let’s give £350 million to the National Health Service,” they said, a promise Brexiters had no intention of keeping. They couldn’t: it wasn’t ever up to them, even if such an idea was even possible. Remainers had their own “Project Fear” meanwhile: a string of catastrophic consequences that would inevitably occur should the vote go against them. Many won’t come to pass, to the quiet delight of everybody.
As it turns out, neither ‘facts’ nor ‘promises’ mattered in Brexit; nor did they have much of a place in the Presidential election. Voters really didn’t appear to care about the potential negative consequences, about historical misdemeanours, about promises that could never be kept (cf. parts of the wall with Mexico becoming a fence — “I’m very good at this, it’s called construction,” said Trump).
It turns out that Michael Gove was right. People really had had enough of experts to the extent that they would, in both cases, appear to vote for the unknown rather than the known. “I think people just wanted to see what would happen,” a taxi driver said to me a few days ago. I’m not sure he is right; rather, I believe that we, as a species, have proved ourselves unable to appreciate the much bigger picture of what has been going on.
Over time we will unpick the reasons why people voted one way or another, and perhaps arrive at some conclusions: the data about voting attitudes, as well as reviewing historical factors through past decades, will no doubt reveal some truths. But if there is anything we can take from the current situation, it is that our current analytical abilities cannot necessarily reveal future behaviours.
Addressing this deeper truth is of fundamental importance. If I had to put my money anywhere, it is that the models we use have completely failed to grasp the geo-political and psychological complexity of the situation. Condemning voters of either side as ‘stupid’ is symptomatic of this failure: they must be stupid, because they have done the inexplicable, right? Wrong, it is our ability to explain what is going on that is lacking, not ‘their’ decision making skills.
If we did understand such things at a deeper level we might be able to see more clearly the causes of current voter behaviours and indeed, do something about them. It may be for example that the seeds of recent events were sown back in the 80’s and 90’s, way before social media (which is taking the brunt of criticism) was a ‘thing’. Even armed with such an understanding, however, we might still struggle to predict the unexpected before it happens again.
Why? Due to the very characteristics that make us human in the first place. We are quick to jump to conclusions; we have agendas; we prefer to act on less information rather than waiting for a complete picture, particularly if it might go against what we want to do. We hunger for control, we often act in ways against our longer-term interests. And, frequently, we seek to justify our actions and positions using data that fits with our views, ignoring all that does not.
We know all of this. Lies, damn lies and statistics, we say, as if the data is the problem, rather than our propensity to interpret it selectively. The pollsters got it so, so wrong, yet still we use them. And while they are global and virtual, the echo chambers we inhabit today are no different to the past. So we share information that reflects our views, suppressing the “clearly biased” views of others. It is ironic that we even have a very human notion — of irony — to explain this phenomenon.
Meanwhile however, we continue to build information systems as if data holds some hallowed, incorruptible place in our lives. It doesn’t: we only have to look at how the oh-so-open Twitter has been castigated for harbouring trolls, or Facebook’s fake news issues, to see how vulnerable data can be to human behaviour. The models we build into systems design are equally subject to bias; the architectures assume people will be good first, and then are patched in response to the rediscovery that they are not.
Right now, we are seeing a renewed wave of interest in Artificial Intelligence, the latest attempt to create algorithms that might unlock the secrets contained in the mountains of data we create. Such algorithms will not deliver on their promise, not while they are controlled by human beings whose desires to be right are so strong they are prepared to ignore even the most self-evident of facts. And that means all of us. A failure to understand this will continue us on a path to inadequate understanding and denial of the real truth that lies beneath.
December 2016
12-08 – Dave and Dave, and Dave, and Alexis, and other tales of AWS serendipity
Dave and Dave, and Dave, and Alexis, and other tales of AWS serendipity
#cloudbeers is an impromptu gathering of people that spend more time than they would necessarily like at tech conferences; and have an appreciation for the outputs of the fermentation process. There’s a Facebook group and the occasional tweet, usually along the lines of “At [conference name]. Anyone fancy a beer?” You get the picture.
“At AWS re:Invent. Anyone fancy a beer?” asked a guy called Dave on the #cloudbeers Facebook group last week. I’d never met Dave but I was at AWS and, indeed, I fancied a beer so I responded. I said I wouldn’t be available until later because I had some meetings — I actually wanted to walk the expo floor, get a few demos in, pick up a t-shirt perhaps. Which was fine, we arranged to meet at 7.
So off I went, becoming just another of the 30,000-strong crowd of delegates, partners and staff attending the event. Easy to get lost in the throng, lose track of time. I went to a few stalls, eventually alighting on Intel’s partner stand where I could see a number of demos (and get a free notepad). One was a presentation by a guy called Dave. I couldn’t spend too much time there; after all, I had plans for the evening.
So I thanked him, headed upstairs to change and then to the bar. To say it was busy would be an understatement — the various watering holes were holding either sponsored events or a ‘free beer’ pub crawl which actually meant hour-long queues. I sent a message to Dave on Facebook suggesting another venue and… surely not… but that face looks awfully familiar…
Ten minutes later we’d met, and yes, it was the same Dave. I’d already bumped into old friends and colleagues but this was something else. We laughed heartily, drank beer and wondered just how likely it was that I told Dave I couldn’t see him later because I was meeting the same guy, who I then had to leave because I was meeting the same guy.
A couple of days later, as all good things come to an end, I checked out of the hotel and had my boarding pass printed. To my surprise the flight was earlier than I remembered — 3pm rather than the evening — so I rushed to a meeting and then headed straight to the nearest taxi rank, at the back of the Venetian conference centre. Nobody waiting ahead, so I headed for the first taxi.
As I waved, I saw a guy arrive behind me. “Want to share?” I asked. “Sure,” he said, in an English accent. As we got in the cab, I turned to him and my eyes narrowed. “Haven’t we met?” I asked. Then I realised where. I had hosted a panel of retail IT execs for Rackspace a while back, and this person was one of the panellists. His name was Dave.
So of course we chatted, and when we arrived at the airport we headed through security and sat together in the main area, as neither of us flew often enough to have lounge access. As we sat, I saw my old friend James, who certainly flew enough to qualify, walk through with a couple of peers. I didn’t clock who they were as I rushed up to James to check whether Dave and I could be his, or indeed their, guests.
“Sure,” said James laconically, after checking with his co-travellers. I rushed back to get Dave, and it was only as I returned and caught my breath that I looked at who they were. “Haven’t we met?” I asked one. Then I realised where. I had hosted a panel of open source execs for Rackspace a while back, and this person was one of the panellists. His name wasn’t Dave, but it was Alexis.
There we have it. In the space of 3 days I had blown out Dave to meet Dave, then met Dave, then met Alexis who I had originally met in the same way I had met Dave. All in what is already the strangest place on the planet. Absolute coincidence, of course. And I remain completely rational about the whole thing.
2017
Posts from 2017.
February 2017
02-17 – From Trolls To Fake News Why Doesn't Tech Go With Wisdom?
From Trolls To Fake News Why Doesn’t Tech Go With Wisdom?
Why is there no wisdom in technology?
It keeps happening, doesn’t it? Right now we have fake news, but this is just yet another example of how well-meant applications of technology have the ability to plunge us into yet another state of chaos — à la “we are all citizen journalists now” becoming “Oh my goodness, who clicks on this rubbish!” Fake news is just the latest in a series, from internet trolling and cyberbullying to overbearing surveillance and data misuse.
Such situations beg the question, when will we learn? Learning is something technology is very good at, we are told. Indeed, right now we are in the machine learning age, another step along the way to the singularity when we can let our digitally enhanced selves merge with the greater, artificial intelligence we are in the process of creating. Techno-nirvana is only a decade or so away.
Meanwhile, our enterprises can’t get enough of the outputs of such algorithmic capability. Big data begets even bigger insight, a comprehensive dashboard rather than a rear-view mirror, report-based view, for decision makers at all levels. We are being empowered, enabled, our capabilities enhanced by the wealth of information at our fingertips. I paraphrase but that’s the marketing pitch, pretty much.
Call me an old-school cynic but I’m having a job reconciling what I actually see happening with these rosy-glow visions. In my struggle to marry the two, I have realised one element that is lacking, its very notion derived from aeons of thinking about a subject. The word we use to describe such deep consideration and its consequences is ‘wisdom’.
Marc Prensky, one of the first to use the term ‘digital wisdom’, acknowledged in his essay on the subject that wisdom is not an easy term to define: “The Oxford English Dictionary suggests that wisdom’s main component is judgment, referring to the “Capacity of judging rightly in matters relating to life and conduct, soundness of judgment in the choice of means and ends.” Philosopher Robert Nozick suggests that wisdom lies in knowing what is important; other definitions see wisdom as the ability to solve problems—what Aristotle called “practical wisdom”. Some definitions—although not all—attribute to wisdom a moral component, locating wisdom in the ability to discern the “right” or “healthy” thing to do.”
While we can debate what wisdom means until the cows come home, less controversial is the notion that our continued use of technology lacks its input. We can cite specific examples, like Donald Trump’s shoot-from-the-hip use of Twitter, or broader themes such as the apparent inability of most new developments to take cybersecurity into account, but the fact remains that ‘soundness of judgement’, ‘knowing what is important’ and ‘the ability to discern’ are notably uncommon.
This is, perhaps, fair enough. After all, our ability to communicate across a distance, to capture and store large quantities of information, to then manipulate, analyse and report on it is little more than a century old. The lessons we are learning — perhaps repeatedly — are still bedding in, each major crisis of techno-stupidity offering only one element of a much broader, still-emerging picture.
So, we still use social tools even though we wonder about the implications for privacy or psychology. We continue to expect music for free even as we abhor celebrity culture. We leave our virtual doors and windows open, coping with identity theft and fraud, but keep our back doors locked due to media-hyped and irrational fears. We do so because we lack the mechanisms, hard-wired into our DNA and reinforced through stories, to tell us to do otherwise.
Just as we lack ancient wisdom, so we lack the ancient wise. Technology corporations may have technical fellows, distinguished engineers and chief scientists, analyst and consulting have their principals and partners, but no group of seasoned, gravitas-bearing people professes to keep the keys of how technology can be harnessed for good; nor would they be likely to be listened to, neither by kings nor knaves, if they did.
One day such a group may evolve; one day we will know how to listen to our digital consciences, the quiet voices within all of us, retuned to our newly data-driven lifestyles. But for now we remain in the eye of the brainstorm, a tumult in which we confirm digital right from wrong through hindsight. If we just wait for evolution to take its course, however, we condemn ourselves to repeating the same cycle in which the fittest, not the best or most moral, survive.
October 2017
10-25 – A New Page
A New Page
So, what’s all this about? The reason for this Facebook page is, quite simply, that I am finding it increasingly difficult to know where to put stuff about what I’m doing, particularly given that it will not be of interest to the majority at wherever it ends up. While it feels like a vanity page, at the same time, it’s seriously helping me get my stuff together.
By way of illustration, here’s a few of the bigger projects I have on at the moment.
First a musical, called Super Awesome, which is a rags-to-armageddon cross between Britannia Hospital, Silicon Valley and Glee. I’ve written the lyrics (with an idea about tunes), I’m now writing it through again in iambic hexameter for the sheer heck of it. Then I’ll flesh out the music, possibly in collaboration as I always seem to write the same tune. Once that’s all done, well, I don’t know, but we’ll have a musical score.
A technology book, currently called Smart Shift, which traces the history of computing from, well, a jolly long time ago to the present day and looks at the impact it has on our society and culture. I actually finished writing it last year, just after the Panama Papers but before Fake News and Trump, all of which are symptoms of this techno-rapture we now inhabit. I’m getting some help pulling the final pieces together, after which I will either find a publisher or self-publish some time in 2018.
The Devil’s Violinist, a novel about Paganini who (I maintain) was one of Europe’s first rock musicians. His heyday was 1820-1830, occupying a post-Napoleonic Europe that was for the first time relatively free of banditry and the related hazards of travel. Suddenly artists could visit far-flung cities, as could their entourages and hangers-on, tour managers and supporting acts. Paganini had an English secretary who wrote a short book about his experiences and wished he could write more… this is currently at 60,000 words, third draft, there will be a few more before it is anywhere near ready but it’s a great set of tales.
And finally, I’m currently putting all available time I have into a… actually, I’m not going to tell you but it is going to be epic, and bigger than anything I’ve tackled before. It’s fiction. Watch This Space. Or indeed, watch this space, all will unfold.
Plenty to keep anyone busy, alongside various short stories, poems, lyrics and songs (I’ve been trying to do one a day for the past few weeks), podcasts and other attempts at getting stuff out there. I’ll be talking about all of these things as and when they appear, I very much value your feedback (good or bad) as otherwise I’m a tree in a forest, not knowing which way to fall.
So thank you, brave people, for liking my page — all 29 of you at the time of counting. I hope I will not disappoint, and do let me know what you think!
10-26 – Ripples in the Pool
Ripples in the Pool
That pack of cigarettes
Ripples in the pool
A stone, wrapped with regrets
Ripples in the pool
Proving who is the best
Ripples in the pool
Memories laid to rest
Ripples in the pool
Reeling in the cast
Ripples in the pool
Breaking with the past
Ripples in the pool
10-31 – Escalator Stories
Escalator Stories
I go to London about once a week, travelling from a rural reality to this strange, incredibly vibrant environment (I know, for most, that will be the norm and I am the exception). After a slow build on the train, climbing onto the platform is like stepping into a scene. People coming from every direction, going about their business with brows furrowed in determination. It’s too easy to become part of the flow, but I often wonder what’s going on in the lives of each person I pass. Chances are that each will have their fair share of trauma. But on we all go.
Her mother’s dying, not long now
They’ve kept it from the kids
But worrying won’t pay the bills
An escalator story
They got him on the way from school
Threw his bags in a ditch
He cried but couldn’t tell his mum
An escalator story
Three bottles empty, on the shelf
How did it come to this
What would they do if they found out
An escalator story
Drugs can’t disguise the chronic pain
Getting up is the worst
Next appointment, months away
An escalator story
November 2017
11-15 – There's a Spider in the Kitchen
There’s a Spider in the Kitchen
There’s a spider in the kitchen
I don’t know what to do
There’s a spider in the kitchen
And a mantis in the loo
There’s a scorpion in the bedroom
It gave me quite a scare
There’s a scorpion in the bedroom
And crickets everywhere
There’s stick insects in the cupboard
But no apology
Because my Dad is into
Entomology
11-22 – A New Start
A New Start
Take me to the lands beyond the dark
Where sunlight burns the shadows from the earth
Take me through the slumbers of your soul
Where dreamscapes rise and crumbling mountains fall
Paint me places where there is no pain
Where neither sadness nor confusion reign
Where time has slowed
And only clear blue skies remain
Of storms long past and waters flowed
Pull me from a clear and tranquil sea
The waters warm and buoyed, with salt
Release me from the flotsam in my heart
Into a gentle nothing, a new start
2018
Posts from 2018.
January 2018
01-05 – January 5 2018. The Yin and Yang of Innovation and Governance
January 5 2018. The Yin and Yang of Innovation and Governance
These are interesting times – whatever your political affiliations or wherever in the world you might be. In this context, technology is a two-edged sword — it holds both great promise and enormous risk. We can choose to be evangelists or doom-mongers, or we can simply recognise this dichotomy: for every healthcare breakthrough, there will be a fake news, and so on.
It was probably ever thus — one can imagine the dawn of the iron age, when somebody chose to make a sword even as somebody else made a ploughshare. With each breakthrough comes a breakdown, an opportunity to exploit as well as enhance, and yet somehow we are still here; I remain optimistic that humanity as a whole will prevail, whatever the short-term challenges.
We don’t always make it easy for ourselves. Older companies struggle with innovation for a thousand reasons, leaving gaps for others to fill to sometimes dramatic effect. And meanwhile, our legal systems remain behind the curve, their multi-year, consensus-driven models rendered hopelessly inadequate by the pace of change. And technology is so complex, it can raise unexpected and massive challenges (such as the latest Meltdown and Spectre security flaws in computer chips).
To wit, this bulletin. As I write this, I am reminded of Alistair Cooke’s Letters From America, a weekly news broadcast which ran from 1946 to 2004. Cooke was always the observer, his role to enlighten. I stand more chance of achieving the latter than I do matching his longevity: he died at 96, but I would be 110 by the time I finished if I kept going that long. I can only hope medical science has a few tricks up its sleeve.
So, what’s news?
2018 Predictions
I wish I’d thought of Kai Stinchcombe’s tagline on Medium, “I’m whatever the opposite of a futurist is.” I recently documented my top 5 2018 predictions as follows:
1. GDPR will be a costly, inadequate mess. No doubt GDPR will one day be achieved, but the fact is that it is already out of date. For one, simple reason: we will consent to have our privacy even more eroded than it already is. Watch this space.
2. Artificial Intelligence will create silos of smartness. Integration work will keep us busy for the next year or so, even as learning systems evolve. Cf. this piece on the Register: Skynet it ain’t: Deep learning will not evolve into true AI, says boffin.
3. 5G will become just another expectation. But its physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.
4. Attitudes to autonomous vehicles will normalize. Attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right?
5. When Bitcoins collapse, blockchains will pervade. The principle can apply wherever the risk of fraud could also exist, which is just about everywhere. But this will take time.
6. The world will keep on turning. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.” Okay, this last one isn’t really a prediction, more an observable fact.
DevOps Automation report
Let’s be clear, it was always about what’s currently being labelled DevOps: if you can do things faster, test them and get them into production quicker, you can find out what you really need and move on. This shouldn’t be rocket science but it is very hard for us humans to get our brains around. In this article I cite Barry Boehm, originator of the spiral model — I was surprised to find that it emerged in the ’80s, not the ’70s, but no doubt prototyping approaches have existed since the invention of the wheel.
Why do (inexpert) organisations think they are secure?
This is the first in a series of “unanswered questions” — you know, the ones that nag at you but never really get tackled. In this case it was from security expert Ian Murphy — “Why do companies with little or no real security experience think they know their environment better than anyone else?” I welcome any additional questions you may have.
Extra-curricular
In other news, over the break I was involved in a Christmas single which is raising money for mental health charities (I’m also in the video); I have a weekly podcast with my mate Simon; and alongside my writing, I have fallen madly in love with the piano, so I have set two challenges for 2018: to finish a novel and to play Widor’s Toccata on the biggest church organ I can find. I’ve started a video blog on the latter if you want to follow my progress.
01-12 – January 12 2018. Digital transformation - when specificity becomes too specific
January 12 2018. Digital transformation - when specificity becomes too specific
First off, thank you to all who have engaged in conversation since I started sending out my hand-carved newsletters. I have had some long chats and big reads on GDPR and Blockchain in particular, on both of which I shall be following up, as well as feedback on layout and so on. On the specific point of GDPR, consent, opt-in and so on with regard to this very newsletter, I believe I am in good shape, given that it is only going out to people such as you, with whom I have an ongoing dialogue or have done business in the past. As ever, if you don’t want to receive this informational bulletin (I will never sell you anything), please let me know or click on the unsubscribe link at the bottom of this email.
Meanwhile, I’ve been thinking, and writing, about digital transformation. It’s not that I think it is all bunk, but rather, as the Irish adage goes, “If you want to get there, don’t start from here.” A little definition can go a long way, but sometimes it can get in the way — we’ve all been in meetings (if you haven’t, you are the lucky one) where more time is spent trying to define some term than actually getting on with making things happen. A case in point is digital transformation, which seems to spawn more discussion than any ‘technology trend’ of recent times. This does beg the question of whether something can really be a trend if nobody can agree what it is… but that’s for another day.
As I send this out, I wonder if I should be commenting on the Meltdown and Spectre security flaws. I’m not sure I can add much to what has already been said: people are patching their systems like crazy; you should update your mobile device when requested; and everything based on Intel chips will run slower for a while, until they are replaced or someone comes up with some snazzy firmware tweak (which will mean more patching). Otherwise, the world will continue to turn.
If you’re looking to ‘do’ digital transformation, read this first
Meanwhile, I’m not sure there’s any such thing as digital transformation - as in, you can’t just walk into WalMart and buy it; neither is it an architecture, nor an approach, nor even a philosophy. However, it’s certainly got people talking. I set out my reasons why it isn’t a thing here: in summary, terminology matters not a jot but the propensity to change is fundamental:
1. It’s all about the data — the term is just an ill-considered response to what we knew anyway, that we are in the information age.
2. Technology is enabling us to do new things — to continue the Sherlock-level insight, this really is enabling breakthroughs. Who knew?
3. We tend to do the easy or cheap stuff — trouble is, these breakthroughs happen just as often because we are lazy, as driven.
4. Nobody knows what the next big thing will be — which is where the varnish starts to peel. Won’t we just have to ‘transform’ again?
5. That we are not yet “there”, nor will we ever be — which is enough to lead any strategist to a breakdown. This gig will never be done.
6. Responsiveness is the answer, however you package it — so our focus should be on the ability to change. Common sense perhaps, but it isn’t happening.
On the upside, there’ll still be plenty of jobs
A good example of the digital hype, and in particular point 4 above, is how we’re all going to be out of jobs (yes, everyone, from manual workers to lawyers, according to the University of Oxford). Here’s a summary of 10 reasons why nobody should worry about whether they will have something to do in the years to come:
- Because decisions are more than insights.
- Because we have hair, nails and teeth.
- Because we ascribe value to human interaction and care.
- Because we love craft.
- Because we value each other and the services we offer.
- Because we are smart enough to think of new things to do.
- Because complexity continues to beat computing.
- Because experience and expertise count.
- Because we see value in the value-add.
- Because the new world needs new skills.
The bottom line is that even as we automate certain manual activities, we lose neither the desire, nor the propensity for work (or indeed, value exchange between us). We have evolved such that we see work as necessary: we derive satisfaction from doing it ourselves, and sharing the fruits of our labours with others. Will jobs change? Well, yes, but how does this differ from the past 50 years?
Oh and finally, don’t even start me off on monetisation.
Extra-curricular
In other news, I’ve been getting this MailChimp thing up and running - any feedback welcome. I don’t recommend anyone looks at the latest piano vlogs (they are painful) but they are a point in time which I hope to move beyond soon! I’ve been writing a bit of poetry, largely as a way of getting the creative juices going first thing in the morning - you can check the latest on my Facebook page.
Thank you to all my subscribers. Any questions or feedback, let me know.
Until next time, Jon
February 2018
02-27 – Here it comes
Here it comes
Here it comes:
The unexpected
Burst of inspiration
Ideas tumble in, tumbling
Like waves crashing
On familiar shores
You drown in sheer joy
Of understanding
Here it comes:
The unexpected
Moment of destruction
Cold circumstances bring
Reality collapsing
Your house of cards
You feel the crushing weight
Of understanding
Here it comes:
The unexpected
Point of revelation
Randomly preserved
Memorised connections
Showing, not telling
The unexpected clarity
Of understanding
2019
Posts from 2019.
March 2019
03-05 – Putting on the oxygen mask
Putting on the oxygen mask
Nobody can ever know what it is that hits someone, that causes a change in their perspective or behaviour. It could be something which appears significant, such as a friend who was involved in a rail crash, or a motorbike accident; or it could be something that appears relatively trivial. It doesn’t matter, beyond the fact that it has taken place.
We are all weak, vulnerable, messed up. From an early age we learn coping strategies, we get on with life as being better than the alternative, but in the knowledge that it isn’t quite right; we laugh and joke, and have moments of joy and peace even as we struggle to make sense of it all. And then, at a moment in time, we decide, no. Something gives up inside us, is no longer able to keep up the appearance.
At the same time, stress. It’s impossible to know what the number is, of thoughts we can keep in our heads at any moment in time. As a race, we’re pretty good at processing information; we also have a (bad) habit of seeing ourselves as invulnerable, even as we take on more and more. We fill ourselves up, swimming in a tank of our own making, squeezing out oxygen until we leave ourselves only an inch or two of breathing space.
And still, we cope… until the point hits when we need far more headspace than we have allowed ourselves. Suddenly, and usually through some unexpected, external event, we go from a semblance of normality to a situation where we are gasping for breath, desperate. We choke, we become addled, we kick out in frustration and fear. Why is this happening to me? Why me? stops being a question and becomes a mantra.
The new situation exhibits itself differently for different people. Some get depressed, locked into their own trauma; some get angry, unable to control themselves even long after the situation has abated or gone away; some consider the option of taking themselves out of the situation, permanently. All are vulnerable, weak, as they do not have the space to process what amounts to all of life, which means they can react to even the smallest of triggers.
What’s the answer? Acceptance, ultimately, that we are not superhuman, that we have frailties that only we can deal with, that we deserve our own attention, that we have something to offer, that it all makes sense. That none of it matters, but all of it is important. And time, time to understand, to work through what may be long-standing issues. And, yes, change.
Not only does the answer often look very different to the expectation, but also, we need to create space if we are going to find it. Which means taking responsibility to stop, to fall back, to let the tank drain, to breathe clean air. To accept that each individual must first (to switch analogies) put on their own oxygen mask, if they are to help others.
But more than this. Crisis may be a problem with no time left to solve it: it may have been building up for many years, lurking, being put off. At the same time, it is an opportunity: if we, as humans, can only stop when we have no choice, then the fact we have been forcibly stopped is a gift.
The present may feel bleak, but so does a field in winter, when all has died. The field doesn’t matter; more important is the first, tender shoot of new growth, then the second, each of which extends naturally towards the light.
We can try to be selfless, we can feel our problems are not important enough, that we will still be able to employ coping strategies just like we used to. The first step is to recognise that the moment has passed, and then, if that is the case, to make a decision: whether we are important enough to put first, not in an indulgent way but because, ultimately, we are all we have.
This journey is unique to everyone, but the pattern is not. From the moment of crisis, people choose to continue as best they can, for as long as they can, or they choose to tackle it head-on. Many never reach the point of decision (which is tragic), and many choose not to (equally tragic). Some, perhaps a minority, decide, or find themselves with no option but to scrape off layer after painful layer before they can be themselves again.
April 2019
04-10 – Hey LinkedIn, bear with me on this...
Hey LinkedIn, bear with me on this…
…I’m shifting to a new host, re-installing WordPress and trying a bunch of new stuff. Including an automated LinkedIn connector. If this works, I’ll eat a very small hat. Made of rice paper.
Update: to be fair, that appears to have worked.
October 2019
10-23 – How I Write Reports
How I Write Reports
10-23 – How not to run a sub-four marathon
How not to run a sub-four marathon
Here are my seven “laws”. I’m no athlete but I got there in the end :)
1. Stick to the plan
2. You are what you eat
3. Lateral (muscle) thinking
4. Distance makes the heart…
5. Get the science
6. Enjoy it!
7. Do all the things
To be expanded. With dinosaurs.
10-23 – Travel Forward 2019: Let's do this
Travel Forward 2019: Let’s do this
You know that thing when you realise there’s under two weeks to go? I’m reviewing the final PDFs of the Travel Forward conference agenda right now and once again I’m staggered to think how it has gone from the aspirational, yet largely empty canvas of six months ago, to the packed, exciting and dynamic programme we now have.
As I’ve been briefing speakers, the message has been simple: senior technology decision makers from across the travel industry will be coming to days one and two of the conference… but what happens once they have gone home, slept, woken and arrived back in their workplaces on day three?
Our goal is not only to inspire but to educate, with practical steps that enable attendees to take their businesses forward (the clue’s in the name). I say “our” - I’ve been lucky enough to work for, and with, some really smart people to pull this programme together.
So, team, speakers and attendees, let’s do this - let’s make Travel Forward 2019 a conference to remember, where preconceptions are left at the door and where hopes and dreams are replaced by practical and actionable steps towards genuine, technology-powered opportunity.
10-23 – Updating site
Updating site
Nobody cares but me. Call this a stake in the ground.
10-24 – Dystopian Dreams
Dystopian Dreams
10-25 – Retrospective thoughts on Smart Shift
Retrospective thoughts on Smart Shift
Smart Shift, a book about the impact of technology on society, is now published online. Here are my thoughts on its multi-year gestation.
About seven years ago, I decided to write about everything I thought I’d learned, on the impact of technology on society as a whole. Having been down in the weeds of infrastructure (either as a job, or as an analyst), I wanted to express myself, to let some ideas free that had been buzzing in my head for some time. I know, I thought, why not write it as a book. That’ll be simple.
I already had some form, concerning the notion of getting into print. Biographies of a couple of popular bands, a technology-related book and various mini-publications gave me experience, some contacts and, I believed, an approach which was, one way or another, going to work.
Fast forward a few years and many lessons, and we have a book. While I took advice and had interest at beginning, middle and end, while I worked through the process of proposals, of creating a narrative that fitted both what people wanted to read and how they wanted to read it, of having reviews and honing the result, it was never published.
And, perhaps, it was never going to be, nor was it supposed to be, for reasons I didn’t fully understand. The first, so wonderfully exposed recently by screenwriter Christopher McQuarrie, is the lottery nature of many areas of the arts: writing, film and music.
The crucial point is that the lottery is symptom, not cause: a mathematically inevitable consequence of the imbalance between a gloriously rich seam of talent-infused material, and a set of corporate channels that have limited bandwidth, flexibility and indeed, creativity, all of which is navigating a distracting ocean of flotsam and jetsam. While the background is open to debate, the consequences are the same: just “doing the thing” right doesn’t inevitably lead to what the industry defines as success.
Much to unpick: a different thread, of course, could be that my own book is either flotsam or jetsam. A better line of thinking still, is to recognise a number of factors that are spawned from the above, not least, what is it all for?
Before answering this broader question (broadest of all questions?) it’s worth pointing out the nature of this particular beast. Let me put it this way: any treatise that starts with the notion that things are changing (e.g. anything about technology) is signing its own best-before warrant. The window of opportunity, and therefore one’s ability to deliver, is constrained by the time period one is covering, and the rate of change therein.
In other words, over the period of writing, I was always out of date. No sooner had I written one thing than the facts, the data points, the anecdotes started to wilt, to wither on the vine I had created for them. It isn’t by accident that I ended up delving into the history of tech, as I had already captured several zeitgeists only to see them die and desiccate before my eyes.
On the upside, I now have a book which could (still) be revised: each chapter is structured on the principle of starting with something old, and using that as a foundation to describe the new. Canny, eh?
Returning to “what is it for”, one point spawns from this: there’s a place for history in the now. I know, that’s not blindingly insightful, but the link between the two is often shunned in technological circles which prefer to major on revolutions rather than deeper-rooted truths.
Meanwhile, and speaking of the now, one needs to accept the singular consequence of both lottery culture and rapid change, simply put: if you’re a technologist, the chances of getting your message out there in book form are minuscule if you rely on a relatively slow-moving industry. Which very much begs the question, what is the point? If the answer is to be published, then you may be asking the wrong question but, as Christopher intimates, good luck to you.
At this point, I’d like to bring in another lesson from my experiences with singing in a band, or in particular, what happens when only a handful of people shows up. It happens, but it doesn’t have to be a disaster: what I have learned is, if one person in the room is enjoying themselves, they become the audience. It’s humbling, uplifting and incredibly freeing to give just one or two people a great time through music.
Put everything together and the most significant lesson from Smart Shift is this: my job, and my passion is to capture, then share an understanding. The job, then, is to balance reach with timing: better that a handful of people get something at the moment that it matters, than a thousand receive old news.
The bottom line is just do it, get it out there. Grow your audience by all means, build a list of people who want to hear what you have to say, and have something to give back in response. But start with the right ones, with the person at the back of the room that claps along. Not because of any narcissistic ideal but because, if the job is to communicate, an active audience of one is infinitely more powerful than not being heard at all.
10-31 – Afterthoughts
Afterthoughts
And so I’m dead.
By now I’ll know
If after-life has aught to show
Or whether it is nothing more
Than biochemical remains
That you’re consigning down below.
Yes, I’m dead, that’s doubtless true
But nothing else has gone from view
No other lives turned on their heads
Not here, at least.
Which matters more than who’s deceased.
In life I had one goal, to fill it…
Actually, two, if I could will it
Fulfil the many things I could
While being with the ones I loved.
Which all along, was my endeavour.
So never think I didn’t once
Appreciate the smallest moment
Spent alone or with another.
There’s so much that we could have covered
Had I not shuffled from this place…
…I’m dead. But:
The time we had, it was enough
Heady and inspiring stuff, but
Infinitesimally probable
Consequences of events
From Big Bang to birth’s miracle
Godly or godless, heaven-sent.
So could I feel, and if I had
The wherewithal to reveal
A single thought
It would be naught but gratitude
To this, to you
That have imbued
My life as-was.
And if you could, perhaps you ought
Remember that your time is short
Be bold, fear nothing but the thought
That life, a gift that comes by chance
Is only ever given once.
10-31 – We go to the end
We go to the end
We go to the end, together
We stare ahead, toward the void
We stand on the edge of forever
We feel the peace
Where once was noise
We open ourselves to the embrace
Of silence
Hand in hand, with every pace
A step into the unknown.
A moment taking us beyond
What once was, what could ever be
We go to the end, together
We cross the threshold, where beyond
No time, nor space can find us
For a moment, just a moment
We pause
Then we are gone
November 2019
11-01 – The Silences
The Silences
It’s the silences that get to me
The empty spaces in between
The otherwise continuous stream
Of noisy, gung-ho positivity
And unveiled anger, bordering on
The vitriolic
It’s the silences that show
The hollow truth behind what we know
To be no more than a protective facade
In this dialectic war, any words
Will serve as ammunition
But then we falter, attempted misdirection,
Distraction, ultimately unsatisfying descents
Into whataboutery lead only to a realisation
That the barrel is empty, the battle is lost…
…At least, this time, as we emerge
Once again forthrightly on the front foot
Confident of a position that can once again
Ignore, avoid the distraction of either facts
Or purpose.
11-07 – Snippet
Snippet
An hour before sunrise. The first, dull half-light of the new day gave silhouettes their dim outlines, pitch against grey slate. A light breeze, chill-edged by the cold cloudless night, was beginning to disperse the rising marsh-mists of morning, picking up a leaf here and there as it cut through the copse.
11-22 – Time is a mirror
Time is a mirror
Time is a mirror
Reflecting past and future
Testing, testing what we know
Days are a river
Running, fore and after
Moving, moving with the flow
Today’s another day
Tomorrow will have things to say
The future’s going to happen
Anyway
Nights will last forever
On the edge of never
Depths of silence reclaim what we owe
Today’s another day
Tomorrow will have things to say
The future’s going to happen
Anyway
December 2019
12-09 – Prologue
Prologue
His first, jolting scream of horror was followed by a second, then another, each overlapping the last as they came, layer on layer, jabbing like mosquitoes at a street lamp, flashing colours spinning faster and faster like a bright-painted fairground ride. His defence from each deadly wave too quickly became a feeble hands-on-head protection of his inner self, inner soul against the hooligan terror. Deep down he knew what was happening (hadn’t he said this was inevitable?), even now sadly weighing up his glib prediction against the true terror of this onslaught, even through the pain as it stabbed and pecked at the fragile, fraying cord still supporting his thoughts and mind and sanity and oh God the pain … and he felt his grip weaken, and he felt the cord fray and break, and he gave a last scream as he fell into the chaos, as his mind, as his consciousness slipped down to drown in the still, black depths beneath an insane sea.
2020
Posts from 2020.
February 2020
02-26 – Write about what you know, they said
Write about what you know, they said
Write about what you know, they said. What, I thought: my childhood? That continuum of nondescript non-events that somehow went to define me? Of solitary walks to school, collecting Smurf stickers from the garage, feeling no fear atop the long, high wall that went for ever, or the terrifying moments crossing the pub car park, head down to avoid the attention of parka-ed mods, proudly astride their scooters. What do I know? The form of pond life under a microscope, the way in which a fine brush can bear just so much enamel paint, the satisfaction of a finely folded origami shape, then the rush to a tap as it becomes a water bomb, the sibling games in the garden on a summer afternoon, the solitary walks in the woods coloured by the one time a grown-up, intentions unknown, took chase… yes, what do I know? Never boredom nor anger. Sometimes fear, confusion at becoming a target for the vindictive wrath of another. Often laughter, occasional pain, that fall from a tree, that trapped finger, more falls, each leaving a small scar like a badge. Always support, always structure, each day planned like the last, night time routines unshifting, but for the occasional, unsatisfying attempt to read under the bedsheets with a torch, the attempts to watch TV from a perch at the top of the stairs, only to be spotted and roundly put to rights. The long, long, long summers, warm days and bike rides and friends’ houses and marmite sandwiches. The occasional shopping trip to the big town, all concrete and sallow faces. The holidays, long walks to the beach carrying an encampment of wind breaks, towels, picnic baskets, the games of dice and cards, the boat trips and donkey sanctuaries. The books, the music, the jigsaws at Christmas, the relatives bringing cream cakes and motor racing stickers. The love and comfort, the fear and uncertainty, the continuity of it all, the sad, hindsight realisation that it could not be idyllic forever, as self-awareness and hormones left only confusion and a constant need to conform, even as everything only became harder. Write about what you know, they said… but there are no stories, only memories, of another, more peaceful time.
May 2020
05-05 – A Recipe for Wholemeal Sourdough Bread. With Notes.
A Recipe for Wholemeal Sourdough Bread. With Notes.
Loosely based on BBC Good Food.
Ingredients
1 kilo of wholemeal flour for two loaves, plus enough (500 grams-ish) for the starter and (cough) levain
20 grams of salt
I use flour straight from a mill — I sieve it to remove most of the bran and give the bread half a chance of rising.
Introduction
Sourdough bread is made without bought yeast: it uses a starter, which is a sloppy, semi-fermented paste. It’s dead easy to make a starter: it has to be, as making bread is one of the first things we ever did. Starters are remarkably resilient — check out the US pioneers that would carry a bit of starter in a pouch, then use it to kick off their bread making when they got the opportunity. So, above all, don’t be scared of either the idea, or the process. All these pictures of beautiful bread are largely bread doing its thing, you just need to follow some standard principles and it’ll do the rest. The one thing it does take is time, which, let’s face it, we all have quite a lot of at the moment. And perhaps we always should. But anyway.
In terms of practicalities, a starter takes under a week to kick off; then sourdough bread is best considered over a four-day period.
* Day 0 is when you take the starter out of the fridge and re-activate it.
* Day 1 is when you make a levain, ready for bread making.
* Day 2 is when you’ll do the bread making itself.
* Day 3 is when you cook the bread
If this sounds like a palaver, only Day 2 requires repeated effort, and even that is just dipping in and out — a bit of effort in the morning then four lots of 5 minutes, every half an hour, in the afternoon. So, it’s more a case of building it into a routine than any hard effort. Ideally, a nine-to-five worker would have Day 2 on a Saturday, but that’s also when you’d be wanting to get that delicious, freshly baked bread out of the oven… so perhaps reserve a bit of time, between emails, say, on a Friday morning — which puts Day 0 as Wednesday evening. How hard can it be?
First things first: you’ll need that there starter before you can do anything. Again, given the above timescales, you should kick that off at the weekend, then you will be ready to go on the following Wednesday. Good luck!
Making the Starter
The starter makes itself, with a bit of help. Take a reasonably large vessel - a 500ml yoghurt pot with a lid, say - and put in it 50 grams of flour and 50 mls of lukewarm water. Exact proportions aren’t important but you don’t need too much of anything — stir it and you should end up with a loose paste. Put the lid on loosely, then leave it overnight on the kitchen surface. Do the same every day for 4-5 days, and you should end up with something frothing and bubbling of its own accord. Some notes:
1. If you want to be totally hipster, you can use a Kilner jar or similar but this will make no difference to the starter.
2. You can also give it a name, but come on, be serious. Unless you have kids in which case, totally. Or you just want to.
3. But you can call it Mother. No, I don’t know why either.
4. You can use any kind of plain flour - wholemeal, white, spelt, doesn’t matter. You’re just creating something for yeast to eat as it develops.
5. If it develops a layer of water, you can pour this off if you feel so inclined. You can also throw away half the starter from time to time, as you never need that much.
6. A test, reputedly, is that a teaspoon of starter should float in warm water. I don’t think this works for wholemeal, and if it’s not floating but still frothing, don’t fret.
In any case, a good, frothy starter is clearly doing its thing. Once you have this, you’re ready to make some bread. See below and put what you don’t need in the fridge, unless you are planning on making bread every day.
Making the Levain
Levain is a French word (yeast is ‘levure’), which is of no relevance whatsoever, and nor is the levain itself: all that’s really happening at this stage is that you’re getting the proportions of starter right for a couple of loaves. Levain is the sort of word people use to make bread making sound more mysterious, and therefore less accessible, than it actually is. It’s words like levain that cause snobbery and pretentiousness, and leave normal people, who would otherwise be perfectly capable of producing a loaf, feeling inadequate and unsure of themselves. Levain is a touchstone for all that is wrong with cooking, taking away any concept of initiative or self-belief and leaving us all to be over-reliant on recipes as if we can’t think for ourselves, but have to follow someone else’s steps as though they contain some magic formula that would otherwise be unattainable to mere mortals. It is the fault of levain, yes, levain that we have celebrity chefs, whole shelves full of beautifully illustrated books, and competitive cooking series in which it isn’t enough to make something delicious and nutritious, but it has to be a feat of culinary ingenuity. It is because of levain that we have Paul Hollywood.
Anyway, stick a healthy tablespoon of starter into a bowl, add 100 grams of flour and 100 grams of lukewarm water, and leave it on the kitchen surface, loosely lidded, overnight. That’s it.
Making the bread - Day 2 Morning
The morning of Day 2, you kick things off by mixing the levain into 600 mls of lukewarm water. By now you may be wondering about all this ‘lukewarm water’ business, what’s that about? Essentially, yeast operates best at about 25 degrees Celsius — that’s when it most likes to bud, to eat sugar and turn it into carbon dioxide. As room temperature is (say) 16-20 degrees, you can give the yeast a helping hand with water that is 30 degrees or so; that way, when it makes contact with the flour, you should arrive somewhere around that magic 25. Doing this means the resulting dough is already at the right temperature from within: you don’t need to worry about airing cupboards, warm porches and the like.
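If you like to see the sums written down (purely a sketch of the rule of thumb above, nothing more scientific), the arithmetic goes something like this, assuming the dough ends up roughly midway between the flour at room temperature and the water you pour in:
```python
# Rule of thumb from the paragraph above: the dough lands about midway
# between the flour (at room temperature) and the water you add, so work
# backwards from the 25-degree sweet spot. Illustrative only.
def water_temperature(target_c: float = 25.0, room_c: float = 18.0) -> float:
    return 2 * target_c - room_c

for room in (16, 18, 20):
    print(f"Room at {room}C: use water at about {water_temperature(room_c=room):.0f}C")
```
In other words, a kitchen at 20 degrees points to water at about 30 degrees, exactly as above.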
You should have a thin slurry of yeasty goodness, into which you can add a kilogram of flour. Note that the proportions that count are of starter to flour: the amount of water is relevant more for how workable the resulting mix becomes. So don’t be afraid to add a bit more if the dough feels too solid: too much water and you can end up with a very sticky and unmanageable dough, but it shouldn’t affect the ability of the bread to rise. (As a digression, half a kilo of flour will make a ‘standard’ loaf. Normal bread making requires 10 grams of (proper, not low) salt and 10 of yeast per loaf, so this is no different).
Mix the flour into the dough. You can use an implement for this, but doing so misses a trick as your fingers are the best judge of how well something is mixing. Top tip: as you start, use one hand to hold the bowl, and the other to mix. You can bring your second hand in once the process is underway. Second top tip: be sure to have already rolled your sleeves up before starting: if you don’t, it will already be too late, as your hands will be coated. You should end up with a rough dough: get as much of the dough off your fingers and back into the dough before you finish, then leave it on the surface for between 1 and 4 hours, in a clear plastic bag. This holds in the moisture and stops the dough from drying out.
Note that the timing is not massively relevant: you’re wanting to let the mixture activate itself over the day, but equally, you have things to be getting on with. I suggest you get this stage done between 9-10, then you have something to work on after lunch. Equally, you could do it first thing, and time lunch around the next stages. You get the picture.
Day 2 Afternoon
Throw the 20 grams of salt over the mix, and add a splash of water, then you’re ready to knead this emerging masterpiece. You’re using your hands, again, and you want to be able to feel the material stretching through your fingers. You should know when it’s done, as it’ll feel consistently stretchy. If it feels a bit tight, add a splash more water but no more.
Leave for 20 minutes. Then take the dough out of the bowl and flip it onto its back, straight onto the kitchen surface - its underside should look a bit pocked. Pull the sides over each other, then flip it back and stretch the sides down and underneath, to create a skin. Then pop it back in the bowl. Put some flour on your hands to do this if it’s sticking.
Leave for 20 minutes, then do it again. And again after 20 minutes, and again. Once more, timing does not need to be super-accurate. Then leave it in the bowl, bagged, for two or three hours. It should rise, not by much — more important is that bubbles appear under the surface.
At this stage - late afternoon or evening - you should have your dough, proved and ready to go. Divide it into two (super-top tip, which took me years to work out, is that accurate results are best done with scales. Well, duh), then follow the above process of putting each piece onto its back, then folding into the centre before flipping and stretching the skin. This time, stretch tighter than previously, until you feel you have a Jack-the-Giant-killer tight tummy of a skin.
You can now put in a floured basket, if you have one. It isn’t essential. You can just use the bowl you were using — I have two thin laminated plastic salad bowls (barbecue style) that do the job. Sprinkle a bit of flour in first, but don’t worry if you forget. Do the same with the second portion of dough. (Or, if you like, divide it into four and make pizza bases. Follow the same fold and stretch process for each, leave to rest for half an hour or so, then attempt to juggle around your head before giving up and rolling with a bit of flour. This can, but doesn’t need to be, semolina flour. Make sure they’re as thin as possible whilst still supporting the hefty amount of cheese and tomato you plan to load them with.)
Back at the dough, put the bowl(s) in the fridge overnight, back in the plastic bag. I have a large transparent bag I have been using for this purpose, for years.
Cooking the bread
The next morning, haul yourself out of bed, make a cup of tea, take the bowls out of the fridge and put the oven straight on at ‘hot’ - 240 Celsius for a standard oven, a bit less for a fan oven, or gas mark 9. You want it hot to get that initial quick rise as the trapped air expands. Ideally, put a casserole dish in to heat up - we have an old ironware Le Creuset type thing, but a Pyrex dish is probably just as good. Use a higher shelf, and leave a lower, loaf-sized shelf free.
When this is up to temperature, we get to the fiddliest bit. You want to keep the oven hot, at the same time as getting the expanded dough, cross-cut, into the dish. No easy answers for this, but my order is as follows.
1. Get the dough bowl ready and scrape the dough away from the sides using a spatula, so it is loose and ready.
2. Flour your hands a bit.
3. Remove the casserole dish from the oven and put it close to your working surface.
4. Turn the dough onto the surface, flip it upright and shape it very carefully. Slice it across the top in an X.
5. Take the lid off the casserole.
6. Pick up the dough with two cupped hands, lift it across and drop it into the casserole.
7. Swear profusely and panic as you realise you have dropped it off centre. Jiggle the casserole and sigh with relief.
8. Put the lid back on and put the casserole back in the oven, on the higher shelf.
Set the timer for 30 minutes and enjoy that cup of tea. Once time is up, check the loaf - hopefully it will have done its thing, rising and making a crown out of the cross-cut. Remove it from the casserole - hopefully a jiggle will release it, or you may need to use a spatula - and put it on a lower shelf to brown for a further 10-20 minutes. If you have gone down the two-loaf route, put the empty casserole back in the oven, while you get ready for stages 1-8 once more.
Meanwhile, back at the first loaf. To be cooked, bread should reach a temperature of 94 degrees - we have found that cooked wholemeal sourdough needs another 5 minutes or so even when it has reached this temperature, so it is not sticky in the middle. You can use a thermometer for this (okay, I do have one bit of fancy-pants gadgetry) or keep your fingers crossed — if the latter, I would err on the side of more time, all you will gain is a bit more crust.
When you see fit, remove the loaf from the oven and put on a cooling tray. It should be crusty yet still with some give. Follow the same steps with the second loaf. Leave to cool as long as you can stand before cutting a deep slice of crust and slathering it with butter. You deserve it. Oh, and don’t forget to take a picture and upload it to all the social media channels, as there is nothing people love more than seeing pictures of freshly made bread.
2021
Posts from 2021.
June 2021
06-17 – The Records
The Records
There’s a story I need to tell. It’s long, and possibly boring, so bear with me.
“I told Mark and Pete about the records,” I said to Liz.
“Did they laugh?” she asked.
“Yes they did, thankfully,” I said. Liz smiled wryly, and we both went on our way.
I had been in Stroud Brewery when I told them. Dare I, I thought to myself, but heck, what’s to lose but my dignity. Again. Here goes, I thought, as I went in.
“Can I tell you about the records?” I asked.
“Sure,” said Mark and Pete, not knowing what else to say: we were sitting in a pub, what else was there to do but share idle stories. So I began.
“Well, there I was, at the Canal Trust bookshop,” I started. I didn’t tell them why I was there - it was to enquire about whether the bookshop wanted one of those transport cases for albums. Liz was with me, as was Stan, the dog: both were waiting on the path outside, within earshot. Two volunteers, both men, were sitting outside, wearing Canal Trust shirts and by this token good candidates for an enquiry.
“So,” I continued, “I said to these guys, can I tell you about the records?” What I didn’t say, to Mark and Pete at least, was that I had explained it as the funniest moment of my life, a turn of phrase that would come back to bite me. “Sure,” said the two Canal Trust people. At least I think they did, or perhaps it was just a nod, but it didn’t matter, I was going to tell them anyway.
“Okay,” I said, here’s what happened. There I was, walking the dog the other day, and I saw one of your colleagues. We were chatting and I asked him what he did.
“Ah, I deal with the records,” he said.
“Wow,” I said, “that must be really interesting, dealing with all those historical documents.” I was distinctly impressed, thinking to myself that he must be an absolute mine of information.
“Nah,” he said, opening a door. “These records.” He pointed at row upon row of albums of music, LPs stretching from wall to wall.
“Oh,” I said, laughing. I don’t remember him having found it particularly funny, but it tickled me.
So, that’s what I told the two Canal Trust volunteers: I was already laughing as I said it. I got to the end and said, “Nah, these records!” and waited for their response.
Nothing. Not a titter, not a smile. “You see, the records,” I said.
“I don’t get it,” said one.
“You know, albums. LPs. Like, you know, Johnny Cash.”
“Oh,” he said, nonplussed.
“I get it, I think,” said his colleague.
“The records,” I said, desperately. I tried again with an example, before saying, “I will never tell that story again.”
Nothing. Just a slightly perturbed face, as though a coin had appeared on the table for no reason.
“You might want to ask inside, you know, about the box,” said the other volunteer.
Recognising this as an opportunity to exit, I took my leave and did precisely that.
As I re-emerged onto the path, not able to make eye contact with the two, I saw Liz looking at me, her face sparkling with barely suppressed mirth. As we moved away she collapsed into helpless laughter.
“You see, the records,” I said, without any hope left.
“Stop,” she said, bursting into laughter once again.
So that’s what I told Mark and Pete. They laughed, their collective response no doubt helped by the empty pint glasses sitting in front of them. At that moment I didn’t care, I felt nothing but gratitude.
Which is what I told Liz. And now I am telling you. If you have read this far, I can only thank you.
You see, the records.
Quocirca
Posts published in Quocirca.
2002
Posts from 2002.
August 2002
08-22 – Whats The Fuss Storage Networking
Whats The Fuss Storage Networking
What’s the fuss… about Storage Networking?
A few years ago, some bright spark noticed that storage devices – disks, tapes and so on – were tied too closely to the computers in which they were installed. What if, he (or maybe she) thought, what if storage devices were given a network of their own, or even attached to the LAN directly? All storage devices could be shared between all the computers simultaneously, new storage could be added or replaced at any time, and information could be managed, moved, backed up and secured from one central place. Nice dream – this is the vision for storage networking, in all its forms. Networked Storage, or the networking of storage, involves creating an environment in which storage hardware (in its simplest form, hard disks, tape drives and so on) can be directly attached to a network of some form. There are two ways in which this can be done:
Create a network (of protocols, devices etc.) exclusively designed for, and hence optimised for, storage. Storage devices of all forms can be arranged in a form that best suits storage needs, for example data can be mirrored to a fail-over site. Also many data transfers (say, backups) can take place without taking bandwidth away from the regular network, or processor time cycles from servers. This model gives us Storage Area Networking, or the SAN.
Create devices which use regular network protocols (usually Ethernet) but which are built from the ground up to store and retrieve data. Such special-purpose “appliances” can be optimised for performance criteria such as data throughput, reliability and information integrity. At the same time, they can be built more cost-effectively than general-purpose devices (regular computers to you and me). This model gives Network-Attached Storage, or NAS.
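To make the distinction a little more concrete, here is a rough sketch (Python, purely illustrative, with made-up mount points and device names): a SAN presents raw block devices that the host treats like locally attached disks, while a NAS appliance presents files over the ordinary network, so applications read and write them as they would any other file.
```python
# Illustrative only: "/dev/san_volume" and "/mnt/nas_share" are hypothetical.

# Block-level access (SAN): the host sees a raw device, reads and writes
# fixed-size blocks, and normally puts a filesystem on top before real use.
BLOCK_SIZE = 512
with open("/dev/san_volume", "rb") as device:
    first_block = device.read(BLOCK_SIZE)
    print(f"Read {len(first_block)} bytes from the SAN volume")

# File-level access (NAS): the appliance exposes a filesystem over the LAN
# (NFS, SMB and the like), so ordinary file operations are all that's needed.
with open("/mnt/nas_share/report.txt", "w") as shared_file:
    shared_file.write("stored on a NAS appliance\n")
```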
The similarities and differences, costs and benefits, strengths and weaknesses of SAN and NAS have been debated beyond the call of duty within and outside the storage industry. Let’s face it however, most people couldn’t give an orangutan’s elbow for the differences between SAN and NAS. In the future, it is fully expected that the two models will merge; meanwhile, we are blessed with both.
The business benefits of storage networking may be summarised in one word – value. Correctly specified and implemented, a pure storage environment is able to deliver a better, faster storage service at a lower cost. With NAS, there is less of an impact on the existing network infrastructure, physically at least (however, it is recommended to take account of the additional network load of adding new devices). SAN involves a greater up-front investment – there is a new network to be implemented, after all – but boasts greater advantages of security, performance and resilience. As with any other infrastructure deployment, storage implementations are as much about understanding (and therefore meeting) the business information management needs of the organisation, as having a grasp of available technologies and how they should be deployed and maintained. Don’t believe anyone who says, “you just plug it in and let it run.” Keep in mind the following:
The most fundamental storage management application is (still) backup and restore, with the most important being restore. Addition of storage should equate to the addition of managed, fault and disaster-tolerant facilities, not just plugging in of disks.
Interoperability remains the bane of the SAN, as it has been for the past three years or more. Relatively new standards, such as those from the SNIA, are being implemented to ensure both hardware and software interoperability, but it is still a brave IT manager who invests in SAN storage hardware from different manufacturers.
Scalability of storage should be a problem solved, but it has to be implemented. Technologies such as storage virtualisation are being discussed and implemented as ways of managing and using storage assets; however the definition of virtualisation varies from manufacturer to manufacturer, from virtual physical ports on hardware, through virtual placement in a silo to all storage assets being viewed as a single, infinitely expansible pool.
Where’s storage networking going? Virtualisation is a no-brainer, at least it will be once the manufacturers have agreed what it is. Certainly, from the customer perspective it would solve a lot of ills, such as the old chestnut of how to manage the disparate, distributed, duplicative, fragmented storage environments that exist in many organisations. Virtualisation cannot exist without suitable management tools, which may exist on a manufacturer-by-manufacturer basis, but are still in their infancy (consider CA’s BrightStor portal and HP Openview Storage Node Manager) when it comes to really getting a handle on an organisation’s storage assets. The next evolutionary step, following “virtual” and “managed”, is the provision of storage as a managed service: this has many dependencies ranging from the physical – available bandwidth and implementation of caching technologies – to the emotional – would you really trust a third party to manage your data?
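As a purely conceptual sketch (Python, with invented array names and sizes, nothing vendor-specific), the promise of virtualisation is that several physical devices appear to applications as one expandable pool, with the management layer deciding where the data actually lives:
```python
class StoragePool:
    """Toy model of a virtualised pool: many devices, one logical capacity."""

    def __init__(self):
        self.devices = {}   # device name -> capacity in GB
        self.allocated = 0  # GB handed out to applications so far

    def add_device(self, name: str, capacity_gb: int) -> None:
        # New hardware simply grows the pool; applications never see the join.
        self.devices[name] = capacity_gb

    def capacity(self) -> int:
        return sum(self.devices.values())

    def allocate(self, gb: int) -> bool:
        # The management layer, not the application, decides whether the
        # request can be satisfied and where the bytes end up.
        if self.allocated + gb > self.capacity():
            return False
        self.allocated += gb
        return True

pool = StoragePool()
pool.add_device("array_a", 500)   # hypothetical arrays, perhaps from different vendors
pool.add_device("array_b", 750)
print(pool.allocate(900), pool.capacity() - pool.allocated)  # True 350
```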
Whatever storage insiders might tell you, there is still a little way to go. Technologies are improving all the time, new standards such as iSCSI are being pioneered, and each new step both approaches and yet delays the vision of coherently managed, fully multivendor, networked storage delivered as a service. Mustn’t grumble – technology is only part of the problem and the savvy IT manager would do well to use the time available to understand his storage needs and how well they are being met. So – is your storage house in order?
September 2002
09-12 – Whats The Fuss Virtualised Infrastructure
Whats The Fuss Virtualised Infrastructure
What’s The Fuss – The Virtualised Infrastructure
Jon Collins, 12/9/02
Given the current downturn in the technology market (and the fortunes of many within it), any comparisons between IT and nuclear fusion may appear overdone, not to mention in bad taste. It is true, however, that the major steps taken by the industry over the years (client/server computing, the World Wide Web and mobile technologies to name but three) have been made as a result of convergence, the coming together of a number of technologies. Whatever the naysayers might argue (though they do have the soapbox), the results have been big bangs rather than idle whimpers. E-commerce may have been more hype than substance, but who would give up their Web browsers or email access? The mobile market may have been overhyped, but who would hand back their cellphones or palmtops? Behind the hype lies real substance.
Spend might have dried up for the moment, but progress continues as previously disparate areas of IT overlap and merge. Let us consider some realities. Despite continuing fears of brown-outs, the bandwidth offered by the Internet continues to grow while its costs continue to decrease. However, the complexity of IT infrastructures continues on an upward curve. The increased bandwidth and reduced cost of Internet access makes outsourcing of applications or infrastructure increasingly attractive.
Finally – and here’s the curve ball – we have Web Services.
Maybe we’d better pause for a moment to remember what Web Services are all about. Consider the following: a software application can be thought of as a set of software elements, each dealing with one part of the functionality. These software elements can communicate using defined protocols and mechanisms across a network; they do not need to reside on the same machine. Leap of faith time: they can be situated anywhere in the world and they can be managed by any third party. Second leap of faith: anything that can be provided as a service can also be provided as a Web Service. After all, the tag refers only to the fact that a standard communications protocol has been adopted. Does this mean that the hypemeisters Microsoft, Oracle, Sun et al. are right after all? Yes, but not quite in the way that they would expect. Web Services aren’t the answer, but taken together with enough bandwidth and managed outsourcing of services from trusted third parties, they are the enabler to the virtualised infrastructure. And that really is the answer.
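By way of a rough illustration (Python, with a hypothetical endpoint and operation, not any particular vendor’s interface), “a standard communications protocol” in practice means something like this: one software element asks another for a price by POSTing a SOAP-style XML envelope over HTTP, and neither knows nor cares where the other is running.
```python
import urllib.request

# Hypothetical pricing service: the point is only that the exchange is plain
# XML over HTTP, so the two elements can sit anywhere on the network.
ENDPOINT = "http://services.example.com/pricing"

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/pricing">
      <ItemCode>ABC-123</ItemCode>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the XML reply from the remote element
```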
The CIO of any organisation has, essentially, two problems to solve. The first is to have applications that meet the needs of both internal staff and customers. The second is to have sufficient storage resource to cope with the amount of data that needs to be managed day by day. Both problems are dynamic: they do not stand still long enough to be solved, and the solutions need to take into account a plethora of existing systems and storage devices. They also need to consider the fact that the new-and-improved solutions of today will be the old-and-inferior legacy applications and devices of tomorrow. What if… what if, to create an application, the CIO can select application piece parts off the shelf, and assemble them to make an application to fit his needs? That is the vision of Web services. Each service can be defined according to the business processes it needs to support, then used as the basis of a build, re-use, buy or rent decision. Why build something when it can be bought or rented more cheaply? But similarly, why buy when functionality already exists in-house? Web services offer the choice, and this is becoming more substance than hype as Web Services are becoming the de facto standard for inter- and intra-application communications. This is not hype: it is happening today in many organisations. There is nary a storage provider that is not marketing utility pricing models for its storage, and there are few that have not signed up to the standardisation efforts to enable further integration and virtual access.
Second, the storage market is rapidly (really) evolving to take into account CIOs’ needs to manage their storage as a single, virtual pool, independent of hardware constraints, either as a single addressable service within an organisation or as a shared hosted service. As discussed before, issues of vendor standardisation remain, but they are recognised and are being dealt with (expect progress over the next 18 months). It is not enough to have such a pool of storage without facilities to manage, allocate and secure it: these facilities are driven by the applications. And what better to use as a communications mechanism with the applications than Web Services? Again, this is not pie in the sky: vendors have already cottoned on to this, and some have already implemented Web Services interfaces into their storage management offerings.
There is a final piece in the virtualised infrastructure puzzle. Where, and how exactly, does the application run? The slightly surrealist answer is yes, it does. As the application has been broken up and distributed, so is its execution as a complete system. Outsourced, rented application elements will be processed (with suitable service guarantees) by trusted third parties, in tandem with in-house services running on in-house or hosted hardware. Something needs to be in control, to ensure that the piece parts are interoperating correctly and meeting their respective service levels. It does not take a rocket scientist to work out that, once again, standardisation on Web Service-based protocols yields an ideal mechanism.
Here is convergence in all its glory, allowing CIOs to focus on the problems they have to solve, rather than the knock-on effects of trying to manage disparate, uncoordinated microcosms of technology. The key benefit is return on investment, as you only implement what you need to add value to the business. Need a new application? Write the shopping list, plug together and play. Need more storage? Click to buy as much (or little) as your business needs. Maybe this is starting to enter the realms of fantasy, but only for the moment. Driven by customers in a hostile market, vendors are working together as never before to make this vision a reality. Whether they know it or not.
October 2002
10-31 – Whats The Fuss Broadband
Whats The Fuss Broadband
What’s the fuss - Broadband Communications
Broadband is a simple enough term to understand, at least for the person who is going to use it. At Quocirca, we have seen several definitions of new-and-improved broadband and its poor cousin, old-and-inferior narrowband:
Broadband corresponds to multiple voice channels in a telecommunications circuit, whereas narrowband corresponds to only one.
Broadband corresponds to a data rate of over 1Mbps.
Broadband constitutes sufficient bandwidth to permit the transmission of broadband services, i.e. streamed multimedia, videoconferencing and the like.
The third definition may appear a little vacuous, but it is the one we favour because it concentrates on the end rather than the means. It allows more technological flexibility, for example for data compression or caching of streamed media rather than “pure” bandwidth, and it also takes into account the use of the term in spheres such as the 3G “broadband” protocol UMTS, which has an initial maximum of 384kbps. Broadband is as much a state of mind as a technology, defined in terms of what it enables rather than what it is – the transmission of sufficient quantities of information to enable such applications as multimedia streaming (think using a computer as an interactive TV) or video telephones.
Broadband communications have existed for years, at least for telecommunications providers (telcos) and the large corporations that could afford the extortionate costs. What has changed over the past couple of years is the development of a range of protocols known as Digital Subscriber Loop (DSL). The xDSL range (“x” stands for “whatever”) enables transmission of very high data rates across the so-called “last mile” – the pairs of wires that run from local telephone exchanges to homes and offices. Given the fact that most data traffic will be to or from the Internet, Quocirca proposes another definition of broadband:
Broadband constitutes affordable, accessible bandwidth for the transmission of Internet-based broadband services without needing major modifications to existing infrastructure.
xDSL is a range of protocols, each of which is more applicable to certain needs. Most smaller organisations and home users (those whom BT has deigned to connect, that is) are finding Asymmetric DSL (ADSL) the most appropriate. ADSL is asymmetric in that the “up” channel is smaller than the 512Kbps “down” channel, a model which fits the Internet usage pattern in which more information is generally received than sent. A further strength of ADSL is that it is always “on” – there is no need to dial up to the Internet. Symmetric DSL (up equals down) is more appropriate for businesses, for example to enable inter-site or inter-company communications.
In addition to the low-cost, high-speed, always-on access to the Internet that xDSL provides, enabling businesses to do their Web-based dealings more cheaply and efficiently, broadband access opens the door to a number of new ways to use the Internet for the business. For example:
If it has the right skills in-house, it may be more appropriate for the business to host its own information rather than relying on third parties such as Internet Service Providers.
Conversely, the increased bandwidth opens the door for the business to make better use of externally hosted services such as those provided by Application Service Providers. These companies have had a bit of a bad press, largely due to their dubious grasp of their middle name (“service”) but also because of the lack of available bandwidth to take advantage of the service. It is all very well having 100 megabytes of storage space, for example, but this is of minimal comfort if it is only accessible over a modem link. There are companies, such as eProject.com and SalesForce.com, that have proved the workability of the ASP model, but they do require sufficient bandwidth to make their services workable.
There are plenty of things wrong with broadband, not least in its UK availability. Our definition is from the point of view of the end-user and not the telco, which must roll out ADSL equipment to all its local exchanges. British Telecom has a hard-earned reputation for heel-dragging and for playing the system to prevent other providers from installing their own facilities. The end result remains “no service available” at present, particularly outside metropolitan areas. Even when it’s up, ADSL has a reputation for non-optimal performance. The “down” bandwidth is a maximum that is then reduced as more users access the facilities of the local exchange. What’s more, a fully contended consumer ADSL line (50:1) gives a lower possible throughput than a clean 56K modem. Read all about it – it’s in the small print. Once the other issues are ironed out, it is likely that security will become the major issue with always-on broadband: ADSL connections are always-on in two directions – if you can get out, others can get in. There is a real risk that always-connected computers will be attacked, hacked or otherwise misused (for example, as a base to send Spam e-mail).
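The contention arithmetic is easy enough to check; here is a quick back-of-the-envelope calculation using the figures quoted above (the modem rate being its nominal maximum, so this is the worst case for ADSL against the best case for dial-up).

```python
# Worst-case throughput on a fully contended consumer ADSL line.
adsl_down_kbps = 512      # nominal "down" bandwidth
contention_ratio = 50     # consumer ADSL typically sold at 50:1
modem_kbps = 56           # nominal maximum of a "clean" 56K modem

worst_case_kbps = adsl_down_kbps / contention_ratio
print(f"Fully contended ADSL: {worst_case_kbps:.1f} kbps")           # ~10.2 kbps
print(f"Clean 56K modem:      {modem_kbps} kbps")
print("ADSL slower than the modem?", worst_case_kbps < modem_kbps)   # True
```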
The first “next step” for broadband is the eventual completion of its roll-out – this looks likely to take a good few years, though the technology and the will (in most quarters) are available now. Broadband will be remembered not for what it is – no more than a high bandwidth socket on the wall to most – but for what it enables.
November 2002
11-28 – Whats The Fuss Content Management
Whats The Fuss Content Management
What’s The Fuss – Content Management
Jon Collins, 28/11/02
Content Management owes its parentage to two converging technology areas, namely document management and the Web. The latter needs no introduction; as for document management, it is fair to say that it is a well-mined seam for those that know it, and a minefield for those that don’t. To document management we owe one principle, namely:
Everything is a document
This principle is central to understanding Content Management. Put it this way: every form of data, from an email or a spreadsheet to an audio file or a banking transaction, can be considered as a document. This principle becomes even more important when we take into account something else inherited from Document Management, namely the eXtensible Markup Language. XML is the ideal packaging mechanism for all these so-called “documents”.
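As a toy illustration of the “everything is a document” idea, here is a minimal sketch using Python’s standard library to wrap an arbitrary record – a banking transaction, say – in an XML envelope. The element names are invented for the example and imply no particular schema.

```python
import xml.etree.ElementTree as ET

def as_document(doc_type: str, fields: dict) -> str:
    """Package any record as an XML 'document' (element names illustrative)."""
    doc = ET.Element("document", attrib={"type": doc_type})
    for name, value in fields.items():
        ET.SubElement(doc, name).text = str(value)
    return ET.tostring(doc, encoding="unicode")

print(as_document("banking-transaction",
                  {"account": "12345678", "amount": "150.00", "currency": "GBP"}))
# <document type="banking-transaction"><account>12345678</account>...
```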
Enough about technology: for now let’s consider what content management is for. Over the past five years, many millions of Web sites have been evolving from simple, text-and-graphics-based informational sites (“brochureware”) to complex resources linking many forms of information and enabling a far richer “end-user experience” (if you will). Against this backdrop, the “content” - that is, the text, graphics, audio, video and other data - needs to be managed. It needs to be created, verified, delivered, maintained and bumped off when it has reached its sell-by date. As the Internet evolves, these tasks become ever harder. Not only is there ever more content to manage, but the evolution towards richness of content has also increased the scale of the challenge. It is far easier to manage a few pages of text than a multi-layered, multimedia “experience”. Even the simplest of sites have a tendency towards complexity, over time. As the solution to these ills, Content Management enables content to be stored, managed and maintained appropriately. It also permits the process of content development to be controlled. The litmus test for whether you need a Content Management application is simple - can you recreate your web site as it was on an exact day six months ago? If you want to know why you would need such a facility, just wait six months and try to find that article that was so interesting at the time. It may be, in the future, that such a capability becomes a legal requirement for any commercial organisation.
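That litmus test is essentially a question of versioning. Here is a minimal sketch – my illustration, not a description of any product – of what point-in-time retrieval of content might look like.

```python
from bisect import bisect_right
from datetime import date
from typing import Optional

class ContentStore:
    """Keep every dated version of each content item, so the site can be
    reconstructed as it was on any given day (illustrative sketch only)."""

    def __init__(self):
        self._versions = {}  # item id -> sorted list of (published_date, content)

    def publish(self, item_id: str, published: date, content: str) -> None:
        self._versions.setdefault(item_id, []).append((published, content))
        self._versions[item_id].sort()

    def as_of(self, item_id: str, when: date) -> Optional[str]:
        versions = self._versions.get(item_id, [])
        dates = [d for d, _ in versions]
        i = bisect_right(dates, when)
        return versions[i - 1][1] if i else None

store = ContentStore()
store.publish("home", date(2002, 5, 1), "Spring offers")
store.publish("home", date(2002, 9, 1), "Autumn offers")
print(store.as_of("home", date(2002, 6, 15)))  # "Spring offers"
```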
Content Management can be thought of as a springboard. It is not entirely necessary to manage content in a structured fashion, or to use tools to automate it. However, Content Management facilities enable organisations to do more with less, to manage more information and deliver it more reliably than otherwise. Enterprise-scale content management applications can be expensive, hence commitment is required from the top not only to cover the costs of the products, but also to implement the necessary processes to enable their benefits to be realised. Content Management is as much about process as product: not only is there the content development and delivery process to think about, but also the other processes (aka workflows) of the organisation will be impacted, in particular the customer-facing processes such as marketing, sales and support.
The deployment of a Content Management application should be considered as an integrated part of a company’s strategy for using the Web. As Content Management is Web-based it needs to work with application servers, e-commerce engines and other paraphernalia of the Web. Linkage between content management and CRM is inevitable, in a drive to make that “experience” unique for each and every user - and, of course, to log every key-click they might make. Content Management is often linked to portals - which are no more than windows onto content from a user perspective, or content farms, from an application perspective.
Content management is a relatively mature, and hence stable, market and suffers less from teething problems than a number of other application areas. Issues with content management tend to come more from the way it is implemented - implementing the wrong process, or failing to integrate with external systems and workflows, can cause more problems than it solves. A further issue concerns the distribution of content. Content distribution networks, either implemented internally or outsourced to companies such as Akamai, relieve pressure on web sites by offering alternative locations for the content. This can solve difficulties of accessing content from specific geographies, not to mention easing the load on a company’s “core” web site.
What of the future? Content Management may have the basic model correct, but it must adapt to fit with new technologies and new business models as they come on stream. It is already being affected by the arrival of broadband technologies, as these enable new forms of content, such as streamed audio and video, to be delivered. Web services and Application Service Provision will also impact on content management, not so much in its principles but in the way it is implemented. It is likely that Content Management will remain forever one of those technologies that hides under the bonnet, even integrated into the operating systems of the future. Equally likely is its positioning as a technology that no company can afford to do without.
2003
Posts from 2003.
February 2003
02-10 – Through The Fog Pki V2
Through The Fog Pki V2
Through the Fog – Public Key Infrastructure
Jon Collins, 4/2/03
Security is a strange phenomenon in Information Technology. Like a Will O’ The Wisp, it’s elusive – everyone would like to see one but few can say they’ve actually done so. And so we are faced with the promise and the reality of Public Key Infrastructures (PKIs) – such a useful, powerful technology, coupled with near-total apathy on the part of the user community to implement it. Okay, that’s a bit of a generalisation (but hopefully, it got your attention). Public key cryptography is in common use; in fact, every time you see the little golden key in the status bar of your browser, that’s PKC at work. However, it’s a point-to-point thing. What has never quite caught the imagination of the wider world is the implementation of digital certificates for more general use.
If this has been gobbledegook so far, it might be worth defining some terms before we move on. Encryption is a well-known way of protecting data from being seen by the wrong people; the tricky bit, however, is how to let people know how to decrypt the information when it arrives. The encryptor needs to send a suitable key, and if this falls into the wrong hands (or even the right ones), what’s to stop another person using the key and passing themselves off as the encryptor? To solve this, the eggheads came up with ‘public key encryption’, which uses two keys. A message encrypted with the first key can be decrypted with the second key, and vice versa: one of the keys is kept private, and the other is made public. This enables a number of uses, for example:
if you want to send me a message that only I can read, you can encrypt it with my public key, knowing I will be able to decrypt it with my private key.
if I want to send you a message and I want you to be sure it came from me, I can encrypt it with my private key, and you will be able to decrypt it with my public key.
This simple scheme has other advantages: as it ensures that the originator cannot be impersonated, it can be used as a mechanism to guarantee the origin of any information, encrypted or otherwise, as the information can be “signed” (a.k.a. provided with an encrypted header) with the originator’s key. In security circles this is known as “non-repudiation”. Finally, there is the knock-on benefit of data integrity – any tampering with the signed information would be detectable.
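For the curious, here is a minimal sketch of both uses in Python with the third-party cryptography package – my illustration, not something the article depends on. Note that modern libraries express the “encrypt with the private key” case as an explicit sign/verify operation rather than raw encryption.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = b"the quarterly figures"

# Confidentiality: anyone may encrypt with my public key; only I can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# Origin and integrity: only I can sign with my private key; anyone can verify.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())
# verify() raises InvalidSignature if the message or signature has been tampered with.
```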
Powerful stuff, but simple can very quickly get complicated. Should everyone want to use public key encryption, then everyone would need to manage everybody else’s public keys: this is neither a pleasant nor a likely scenario. In its wisdom, the industry has defined a framework known as the Public Key Infrastructure (PKI) as a management mechanism for public keys. Managed by a “trusted third party” known as a Certification Authority (CA), a PKI can issue, store, release, revoke and otherwise control public keys, providing a useful service for both the originators and recipients of encrypted or signed information.
So – for people who want to ensure the privacy of the information they send, public key encryption is highly appropriate. As mentioned, Web browsers do it every time they access a secure page using the secure hypertext transfer protocol https. Software downloads, typically for browser plug-ins such as Macromedia Flash, are digitally signed and you can view the certificate and verify its CA for extra personal comfort. In other words, you really are using PKIs already, albeit in a limited way. Today’s e-commerce, still alive and kicking despite the dot-com crash, could not function without public key encryption, as it gives businesses and their customers confidence in transmitting vital information over the wire. Despite all this, there seems to be a singular lack of interest in taking such facilities any further. Have you, personally, set up an encryption facility on your own computer to transmit sensitive personal and business information (for example, via email)? Of course you haven’t, or if you have you are in the absolute minority. Rather than whipping yourself about it, you could ask one of two questions: first, why have you never received a single email, from your colleagues, superiors, business partners and customers, which requires you to decrypt the message or verify the identity of the sender? Second, why is your organisation not doing anything about it? After all, if no one else is doing it, and nobody is enforcing (never mind requiring) it, why should you be any different? Despite the obvious threats of fraud and invasion of privacy, companies and individuals still appear to have a pretty relaxed attitude to the security of the Web. Viruses and worms exist that re-send random emails from your outbox to random recipients from your address book – the possibility of sensitive corporate information arriving in the lap of a customer or a competitor seems quite real. At the same time, the lack of a comprehensive security framework for the Web is cited as one of the main factors why companies are slow to adopt the Internet as part of their infrastructures. Something has to give.
Security vendors such as RSA and Entrust have been baffled for years as to why the take-up of PKI products has been so small. There are many factors that have hindered adoption in the past:
PKIs have remained expensive despite several initiatives (such as Identrus for the financial industry).
Both sender and recipient need to agree to use public key encryption in their transmissions. As PKIs are not yet ubiquitous, this leads to a Catch-22 where everybody waits for everyone else to start using public key encryption first.
CAs such as Verisign developed a bit of a reputation for letting anybody create an entry in their directories (go to Verisign’s web site at https://digitalid.verisign.com/ and search for Mickey Mouse, for example – there are at least 50) – this has not helped the “trusted third party” cause.
The interoperability of PKI implementations has been flaky. Again, initiatives such as the PKI Forum interoperability framework and the creation of standards such as XKMS for key management exist to counter the problems.
Finally, the security of the Certificate Authorities themselves may be at risk. Organisations such as ECAF in Europe are building policy frameworks to which CAs will have to comply, but at the moment many countries do not have trust policies for CAs.
All of these factors contribute to a “not yet” policy on PKI. The technologies required for PKIs already exist, but the world is not yet using them and maybe never will - knowingly. There are more pressing problems to be solved, particularly as (like disaster recovery) confidentiality and non-repudiation only become a priority following a problem. Regardless of the current apathy, perhaps PKI really is a problem to be solved by the infrastructure providers – not only the ISPs but also the Ciscos and Microsofts – and not by end-user organisations. The management of the key handling part of the infrastructure can, and maybe should, be outsourced – plenty of companies believe the latter, including IBM and EDS. More recently, in December last year, the fledgling Web Services standards were enhanced to incorporate PKI-based security mechanisms. Many applications in the future will require the Internet as a backbone, and hence most will need to leverage the enhanced security that a PKI can support. Once PKI is delivered as an integral part of the application, and is managed as an outsourced service, it will be used – likely with a sigh of relief by many businesses, who can only benefit as a result.
June 2003
06-01 – Through The Fog Management Of Utility It
Through The Fog Management Of Utility It
Through the Fog: Management of Utility IT
In the early nineties, there was much talk about frameworks for network management, which claimed to be able to monitor and control network elements (routers, hubs and other hardware) from a single point. Various companies had frameworks for sale, including Sun, HP and BMC; some smaller companies such as Boole and Babbage, Candle and Tivoli were also setting out their stalls. Memory fails as to exactly when each of these products came to market, but the buzz was unforgettable. Buy one of these frameworks, the marketing used to read, and your network management issues will be a thing of the past. Not to mention doing your washing and resolving marital conflicts, no doubt.
Unfortunately, the brave new world promised by the management frameworks never happened. There are a number of reasons for this, not least the ever-increasing complexity and scope of the things to be managed. Network management was never enough, so server management, application management and database management were added to the pool. But even that was not enough, and to add insult to injury, the damned thing to be managed kept changing without leaving the tools any time to keep up. The cheek of it.
Today, the management landscape appears as hyped as ever. We are told that utility computing, the flavour of the month with many vendors, cannot exist without management software, which is being given another shot at the limelight as a result. Today, several companies occupying front line positions of utility computing – Sun, IBM, HP, BMC and CA for example – include management as an essential element of their solutions. It’s probably best not to get into a discussion about whether or not utility computing is a good model. For now, consider it as the latest incarnation of delivering IT as a service - a few years ago the solution was outsourcing, and more recently there were ASPs, both of which were different routes up the same mountain, so to speak. Rather than worrying whether this services “mountain” is right or wrong (as it happens, we think it’s right), let’s consider what a management framework needs to be able to deliver if it is to support the utility vision.
For a start, if IT is delivered as a service, then services need to be considered all the way down through the IT stack: applications, databases, computers and connectivity all need to work as a service. Management also needs to be provided all the way down, so that issues with service delivery can be traced to the source and dealt with accordingly. Higher level monitoring and fault resolution need to interoperate with facilities at lower levels, for example it should be possible to reallocate server time or storage space without running off to several different consoles in other buildings. It may never be possible for one company to provide management tools for every situation, therefore it should be possible to incorporate components from several management providers into the same framework. Of course, different parts of the IT stack may be managed by different departments or even external providers, for example connectivity may be provided by a third party Internet Service Provider (ISP). Therefore there should be interfaces between management applications, for example between enterprise and telecommunications management platforms, to enable monitoring from a single point. To support the utility vision, this should include support for charging mechanisms, whether services are internally or externally provided – how interesting it would be to see exactly how much a specific service costs, when all of its components are taken into account.
Second, information needs to be delivered at a business level. Database access times, for example, are interesting but a lot less relevant than the speed of delivery of, say, an e-commerce transaction or an inventory check. It should be possible to enable the monitoring of true business processes – “customer buys a product” would be a good example of a process which is directly impacted by an IT service being delayed or restricted. Therefore, management frameworks need to interface with tools higher up the stack, not least to helpdesk applications, but also to workflow platforms and even business applications such as Customer Relationship Management (CRM) tools. If information is being delivered upward, you need to consider what information needs to be seen – a jumble of mnemonic-clogged alerts or flashing icons will not mean anything to an application or business manager, for example. Management tools need to deliver information in a way that business users can understand – for example, following the conventions of a corporate Service Level Agreement (SLA). Tools also need to deliver information to other management tools, for example in a Web-based form that can be viewed within a portal – this should include both online information and historic reports. Similarly, a framework that offers its own portal interface will be better placed to integrate information from other tools.
Third, a management platform should be proactive, not just reactive. It is useful to have a monitoring station that says when things have gone wrong, but it is better to include mechanisms to fix the faults at the same place. Better still is to incorporate a level of intelligence so that problems can be identified before they happen – a simple example is, “given the speed at which that disk is filling up, it is going to run out of space soon,” and a more complex example is, “the last time we saw this much interest about one of our Web site special offers, the server crashed, and it looks like the hit rate is going to exceed what we saw last time, so we’d better take steps.” Once a problem has been identified, its resolution should also be made as simple as possible, for example through automated reallocation of resources or the provision of simulation mechanisms to ensure that a fix will not cause more problems than it solves. Management frameworks need to incorporate policy-based engines that can incorporate pre-defined policies (“no user allowed more than 2Gb disk space”) and generated policies (“if network load increases beyond this point, we may have a problem”, or “the number of failed Web site transactions has become unacceptable”), and there is no reason at all why these cannot be linked to business policies and rules.
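As a rough illustration of what a policy-based engine means in practice, here is a minimal sketch – mine, with invented thresholds – that evaluates a mix of pre-defined and generated policies against a set of current metrics.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    violated: Callable[[dict], bool]  # takes current metrics, returns True on breach
    action: str

# Pre-defined and "generated" policies; thresholds are invented for the example.
policies = [
    Policy("user disk quota",
           lambda m: m["user_disk_gb"] > 2,
           "reallocate storage and notify the user"),
    Policy("network load trend",
           lambda m: m["network_load_pct"] > 80,
           "raise an early warning before saturation"),
    Policy("failed web transactions",
           lambda m: m["failed_tx_per_min"] > 5,
           "escalate to a service-level alert"),
]

metrics = {"user_disk_gb": 2.4, "network_load_pct": 65, "failed_tx_per_min": 9}

for policy in policies:
    if policy.violated(metrics):
        print(f"{policy.name}: breached -> {policy.action}")
```

The same structure could just as easily be driven from business rules (acceptable transaction failure rates, say) as from infrastructure thresholds, which is the point the paragraph above is making.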
Finally, the management framework itself should be cost-effective out of the box. Licensing has long been an issue for management tools, which have priced themselves out of all but the most expensive markets, leaving smaller organizations in the lurch. It should at least be possible for these companies to have some visibility onto the services they are being provided, for example if they use ISPs or Application Service Providers (ASPs); a low-cost management portal that supports smaller companies, but which can grow as they grow, is long overdue. Incorporation of management components that could be used on an on-demand basis would help smaller organizations benefit from better management, as well as enabling companies large and small to operate their IT on a more utility basis. Simply put, “if you don’t need it, don’t use it and don’t pay for it.” Facilities such as auto-discovery are indispensable to enable monitoring to start as soon as possible after installation; provision of management best practices as policy templates would help reduce the learning curve. It goes without saying that deployment of any management facility should minimize disruption to the services it is designed to oversee: there should be no performance overhead for applications or networking, there should be minimal security implications for the applications being monitored, and of course, the framework’s resilience should exceed that of the infrastructure being managed. Last but not least, a transparent migration path should be offered from other tools.
Sounds great – but does such a framework exist? Not quite. Current platforms tend to remain inside their comfort zone of managing infrastructure, without too much attention to what is happening outside this space. Delivery of business information is possible, and even tested in some cases, but is not yet the norm; meanwhile, standards-based integration with the lowest levels of the stack may become possible with the recent, joint announcement of collaboration between the Distributed Management Task Force (think infrastructure) and the TeleManagement Forum (think telecommunications). Perhaps the last remaining issue is how to get a shared understanding between all parties – the framework software companies, the IT vendors and service providers - about what businesses really need to gain true, service-based management of their IT, to enable it to be delivered as a utility. Like so many innovations of the 1990s, management frameworks were not wrong, but premature. And frankly, they still have a way to go.
06-03 – Through The Fog User To Technology
Through The Fog User To Technology
Through the Fog – Connecting the user to technology
Jon Collins, 3/5/03
The history of information technology has not been without its problems. Even its name is a giveaway – the “information” in IT suggests that our focus has been more on storing stuff than on what we can do with it. There have been some remarkable advances in technology – even the processing power of the average mobile phone would have filled a room just a few decades ago – and these have been accompanied by mind-boggling leaps of understanding. The Internet, or at least the Web, bears witness to how, when things come together, they do so very fast indeed and with dramatic impact. Don’t be fooled by the current downturn (we’re not allowed to use the word “recession”): the Web was and still is a remarkable, world-changing advance in both technology and our ability to harness its power.
All the same, despite these leaps and bounds, it feels sometimes that we are still sloshing around in the primeval slurry of technology. For organisations large and small, there are no right answers. The latest enterprise offerings provide infinite scope for customisation and enhancement, but first we must understand what it is we want to achieve, a task that seems impossible without spending the GDP of a small country on management consultancy. The flip side, of packaged applications offering a one-size-fits-all approach, requires us to somehow squeeze and pummel current working practices to fit the intricacies of the product. It’s not all bad news: at either end of the scale, the products (in isolation or combination) do just about everything that might be required of them, however there would appear to remain a gap between need and reality. The question of functionality seems to be largely solved, meaning that today, the issue lies not with the question, “what can I do with the computer?” but with, “how can the computer best fit with the way I want to work?”
This issue is not new, but it remains the unsolved problem of computing, namely how machines integrate into the very human environments that require them. People need to interact with computers, and people need to interact with each other. So – is this about user interface design and collaboration software? Well, sort of – but remember, this is an unsolved problem. Nobody (well, perhaps Bill Gates) would maintain that the Windows interface is the epitome of user interaction design, an example of perfect harmony between man and machine. The same could be said for PalmOS, or any number of mobile phone input mechanisms. Nice try, but no cigar. Similarly, users of groupware packages such as Lotus Notes or Microsoft Outlook know too well that the term “collaboration” is a misnomer for “email and calendar sharing”. All of these things have a long way to go if they really want to make their users’ lives easier.
The problem lies squarely in the user interface. Fortunately for the IT industry, computer users are a loyal bunch, otherwise they might have binned the whole lot years ago. Companies talk in terms of “leveraging assets” and “maximising the return on investment”; what they really mean is, “we’re stuck with it now, we can’t afford to replace it so we’d better make the best of it.” And, truth be told, is there really anything that can be done outside the research labs? Funnily enough, yes.
Here at Quocirca, we’re not great believers in silver bullets. We would be, you understand, but we’ve seen too many, and the vampires are still there. However, things are moving forward. Rather than putting all the money on a single horse in the race, it is worth looking at how the race is evolving. In previous Through The Fog articles, we have looked at the evolution of voice technologies, and how these are coming of age given the arrival of hardware platforms that can actually support them. We have also looked at predictive text messaging and MMS. And more recently, we’ve been road testing Tablet PC’s and their suitability for various application types, and looking at peer to peer collaboration tools such as Groove Networks. And it is in these areas where things have been getting interesting, and indeed disruptive in no small measure.
First, the Tablet PC. At first glance, a gadget-lover’s must-have, with the I’m-a-laptop-no-I’m-a-workpad gimmick available at the flick of a screen. Second comes the handwriting recognition, which works like a dream, picking up genuine handwriting rather than some reductionist character set. Out of the box the applications seem useful but not compelling, making one wonder whether the Tablet really is no more than a gadget. Surely there must be more that can be done with this device other than its operation as a keyboardless version of a standard laptop?
Indeed there is – if applications such as Mind Manager are anything to go by. Traditionally, Tony Buzan’s mind mapping techniques (think spider diagrams on speed) have translated into adequate, yet somehow lacking, on-screen applications. Mind Manager is one of these: its capabilities for managing and sharing multiple maps notwithstanding, there is still nothing quite like getting out a fresh piece of paper and a pack of coloured pens to really get the best out of mind maps. After all, they are all about stimulating the right side of the brain, and the combination of a computer screen, keyboard and mouse will not have the same stimulating effect for everyone. Enter the Tablet, armed with the latest version of Mind Manager. Maps can be drawn as they were meant to be drawn, they can be edited with the swish of a stylus, and then they can benefit from all the additional facilities that electronics can provide. So far, so good.
Second, Groove Networks. “Invented” by Lotus Notes creator Ray Ozzie, Groove takes all the best elements of peer to peer file swapping packages (think the now-martyred Napster and its offspring, Kazaa) and uses them as a foundation for a business information sharing environment. All the standard stuff is there, such as shared files, calendar management and discussion lists. But these are just scratching the surface. In addition, there are numerous plug-in tools including document reviewing, meeting management and project planning. Drill down further and there are visual modelling, CRM and, you guessed it, collaborative mind mapping tools available.
Groove is not without its problems, for example the way in which files are shared makes it an uncomfortable tool to use in low-bandwidth environments. However, like the Tablet PC it does make possible things that were uncomfortable to do, for example enabling both office and home workers to access a single pool of information. Also, because there is no server, a multi-user Groove environment delivers a level of availability out of the box: if you lose your hard drive, those vital files are just a download away.
The Tablet PC and its synergy with Mind Manager, or the application of peer to peer networking with tools like Groove Networks, are not just examples of how technologies can evolve, and neither are they the only examples, but they make the point. By putting new interaction mechanisms together with new collaboration mechanisms, we can start to see ways towards delivering real solutions in this as-yet uncharted human layer of the technology stack.
At the enterprise level, we only have to look at the amount of discussion at the moment around business processes. Barely an analyst report or a marketing brochure fails to mention the P-word, as it is rightly understood that it is these that define the activities of the business, and hence what needs to be automated in order to deliver business value. But can life really be as simple as dragging and dropping a few business processes from an online repository? The answer is a resounding “no”.
Let’s consider. In 1993, Mike Hammer and James Champy published what they called “a manifesto for business revolution”. At its heart was a simple premise, that delivery of value happened across organisations, and not down them. By concentrating on the process rather than the organisation’s structure, it would be possible to vastly improve organisational efficiency.
Of course, Hammer and Champy had invented nothing new. It is the same principle that is espoused in TQM, in agile manufacturing and so on. What the pair managed to do, however, was to capture the hearts and minds of the business. Of course this was no different to quality management, but quality is for clerks, not for directors and VPs. Business Processes, now there’s a more attractive term. And it worked.
Companies like iLog claim to enable you
More powerful products of the genre, such as Notes, only come into their own if they have had sufficient time spent to configure them to meet the needs of the organization.
In both cases, interactions can occur on the macro level or on the micro level. Let’s consider some examples of how people can interact with others and with computers at a macro and micro level.
| | People | Computers |
|---|---|---|
| At the macro level | Projects, schedules, contracts | Tasks, business processes |
| At the micro level | Handshakes, jokes, eyebrow movements | |
The solution lies in the user interface of an application, which:
meets the needs of the business process
enables the user to interact in the most appropriate way
To put it another way, that’s what application vendors are talking about when they mention “business processes”. Unfortunately for them however, things are more complicated than that.
There have been some good efforts from a number of companies, notably IBM in the 70’s, HP in the 80’s and just about everybody in the 90’s.
We can consider this at two levels:
The macro level, which is defined in terms of processes, activities or tasks. You too can spend a happy workshop trying to work out what the difference is, before realising that it’s how you apply the terms in your organisation that counts. Don’t worry, we’ve all been there.
The micro level, which is defined in terms of interactions. Usability is king here, and it is a black art: you only have to attend a usability event to realise that computers are of secondary importance.
In summary, let us look at where we are.
Since the arrival of the first computer, information technologies have been targeted at solving business problems. They haven’t yet succeeded in automating the organization, though they have provided useful tools.
Services provide a common thread to link business and technology. We should use the concept of services in our quest to achieve what has thus far proved unachievable.
As we move on, we shall look at concepts of processes versus patterns, dynamic versus static. Only by balancing the what with the how, can we succeed.
Technology at the macro level is starting to have an impact, at least if you believe the marketing. Companies like iLog with their business rules engines certainly think so.
Meanwhile, at the micro level, there is still a long way to go. Perhaps this is a good thing: if anyone ever really worked out the answer, we’d all be out of a job.
Voice recognition
M-Urge
Focus 5
Let’s face it, we don’t even know how to model some things, never mind how to automate them – mind mapping is one example of a relatively recent approach which then drives automation mechanisms. We can only imagine what software could look like if it actually fitted the real needs of inter-human communications. Throw voice recognition into the pot and new directions start to emerge. Not only input but output mechanisms – how to present data – Computer Associates’ visualisation techniques were valid perhaps when they were presented in 1999, but maybe before their time.
Technology doing aggregation
McLuhan
The medium is the message –
Meta-activity – training people to use certain thinking techniques and amplifying knowledge of strategies at hand, entrains best practice. Populist orientation.
Matrix of human minds sharing concepts with each other – the human layer
October 2003
10-02 – Asset Stripping By Jc
Asset Stripping By Jc
Asset Stripping by JC
I’m starting to wonder about these multi function devices, PDAs and so on. Frankly I’m not using my Dell Axim PDA half as much as I used to, and I’m not absolutely sure why, as it does everything that I need it to. To my surprise I found myself this morning putting in an order for a digital voice recorder even when there’s one on my phone. The other thing is I’ve found out that I can check my emails from my mobile phone using GPRS and WAP, and I find that an awful lot more useful than trying to do it from a PDA because, for a start, the messages don’t get downloaded – I’m just reading them.
I wonder about how you’d look at it in someone’s garage. These multitools are all very well, you know, these things with pliers, screwdrivers, tin openers and the thing for getting cub scouts out of horses’ hooves, but you don’t walk into a workshop and see a single multitool; you’re more likely to see more than one of the same thing with very similar functions – take a handful of chisels, or a drill and electric screwdriver. At the end of the day it’s about having the functionality to hand. A case in point: imagine wanting to cut a sheet of paper in half. What you can do is open up the multitool, find the right attachment, lever it out, cut the paper with a pair of scissors (that are actually a bit small, but you cope), then fold the multitool (not always easy), and put it back. Alternatively – take the scissors, cut the paper. Now I think there’s a lesson to be learned by manufacturers – there will not be one device to rule them all. People will have multiple devices, each optimized for its particular function. If that’s the case, then the most important thing is that they work together where necessary. It’s all very well building in Bluetooth so that you can transfer information back and forth, but build in USB so they all talk to the big mothership that is the computer, and life will be as it should.
Meanwhile, back at the Dell Axim, I would conclude that it’s not the perfect device. Maybe I’ll go back to the Palm. It does all these wonderful things – music, email, word processing, games – but the main thing I’ve used it for is to play cards. The beginning of the end was trying a wireless card that didn’t work as well as hoped. Here I am with multiple computers, the most important thing being access to information from any device, rather than being device-constrained. It’s how devices share information that is most important – a management problem and a usability problem. Any device should be able to do what it does with a minimum of grief. These are tools after all. This isn’t about laziness, this is the convenience required by someone with a multitude of tools and who has the will to choose the right one for the job.
You have to find the right thing off the menu, and it doesn’t give you the album in the right order with the software supplied. You could argue, well, why don’t I install a package that does a better job, but why should I have to? Why don’t I just get a custom device that plays MP3s? Which is in fact what I have done: I have a Digisette which I can load an album onto and go forward and back on, and when I want to play an album I just hit play. Similarly with other devices, like a mobile phone. There are lots of crossovers between all these devices, which crushes the idea of one device to rule them all. That’s without even talking about desktops and laptops and servers and tablets and so on and so forth.
It could be argued that devices still can’t do everything in a small enough form factor, which has an element of truth. However the usability factor remains the most important. It’s true at the highest end as well – I was talking to a friend who works in a recording studio, which has just acquired a new, analog mixing desk. Why not digital? I asked.
There isn’t an ideal form factor, and component costs are such that it should be possible to have multiple devices in multiple shapes. You wouldn’t have a single chisel, you’d have ten chisels. But if chisels were very expensive, you’d only have one and you’d make do. Chisels are affordable, therefore have a set of ten and choose the best one for the job. So, there is a role for the combo device, when you weigh up convenience and not having to carry 50 things; there is something to be said for not having multiple devices for security reasons – these things can get lost and stolen and broken – and for complexity reasons, but that can be like saying that buying a new biro adds to the complexity of a pencil case. At the end of the day, it’s about simplicity, but digital cameras and video cameras will require different things, like battery life. You still can’t beat an SLR, digital or analog, because of the quality of the lens – a combined device is never going to get better than instamatic lens quality. It becomes far more than squashing features into a device, because it becomes impossible to find them. Back to the Dell Axim and why I’ve gone off it: the reason is I find it more useful to have multiple cheap devices, each with a simple, defined purpose, rather than a single multi-function device that falls between a rock and a hard place. I’m not going to get rid of it, there is still a role for it, but it’s not going to replace everything else.
The Pocket PC interface is pretty but it doesn’t actually get you to the information or functionality you want in the way that you as a user would want to get to it. There are all kinds of menus, today screen, settings and programs areas, as well as the file manager, all of which might reveal the information I need. The same files appear in different places, and there are no obvious places for keeping things. There are also bolt-on shareware window managers.
All kinds of bolt-on window managers, but again, I shouldn’t need them. This is the device mentality – it should give me access to what I need, out of the box or not at all. Pick up and go: not pick up, spend a couple of days looking for replacement interface managers, downloading and paying and adding to the price.
Setting up connections to access the Internet is mind-bogglingly complicated. It can be done but it’s certainly not intuitive: phone numbers, dialing locations, internet accounts – all three things are linked, but not in a way that makes sense when you come back six months later, hunting around for how you set it up last time. That can’t be right.
So – if we look at the device market, there’s the question of what is a mobile and what is a PDA. There are certain functions that you’re going to want and certain contexts you’re going to want them in. A device should be a factor of function and context: if you’re going on holiday and you want to take some photos and shoot some video on the beach, the device should fit that need. If you are on a business trip and want to take some notes and check email, the device should fit that need. If you’re going to a party and you want to coordinate with your friends where and when to meet, you want a device to fit that need. If you’re driving in a car and you want to dictate an article about a Dell PDA no longer being the thing for you, you want a device to fit that need. In this case, an Olympus VN-90 voice recorder. And so on.
What we have is a device version of the Pareto principle – there will be a subset of devices that fit the majority of requirements. Manufacturers would do well to identify what those needs are; then they can address 80% of the market and leave the rest to specialized devices. There will be technogeeks who want to have absolutely everything, people working in places where they can’t carry a selection of devices and so will need specialized ones, and anal obsessives who will want a specific device for every job, and so on and so forth. No different to pencil cases or toolboxes: it’s about having the right tool for the job. I can’t say that the Dell PDA – and this is less to do with Dell and more to do with Pocket PC – has arrived; it’s a step along the way.
The keyboard doesn’t enable you to type – you have to hit the keys pretty hard, particularly the spacebar, otherwise youendupwithasentencelikethis. So, it’s not ideal.
Do I really want a PDA to go jogging? No, I want something that is light, portable and easy to use.
As it is, every tool-man has a set of tools he uses the most.
Is there a place for a streamer device – a cartridge that operates like an MP3 player?
10-07 – Buffalo 1
Buffalo 1
Wireless Networking, Home and Away
I have a Buffalo ISDN router. Unfortunately we live beyond the required distance from an exchange that is in any case unlikely to achieve critical mass for broadband, so ISDN is a necessity.
We have three computers – one which is connected via a Buffalo USB adapter, a laptop which has a wireless PCMCIA card (sorry, PC-card) and a second desktop which is close enough to the router to merit a wired connection. Oh, and let’s not forget the compact flash wireless card for Little Dell, the PDA.
The most obvious advantage is that there are no longer wires trailing all over the house. There are some other benefits too. Backups are a theoretical breeze – theoretical because one still has to remember to set them off. With the addition of printer sharing, anybody can print to any printer, very handy when that inkjet cartridge runs out and there aren’t any more in the drawer – a common phenomenon with two pre-teens in the house.
There are some problems, but nothing is particularly insurmountable. The range of the PocketPC card is, well, hopeless – probably about 5 paces at best. This might be half useful if the router was in the middle of the house, but it is in the bottom corner. Of course, I could buy a wireless repeater or a higher power aerial, but for now it’s not the end of the world.
The Buffalo range itself – well, let’s just say it’s not for the faint-hearted. Things have improved on the Web site since I first started going there, but the PC-Card (sorry, PCMCIA) driver is offered without warranty or support, hardly a motivating factor. As far as I can tell, this is the same version that was available two years ago, so I would have hoped they’d have things worked out by now. Still, everything works – mostly. The line doesn’t always drop when it should, and it is difficult to suspend a laptop PC.
Buffalo seem to deny they ever made an ISDN box, which makes me just a tad concerned when I have one right next to me.
Data access speeds are – well, slow. This is 802.11b here, but don’t expect to be pumping a DVD from one computer to another with 802.11g either, particularly if there’s someone else contending for the packet space.
It works quite well for a small business operating from home, but for a larger company there are complications in terms of who pays or, worse, who supports it.
Out and about is – well, interesting. Having walked the streets of Soho looking for a Hot Spot (honest!), I can say that there is still some work to be done here. It’s the same in other parts of London, never mind other cities. And don’t be fooled into thinking that things are different in the land of the brave, either – while I was in Boston I stumbled across a wireless provider in a backstreet and he recommended a bar where he’d installed a connection. After a boring time trying to work out what was wrong with my laptop, I went back to the provider and we ended up down in the kitchens, trying to resolve problems with the connection.
Now, don’t get me wrong: most normal people will not be in a position to try to repair the wireless link, but it does illustrate the need for highly reliable equipment. Many places are just not designed to be wired for wireless.
The reach can be extended as well. I happened to be at a conference in desperate need of sending a 12 Megabyte file to a client (via Groove, if you’re interested), and the wireless came into its own. It did help inordinately that there was a BT Openzone stand, and some bright technical young things to resolve the teething problems.
Perhaps the real advantages will come when other devices start to plug in, such as the Linksys media converter, that lets me watch photo slide shows on my TV, or listen to MP3’s through my stereo.
USB
If you want to see what’s coming next, you only have to check on eBay, as enterprising Korean manufacturers have discovered it is an efficient way of missing out the middle man and shipping direct.
The Aladdin USB dongle has been used in the past, and Aladdin now have a dongle that can
One thing’s for sure – this is a form factor we’re going to be seeing plenty more of.
MP3’s
There is an issue however – for a start, copying CDs to MP3s is not legal in this country. Hmm. Also, the bizarre mechanisms that the music companies are putting in place are preventing CDs from being listened to on some stereos, let alone CD walkmans. That’s before even getting to car CD players.
I propose a bit of consumer power here. I understand that there need to be some legal rules imposed, and stealing is still stealing. What I can’t understand is why it should be the record companies that define these things, and not the consumers. Whatever happened to consumer rights?
Here’s some contractual statements that can be applied:
I will not buy this music unless:
It is readable on all devices I choose to play it on, including DVD players, computer CD-Roms, games consoles, car CD changers, and CD walkmans
I am permitted to make a copy for my own personal use in any format that I choose, which will probably be MP3. I can load the songs onto any device that takes my fancy, and play them to anybody who is interested.
I can lend them out to whoever I like, and they can listen to them as much as they like, and can copy them to any format they like
I understand that the purchase of this music is a license to play the music. Should I already have a copy on tape or record, I can purchase an upgrade at a reduced price. If I already have a copy, I am able to download, burn, digitise or otherwise acquire any other format I choose for my own listening.
There are some things that will not change. People love packaging – as illustrated by the repeat sales of DVDs that include additional material, or a poster, or a balloon. People love stuff – would you give your mother a copy of the latest XX album that you had downloaded and burned yourself?
The record companies want to have their cake and eat it, prompted mostly by the costs of marketing and promotion in order to get their material played. The question is – would the alternative model fare any worse, for example spending half of what is currently spent on promotion, putting out a broader variety of artists, and spending more on the production and packaging?
Towards the perfect device
© Jon Collins July 2003
The question is, can one size fit all? Yes, and it’s small, argues Jon Collins
“There can be only one,” Sean Connery told Christophe Lambert in Highlander. That is the correct spelling, by the way, Christophe is a Frenchman, which is why he always appears to get less than his fair share of dialogue. But I digress. “There can be only one” is a mantra that could equally well be applied to the device market, as anyone who has lugged around a PDA, laptop, mobile phone, external CD player, MP3 player and camera will tell you. The list doesn’t include the other essential - a portable printer - but I don’t want you to know just how sad I really am. There can be only one, not least to save on osteopathy bills, but also to solve the inherent issues of integration, communication, synchronisation and software compatibility between the lot of them. Whatever this “one” is, it needs to perform all the functions of the rest without compromising on performance. Potentially an impossible goal. Indeed, maybe it is impossible, which is why I’d like to propose a different approach.
I have recently been experimenting with these USB storage devices that seem to be proliferating at the moment. Natty little things, they plug and play with the most recent versions of Windows and Linux, meaning that the data they contain can be accessed on any recent computer with a spare USB port. Handy for backups, neat for file transfer, a good little floppy disk replacement I thought to myself. But then I started thinking a bit harder, and their true potential started to become apparent.
For example, I’m currently using a USB storage device for all of those things. But also – and here’s the (hopefully) clever bit – I have an email application that I run from the device. It’s called nPOP, and the beauty of it is that it is self-contained – it doesn’t use the registry or any external files or directories to run. This means I can plug my USB device into any Internet-connected computer and check email across all my email accounts, without having to specify them one by one and without relying on an email service provider. Sure, there are such things as email providers with Web access, and I could configure one to check my email, but they are rarely sufficiently functional to provide the full service. This package also provides an address book, it can work with attachments, and so on. So, when I travel, I can rely on the fact that there will be computers where I am going.
This USB-storage-device-centric model can be extended to encompass other applications. Of course, first it is necessary for them to be hot-plug compatible, and the majority of modern apps are not. Which brings me to the leap of faith, and something which I shall be checking out. What about Java? If a standalone Java virtual machine can be installed, then an entire operating environment is provided without recourse to the host operating system. There are already a plethora of productivity apps (i.e. small, useful ones) for the Java VM, including instant messaging software such as Tipic. And PIM software, though I haven’t tested the packages I have seen. Even if Java isn’t appropriate, maybe a browser-based approach would work, for example using CGI. It should be possible then, to develop a portable environment.
What about those common apps, such as Word and Excel, I hear you shout. Spot on, good point. The answer is that the common bloatware packages are already commodity items, and do not require any user information. Therefore they can exist quite happily on the host computer, separate from the user’s own apps that exist on the USB device. When I plug in to a computer, I would expect there to be a basic suite of apps. Fortunately, in most internet cafes, there already is.
While we’re on the subject of what-abouts, what about all that other stuff, MP3, cameras and the like that I mentioned? USB storage devices already exist that are also MP3 players. There are already MP3 players that are cameras, or vice versa - it’s difficult to tell sometimes. So, given the above, it is not unreasonable to expect a device which is storage, camera, voice recorder and music player in one (not to mention mobile phone). Given developments in this area, it’s probably a matter of months away. All some savvy manufacturer has to do is bundle a similarly useful software suite on the device, enabling it to plug and play and become a truly portable environment. Provide Java VMs for multiple platforms, and support SD Cards, and it really will enable anyone to work with anything on the move. Indeed, the software suite could equally well exist on an SD card as on the device itself. We just need a mobile device supporting Java, which is USB storage compatible, with an SD-card slot, and we have everything we need. Given all this, it becomes apparent that maybe we’re not looking for a single device after all, rather a USB-compatible storage mechanism that will work with everything we throw at it.
There is no technological reason why everybody who wants to couldn’t be carrying a complete application environment on their phone, MP3 player or just on an SD card in their wallet. That’s possible today. Add some imagination and we could talk about having some e-money on the device, which is deducted by the host computer in micropayments per minute used. We could also recommend public kiosks – such as those provided by BT – supporting this model, and proliferating as people realise they no longer have to lug unergonomic hardware whose weight is determined largely by its inadequate batteries.
It sounds simple, but imagine if everyone were doing it: it would bring the truly mobile world one step closer.
December 2003
12-04 – Think Instant Value
Think Instant Value
Think Instant Value
IT industry pundits spend plenty of time discussing the bigger picture, strategic solutions – the CRM rollouts that save millions of pounds (allegedly), or the consolidation projects that reduce the number of servers from three hundred to ten. Strategic solutions need to be architected at every level of the IT stack, and deliver whole business processes and recognisable value to the customer. At Quocirca, to ensure we cover the tangibles and the intangibles, we define the Total Value Proposition (TVP) to be the sum of the benefits minus the costs of having something, compared to the benefits minus the costs of not having it. This works well for those major, structural projects, but more tactical solutions also exist. These are equally valuable, and also require top-to-bottom integration, but they differ significantly in approach and result from their strategic brethren.
Tactical solutions act in support of specific activities. An example of a tactical solution is the lowly-yet-powerful pager: you call a number, the pager beeps, and the owner of the pager knows to call you back. Other examples include mobile telephony, video conferencing, voicemail, email and scheduling – indeed many tactical solutions exist to enable communications between individuals; many others help support quick decision making. They operate, as Lawrence Webb, director of human-oriented software reseller M-Urge, would say, “in the human layer of the IT stack.” Tactical solutions such as these are highly commoditised human-enablers, characterised by a mini-stack that uses established standards from top to bottom. Most tactical applications, if not all of them, are delivered off the shelf – people aren’t developing these things in-house.
There is nothing to stop tactical solutions from being integrated as part of strategic solutions, but their strength is that they can be delivered to operate stand alone. As they exist in the same, recognisable form for multiple people in multiple domains, this means that:
Tactical solutions require very solid vertical integration. They can be considered as a thin slice down the stack, in which every layer has been highly optimised to deliver the specific function.
Tactical solutions should have a consistent look and feel wherever they are found. For example, retrieval of an email should be the same experience whatever device it is accessed from, and whether or not the email is being accessed from within a strategic solution.
Tactical solutions do not deliver business value as such, and there is little merit in reporting on their success upwards into the business. However, their existence can make or break a business process, particularly when the process needs to adapt to an unexpected circumstance. Because they are human enablers, their success or failure is determined almost entirely on the basis of human factors.
One mechanism to determine the value of a tactical solution is the concept of instant value. Just like the value of a strategic solution equates to benefits minus costs, instant value equals instant benefits minus instant costs, and is a calculation performed directly and subconsciously by the more primitive lobes of the human brain. At every moment, networks of neurones are calculating the most efficient way to achieve instant value. These make us drink the cold cup of coffee beside us rather than sending us to go and boil the kettle, or they prevent us from running to catch a bus (remember them) as there will be another one along in a minute. They’re not foolproof – it is instant value that causes us to drive on, lost, rather than stopping to check the map or ask for directions.
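For what it’s worth, the calculation can be written down, even if our brains never do so explicitly. Here is a toy illustration – the numbers are entirely invented – of instant value as instant benefits minus instant costs, applied to the cold-coffee dilemma above:

```python
# Toy illustration of "instant value" = instant benefits - instant costs,
# using the cold-coffee example from the text. All numbers are invented.
options = {
    "drink the cold coffee": {"benefit": 3, "cost": 1},          # low reward, near-zero effort
    "boil the kettle for a fresh cup": {"benefit": 8, "cost": 9}, # better reward, more effort and delay
}

def instant_value(option):
    return option["benefit"] - option["cost"]

best = max(options, key=lambda name: instant_value(options[name]))
for name, opt in options.items():
    print(f"{name}: instant value = {instant_value(opt)}")
print(f"Chosen (almost subconsciously): {best}")
```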
A tactical solution is functioning successfully if it is able to deliver instant value in a given scenario. At the moment when the power cut comes, all the servers go down and there isn’t a support engineer in sight, it is the trusty telephone and paper directory that we reach for. There’s nothing wrong with that. The upside is that, when they are deployed successfully, good tactical solutions have enormous benefit within strategic solutions. Conversely, if the criteria of instant value are not achieved, the tactical solution concerned will be worthless or even counterproductive.
Instant value may seem trivial but it is of fundamental importance, particularly for new technologies. Product managers and early adopters can concoct plausible uses of the latest gadgets, products and packages. However, these artificial scenarios are not the final markets for such goods. Out there, on the streets, in the machine rooms and on the shop floors, the true value of the tactical solution will become apparent, both at the moment of purchase and when the product is used in anger. Recent gizmos, such as USB memory sticks, have tangible instant value. More complex functionality in office applications, such as the Microsoft Binder, fails the test and goes unused despite its usefulness. We can apply this to the old chestnut WAP, for example – why did it fail? “Because it was unusable” is the standard response. This may be true but it doesn’t help anyone understand what should have been done differently. WAP is not rubbish – it merely fails the instant value test. The benefits of WAP – information anywhere – were outweighed by the time it took to make a GSM connection, combined with the number of key clicks required and the quality of the information it presented. Now, with GPRS providing a more usable connection (and repackaged as Vodafone Live!), WAP starts to offer benefits in certain scenarios. Checking e-mail using WAP, for example, is now good enough to be usable.
Instant value also puts paid to the idea of the killer application. What has made the Web successful has not been (as much mooted) a killer app in any shape or form, but the fact that there were enough successful applications delivering instant value, across a variety of scenarios, to tip the balance. As we have all seen, web sites that don’t provide instant value are passed over. We go to web sites we know, rather than looking for something cheaper or better.
Instant value is all around us. Consider SMS and instant custard, MP3 players and toasters, and think how it applies to the technologies you are considering purchasing, or the products you have on the roadmap. To see what has failed the test, you only have to look in the cupboard for discarded gadgets, or software packages that never made it to their first upgrade. Of course, it may be that the costs of the things that fail are outweighed by the benefits of those that succeed – as a colleague pointed out, it might be that “buying Instant Custard makes up for the woolly bobble remover, the electric carving knife, the CD polisher, the TENS device, the device to make your Sky box work with your video when you are not there, the teddy bear that also acts as a pyjama case, the original 640x320 digital camera bought when they first came out, the Video2000 recorder from 1992 and the Satellite mobile phone that I bought to replace the Rabbit mobile I had…” But then again it might not, and this is hardly a good way to do, or to run, a business. No instant value, no cigar.
2004
Posts from 2004.
March 2004
03-31 – Pushing Out The Web
Pushing Out The Web
Pushing Out the Web – The Out Of the Cloud Experience
Jon Collins, 31 March 2004
If there’s one thing we can take away from the triumph and disaster that was the dot-com boom, it is the fact that technology, though it can have a transforming effect, is not the ultimate answer to every business and IT problem. In particular, the Web has given us a grasp of the concept of risk. We use the Internet in the knowledge that it is not perfect, and we make decisions about how appropriate it is for each use based on our expectations of what the Internet can do, and what it is less able to do. Here are some of the everyday realities:
Dubious confidentiality. We use the Web for email, and we are generally quite happy to send emails “in clear”, that is without any encryption, on the assumption that any message will get lost in the noise. Employees regularly send quite confidential or even embarrassing information by email - how likely is it, for example, that the emails that indicted Enron were encrypted? It’s not just email – corporate files are regularly shared using peer to peer facilities or even using online groups such as Yahoo!, which offers only the flimsiest of guarantees of security or service.
Unpredictable performance. The Internet lives forever under the shadow of the potential for brown-out, where the whole thing grinds to a halt. It never has, but nobody would be that surprised if it did. Meanwhile, broadband access is often considerably less than broad, and mobile GPRS access is only fun for those who miss the good old days of 9,600 baud modems. The time that an email takes to cross the Web is sometimes measurable in hours, and other times, seconds, and many Web sites are slow to the point of being barely usable.
Less than 100% availability. There is nothing wrong with the expectation of 100% service, but the Internet does not provide that. Nobody quite knows how big an email we are able to send, but it is generally assumed that anything over 10 Mb is unlikely to reach its destination. The Internet has often been lauded for its DoD-hardened architecture, which in reality may have been more by luck than judgement, but it does seem to keep the connection going – most of the time.
If we have an option, we go elsewhere but otherwise we soldier on. The truth is that, with all its foibles, the Internet is perfectly adequate for the uses it is put to. One of its main strengths is the commoditisation of access, a business model which stems directly from the “pile ‘em high, sell ‘em cheap” school of supermarketing. Indeed, it should be applauded for that – after all, it was low-cost access (remember Cliff Stanford’s tenner a month Demon Internet?) that made the Internet such a success in the first place. However it is unsurprising that the Web has not displaced such services as EDI for financial transactions, or X.400 for military messaging. And a good thing, too.
The Web has a number of other strengths, differentiators or “Unique Selling Points”, to use the lingo. One is its ubiquity – any time, any place, anywhere, you can get access to the Internet, even if it is not very good access. Put a file on the Web, and it will be available from anywhere. Second is that it is a global standard – there are very few applications or devices these days that don’t offer some kind of Web facility, be it browser-based access or reporting by email. Finally, it offers a true service provider model, in which you are free to choose who you use for your site connectivity, hosting your Web site or managing your domain. The price pressures of the commoditised model coupled with Moore’s Law have ensured a highly competitive market.
In summary – as long as you’re not so worried about security, availability or performance, the Internet offers global, standardised access at low cost, using a service-based model. This is not to say that security, availability and performance are unimportant; rather, that they need to be treated in terms of risk management, rather than in absolute terms.
Given these strengths, it seems surprising that companies are not looking for new opportunities to benefit from the Web. One of the major casualties of the dot-com bust was the Application Service Provider (ASP): an argument often levelled against such companies was that no corporation would ever hand over the keys to its corporate data, especially given the impossibility of guaranteeing 100% service. Neither should it – but there may be areas where companies can benefit, without taking such enormous strides and without setting the hurdles so very high.
Perhaps the main contender is email. There is a paradox with email, namely that we let it run over the dubious backbone for 99% of its global journey, then once it has arrived at its destination, we insist on trying to manage it like the CEO’s limousine, hallowing each message like some sacred object, with full backup, archiving and so on. If nothing else, this is hypocritical. There are a number of new pressures on email, in particular deriving from the aforementioned Enron scandal and the resulting Sarbanes-Oxley guidance. The fact remains, however, that every organisation has exactly the same requirements on what it considers to be email functionality. The latest versions of Microsoft Exchange and Lotus Notes are installed in exactly the same way in companies across the globe, and are a low-value drain on often stretched IT operations resource.
Why do we do this? Quite possibly, to keep some semblance of control; perhaps in the past, providers were not evolved enough to deliver the correct service levels; perhaps the applications were not ready. Perhaps, today, we do it because we always have, and this is no reason to resist change.
An alternative is to let the email provider that we already use – the ISP – push a bit further out and add capabilities to what they already offer. We already trust them with our security and availability (after all, what’s to stop even the most reputable ISP from snooping through our email?). So, let us work with ISPs to extend their offerings. One of the first extensions is email virus protection; indeed, it would come as a shock that this is not already done, were we not so used to what we have. Spam prevention is also a no-brainer. Neither should we stop with email – FTP and HTTP downloads can be checked for virus signatures on the highly scalable ISP servers, far more efficiently than they can on individual desktops. This suggests a level of caching at the ISP, which many provide already.
If the ISP is providing an enhanced email service, why can’t it go the whole hog and host the email altogether? We have already determined that we trust the ISP to store and forward email, so it makes sense to let them take as much of the load as possible. So why don’t we hand over our Exchange servers? If we are worried about compliance, we can work with an ISP that offers a standards-compliant email service – in this way, compliance becomes a tick in the box, rather than having to first understand the guidelines, then implement them. If we want to spread the load, we can work with a third party email provider such as Cobweb, a compliance specialist such as Zantaz, or a third party email security service provider such as BlackSpider or MessageLabs. The result is the same – another company is dealing with the bulk of the email service and mitigating the potential email security risk, taking the load away from the corporate servers and the IT pros that manage and support them. They can also be dealing with ensuring that emails are backed up and disaster tolerant – indeed, most ISPs can offer a far more resilient service than most medium-sized organisations.
Another way ISPs can extend the service is by providing VPN functionality. As already discussed, we already trust the ISP not to snoop on the data as it passes through their servers and routers. VPNs can terminate at the service provider, for example, and the ISP can assure the confidentiality of the data through to the corporate site. This may not be sufficient protection against Robert Redford’s Sneakers, but we are looking to minimise the risk, not eliminate it, and there are probably bigger risks than that to our corporate data – are we going to start TEMPEST-protecting our computers, and don’t we already trust providers in a similar way for our voice calls?
Finally, we have data backup. Given what has already been said, how many organisations can truly depend on their own, internal backups – so why not use an online service? Connectivity that is good enough for a mission-critical service such as email is also good enough to support data backup. Companies that fear eavesdropping can use online, encrypted backup services such as those offered by Connected.com. Users of hosted Exchange services can already benefit, for example by creating a private Exchange folder and dragging stuff into it. Bingo – it’ll be kept securely, resiliently, and will be there whenever and wherever you need it. Online Sharepoint facilities are the same, or indeed the file stores offered by Akamai or eProject. Akamai’s offering adds a great deal of value – not only do you get the resilience, but because you are leveraging what amounts to a global file store, you benefit from high levels of performance.
There is a caveat to all of this – it is perfectly possible to offer a poor service, and one that is insufficient for email, VPN or backup, or anything else. The point is that most ISPs do not – companies tend to vote with their feet, and ISPs offering a poor service will not last long. Note also that a chain is only as strong as its weakest link – online virus protection does not protect a business from a virus brought in on a CD, for example. Finally, the strength of any service should be determined by using it – for example, to find out how configurable the service is to meet a company’s own needs. Self service is the key.
It is still early days, but there are signs that the Web will be widening still further. One example, from Cisco, is a call centre infrastructure that can be shared between multiple companies for call routing and handling. Once again, we already rely on service providers for the bulk of call routing, so it is only a small step to consider the addition of call centre features. As with the other examples, the benefits are reduced costs, better security and enabling businesses to focus on their own activities.
Once again, self service is a driver to make it work – call centre routing, for example, can only work efficiently if it can be configured by the organisation. It is important to ensure that the facilities work with what is already there, and you need to choose your providers carefully, but the bottom line is, there is significant extra value you can derive from your service providers at significantly less extra cost. It would be folly not to.
April 2004
04-02 – Connected
Connected
Getting Connected To Online Backups
Jon Collins, 18 March 2004
I’ve long been an advocate of getting other people to do things for me. There are several reasons for this, which I’d like to claim are mostly down to time and will, but I must also confess that competence is a key factor. Fortunately, I know I’m not alone, particularly in the area of data protection.
Let’s do a quick straw poll. How many people out there back up their work laptops? Now, there will be a certain number who have no choice – the IT department has configured things to make sure it happens. Many others, however, will not. Now, how many people know somebody who has lost all the data on their laptop, through theft, hardware failure or accident? Look at all those hands – it doesn’t look good, does it? And that’s in the corporate environment.
Things get even more scary in the home, the small-office-home-office or the small business, where only the most anal are actually making sure that their highly valued data, be it digital photos, email, sales reports or partially developed programs, is being backed up. Given that computers are still so fragile, this begs the question – why? Why do we leave ourselves so vulnerable? The only answer, I’ve come to the conclusion, is that it is actually quite tricky to protect data. Sure, anyone can burn a CD, but where do you put it? If you do it every week, how do you know which is the one you should refer to? If you do a full backup, it will be too slow, but an incremental backup (picking up only those things that have changed) is almost impossible to manage, as anyone who has trawled through the past six months of increments looking for a particular file will attest. The proprietary formats used by many backup packages don’t help, meaning you often have to remember what you used for the backup, so you can do a restore. Now, there may be a package out there that has resolved these issues, but I’m not aware of it off the top of my head, and if I’m not (just your average, computer-literate soul), what chance has anybody who hasn’t spent the last 15 years working in IT?
The point is, there are answers, but you really have to work hard to find them out. There has to be a better way. In the quest for the answer, I realised that the only solution that would work for someone like me is where somebody else takes the problem away.
To this end, I’ve been trialling a service from a company called Connected.com (www.connected.com). Essentially, what this does is take backups of my information to their servers, somewhere out there on the Internet. Now, before anyone gets alarmed by the concept of putting one’s private data in the hands of an unknown third party, let me stress that (a) it is encrypted, and (b) Connected has some very big customers, here’s a list (http://www.connected.com/customers/index.asp) if you’re interested. Following the adage that security is about risk management, each company and individual needs to assess the risk for themselves, but I have taken the view that if it’s good enough for Deloitte & Touche, Visa International and Lockheed Martin, it’s good enough for me.
As for the service itself, there is remarkably little to it. To get up and running I had to download and install an agent, which runs on my PC as a little icon in the system tray. I then had to specify which files I wanted to back up – for me, this was the My Documents directory, and the elusive little place where Microsoft insists on hiding my Outlook inbox. Yes, I did try moving it a couple of years ago, but nothing ever worked as it should after that, so I learned my lesson – and if you’re looking, try right-clicking on “Personal Folders” in Outlook and selecting “Properties”, it’s in there somewhere. There were some other bits and bobs – nothing is ever that simple – but to select the backup set was quite a straightforward operation. And then – I just let things run. The first backup was decidedly uncomfortable, but perhaps understandably so, given that 1.5Gb of files were identified and needed to be sent over an ISDN line – fifteen years is a long time to be squirrelling things away. It wasn’t so bad in the end: files were compressed by about 50%, and I let things run over the weekend.
After that, once every couple of days, I have let the backup software chunter away and do its stuff. Personally, I think there is probably a more efficient way to identify whether any files have changed than having to scan the entire directory structure every time, but this has not been a major issue.
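By way of illustration only – this is not how the Connected client works – one common alternative is to keep a small manifest of each file’s size and modification time. The directory tree is still walked, but only files whose metadata has changed need to be re-read and sent; a filesystem change-notification API would avoid even the walk. A minimal sketch, with invented file and folder names:

```python
# Minimal sketch of incremental change detection, NOT the Connected.com client:
# keep a manifest of (size, mtime) per file and only flag files whose entry differs.
import json
import os

MANIFEST = "backup_manifest.json"  # invented name

def scan(root):
    """Return {path: (size, mtime)} for every file under root."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = (st.st_size, st.st_mtime)
    return state

def changed_files(root):
    """Compare the current scan against the saved manifest and persist the new one."""
    previous = {}
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            previous = {k: tuple(v) for k, v in json.load(f).items()}
    current = scan(root)
    with open(MANIFEST, "w") as f:
        json.dump(current, f)
    return [p for p, meta in current.items() if previous.get(p) != meta]

if __name__ == "__main__":
    for path in changed_files("My Documents"):
        print("needs backup:", path)
```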
Indeed, six months on, there have been very few major issues. The only one I can think of is how the Connected client works with large files – again, it’s not all that bright. If I shift stuff around from one place to another on my hard drive, it doesn’t tend to realise that they are the same, and so it performs a new backup. On the same subject, while I understand that the client should be coping with Outlook files (which can become very large), it doesn’t always seem to do what is necessary particularly with offline folders. This can be a pain if I’ve been shifting information from one offline folder to another. Really though, these are niggles rather than showstoppers.
On the plus side, I now have a facility on my computer that is ensuring that every change to my own information is being saved somewhere offsite. The Explorer view in the Connected client shows me what’s been backed up, and restoration is a simple matter of selecting a file or folder and requesting the restore. I can choose to see only the most recent versions of files, or multiple versions, so I can restore, say, the version preceding when I had the moment of euphoric wisdom after last week’s half bottle of Chianti. There’s no hunting through CDs, no chasing the person who knows how to load the jukebox. Job done. I confess, I haven’t done a complete bare-metal restore of all my data using this service – to be honest, the 1.5Gb download that would be incurred was a little off-putting – but the omens are good.
From a wider industry perspective, what Connected.com demonstrates is that it is possible to deliver an Internet based service cost-effectively and deliver real value to the user. The company does a single thing well, which is probably the best approach for all parties. Connected can concentrate on how to make the best use of its own IT infrastructure, for example, using the most cost effective storage arrays, and shifting data between fast, expensive disks and slower, cheaper storage as necessary. Its pricing makes the package accessible to all but the most tight fisted of users (these are likely to be the anal ones anyway, so they’re more than capable of doing their own backups). ASPs died because the level of service they could provide wasn’t considered up to scratch. There are some examples of companies that have emerged from the ashes, the (too-)frequently quoted example is salesforce.com (www.salesforce.com) but good on ’em, if they’ve made the business work. There will be others, each meeting different needs – for example, Web-based email security and compliance providers, such as MessageLabs (www.messagelabs.com), Black Spider (www.blackspider.com) and Zantaz (www.zantaz.com) – each is doing one thing well, meaning that the onus is on each organisation to decide how best to use the right combination of services.
Ironically, there are a number of other companies that provide an awfully similar service to Connected.com, but they’re all arriving from different perspectives. Hosted providers of Microsoft Sharepoint such as Cobweb (www.cobweb.co.uk), for example, enable users to upload information to Web-based servers so they can be shared – and the backup is delivered as part of the package. Companies delivering file transfer management, for example Akamai (www.akamai.net), which provides the backbone for Apple’s iTunes, are also enabling resilient file sharing. Peer to peer file sharing companies, for example Groove (www.groove.net), provide a level of data integrity by virtue of the fact that files are replicated between everyone in a workgroup. None of these companies would consider themselves to be in competition with each other, but they offer a set of overlapping facets of a nebulous, nameless service that ensures both data accessibility and integrity, anywhere, any time. We are already used to somebody else having access to most of our data (how many people encrypt their emails before sending them?) so the idea of Web-based storage should not be too offensive.
The biggest question this raises – to my mind, anyway – is why such facilities as online backup are not delivered as bundled services from the likes of Dell or HP, in the same way that they bundle Antivirus. Perhaps it’s only a matter of time, and it is highly likely that it will be only through OEM deals that offerings such as Connected.com will reach the mainstream. At the end of the day, use of online services is, to many small businesses and home users, no more or less than an admission that we can’t do everything ourselves. And the sooner we own up to that, the better.
04-21 – Silicon Mda
Silicon Mda
Model Driven Architecture – is Programming Dead?
Jon Collins, 20 April 04
In the beginning, there were programmers. They wrote software and, mostly, people found it was good. Then the platform changed, so they wrote it again. And again, and again, software was rewritten through changes of operating systems, databases and architectures. Sometimes, there were even improvements on the time before but often it was little more than a port.
This is being a little glib. All those changes were part and parcel of the technological evolution we have experienced over the past thirty years, punctuated from time to time by leaps in understanding that gave us the client server applications, say, or the commercial potential of the Web. Functionality such as shopping baskets wasn’t even a twinkle in the eyes of the first programmers. The fact remains however, that much functionality that exists today has changed very little over the decades. Payroll is still payroll; customer records are still customer records; insurance policies remain insurance policies, and so on. There remains scope for improvement, but certain business logic exists as it should, and is likely to continue to exist for some time to come. And why not, if it works.
Given this continuity, in programming circles the quest has long been to make it possible for such functionality to be used again, either in other applications or on other platforms. It was not to be – apart from some exceptional cases where code has been reused, most organisations have decided it was more cost effective to rewrite. There are libraries, and there have been since the beginning, but these have rarely stretched all the way to the heart of the business logic.
Perhaps things are finally getting there, for a number of reasons. The first, most press-worthy reason is Web Services, the generic term for a set of standards for how application software elements talk to each other. Web Services wouldn’t exist without all the work that’s gone into the development of application architectures that take most of the work out of the software plumbing – Enterprise Java (J2EE) and Microsoft’s .NET architecture. One gave rise to the other, catalysed by other standardisation efforts such as the adoption of XML, and the acceptance of programming, design and analysis patterns that dictate common structures and where they should best be used.
The second reason, which is also inextricably linked into the application architecture work, is the unification of standards for describing what applications look like. The whole world has settled on a single modelling language to describe the innards of applications, known as the Unified Modelling Language (UML). The standard is now at 2.0, which means that it covers the vast majority of requirements. It’s not perfect, but it is certainly comprehensive.
All of these things together do not make reusable applications, but they certainly help. There is another knock-on effect: now the standards are virtually complete and universally accepted, the standards bodies themselves (such as the Object Management Group, OMG) have had to set their sights on new things to standardise. What they have come up with is rather intriguing. Essentially, having agreed the tools and techniques to be used, they have turned their attention to the process, defining what they call Model Driven Architecture, or MDA.
The goal of MDA is quite straightforward: to enable applications to be specified entirely in modelling terms, which are platform independent, rather than in code, which varies depending on the target architecture. From the OMG’s perspective this is largely a boon for application integrators, who spend (nay, waste) time converting perfectly good code to work on different systems. As it is defined, MDA enables such lower level programming to be automated by code generators, freeing up time to be spent on the more interesting stuff – the business logic.
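To see the principle (and only the principle – real MDA tools work from UML models, not from a Python dictionary), here is a toy sketch in which a platform-independent description of an entity is fed to a generator that emits code for one particular target:

```python
# Toy illustration of the MDA idea: a platform-independent model (here a plain
# dictionary rather than a UML model) is fed to a generator that emits code for
# a particular target. Real MDA tools are far richer; this only shows the principle.
model = {
    "entity": "InsurancePolicy",
    "attributes": [("policy_number", "str"), ("holder", "str"), ("premium", "float")],
}

def generate_python_class(model):
    """Emit a simple Python class from the platform-independent model."""
    lines = [f"class {model['entity']}:"]
    args = ", ".join(f"{name}: {typ}" for name, typ in model["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for name, _ in model["attributes"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

# A second generator could emit Java or SQL from the same model; only the
# generator changes, never the model itself.
print(generate_python_class(model))
```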
Code generators are nothing new. Fourth generation languages using code generators followed 3GLs (such as C, Pascal and FORTRAN) as sure as night followed day. Later, software tools such as Software through Pictures started a fashion for using models to represent code, continued into the land of UML by companies such as Togethersoft (now part of Borland). Indeed, Borland, IBM (with Rational Rose) and other companies with modelling tool offerings have very quickly jumped on the MDA bandwagon. Fair enough – to an extent, they were doing it already.
There are a number of advantages to the MDA approach, not least that the design specification can never again be out of date, as it is the only thing that is changed. It also takes the onus well and truly away from the code and gives it to the design. One of the greatest flaws of the IT industry is the habit of implementing a solution before it has been specified or designed – if construction projects were treated in this way, we’d all be living in shacks. This is not to say that there is no place for programming; rather, that code-centric initiatives offer carte blanche to less scrupulous development shops. In engineering, architects exist for a reason, and the reasoning is perfectly applicable to software.
Some software tools companies were already ahead of what the OMG is trying to paint as “new and improved“. Relative newcomer Quovadx, for example, has taken the concepts of using models as a starting point for code generation to new levels, to a point where there becomes less and less need to write code at all. Admittedly most of Quovadx’s examples are specific to certain sectors, for example healthcare and financial services, but it begs the obvious question – how close are we to missing out on the code altogether?
Very close indeed, if the latest offerings from Select Business Solutions are to be believed. SBS, once the main competitor to Rational Rose in the UK, have combined the concepts of MDA with those of design patterns – saying, essentially, that every structure you can think of in code has been written before, and can be specified and reused in modelling terms. By selecting said design patterns and using them as input to the code generators, it should be possible to automate the production of the vast majority of code, if not all of it. SBS is not the only company to be thinking this way; however, they are one of the first to achieve it in practice.
Once again, it is easy to make glib statements, and any such claim should be tested carefully. Perhaps the question is not whether SBS or Quovadx, Rational or Borland have succeeded in this instance, but rather the fact that they are drawing us inexorably towards a situation where the majority of code is generated. There will be complaints about inefficiency, which is tantamount to saying that there will always be a need for experts at developing high-throughput code; for the rest however, assuming this falls within tolerances, it is worth a shot. Also, the generated code can incorporate things that the programmers wouldn’t necessarily consider, or even know about. As their software division encompasses both software development and enterprise management, IBM have a distinct advantage here: for example, they are looking at how standardised event logging can be built into code generators, as a support for their autonomic computing initiatives.
Meanwhile, we should consider all of this in the context of the major application vendors, all of whom have their work cut out developing various types of functionality that can be integrated into applications. In addition, there are the Independent Software Vendors (ISVs) working on vertical solutions aimed at specific geographies and markets. All of these can benefit from the integration-centric view of MDA, where models act as the glue between pre-fabricated components and services bought in from the specialists.
No programmers should be put out of a job by MDA, as there will always be new and interesting things to develop. However, MDA suggests one last, and quite seismic shift. In the future, if the models are king, the onus on programming will move outside the business and into the IT vendor marketplace. This would be no bad thing: insurance companies can get on and deal with insurance, and pharmaceutical companies can work on new drugs, rather than using up half their budgets redeveloping old code or trying to work out what the new stuff should look like.
The agreement on MDA as an approach is still to be proven, through its adoption and acceptance by the businesses that decide to use it. It would appear however, that it is only a matter of time.
June 2004
06-10 – Ilm 1 Year On 1.0
Ilm 1 Year On 1.0
Information Lifecycle Management (ILM) was coined as a term in late 2002. Its origins are obscure, but it is most likely that it came from the thinking of certain strategists at StorageTek.
It was always a great theory. Spawned from the idea that there was more to storing data than – well – storage, the thinkers set about making it work better. The central concept is that storage can be categorised according to how fast it is, and how expensive it is. This is a sliding scale – fast (think high-end disk) tends to be expensive, and slow (think tape) tends to be cheap, with several categories in between. Now, thought the thinkers, what if data could be similarly categorised? If so, high-value data could be stored on top-end disks, and so on.
Where the idea got really clever was in the inclusion of software to shift data from one storage platform to another, depending on how it was being used. Hang on, thought the storage diehards, wasn’t this just a re-hash of Hierarchical Storage Management, a dynamic archiving technique? Well, yes, and no. First, HSM was one-dimensional, covering only the time value of data and nothing to do with its value to the business. Second, and because of this, ILM requires proactive classification of data using real human beings. Finally, HSM was largely about archiving of old data. If ILM is done right, the concept of archiving goes away, replaced by a continuity of dynamic data movement.
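A sketch of what that two-dimensional classification might look like in practice – the tiers, classes and thresholds below are invented for illustration, not drawn from any vendor’s product:

```python
# Sketch of ILM-style placement, assuming invented tiers and thresholds: data is
# classified on two axes (business value and days since last access) and mapped
# to a storage tier, rather than simply archived by age as HSM would do.
from datetime import date

TIERS = ["high-end disk", "mid-range disk", "tape library"]  # fast/expensive -> slow/cheap

def place(business_value, last_accessed, today=None):
    """Return the tier a piece of data should live on right now."""
    today = today or date.today()
    idle_days = (today - last_accessed).days
    if business_value == "high" and idle_days < 30:
        return TIERS[0]
    if business_value == "low" and idle_days > 180:
        return TIERS[2]
    return TIERS[1]

print(place("high", date(2004, 5, 20), today=date(2004, 6, 10)))  # high-end disk
print(place("low", date(2003, 11, 1), today=date(2004, 6, 10)))   # tape library
```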
So, as a practical reality ILM has existed for about a year now. How has it evolved? First off, there was a danger that (like many good ideas) it would be turned into a sales vehicle by the collective marketing departments of storage vendors. EMC was quick to jump on the bandwagon, and a number of other companies including Veritas have adopted the idea of “Lifecycle Management” – in Veritas’s case, it is Data Lifecycle Management. Hewlett Packard and IBM are also using the ILM phrase, but it is buried further down in the messaging.
As marketing took hold, it became clear that there were a number of issues with how ILM was being presented. First, while ILM might be a good idea in principle, it was not articulated in a straightforward manner and nobody understood it from the customer side. Second, in a number of cases (including StorageTek, who thought of it) it really was little more than marketing – both because companies saw it as such, and because they didn’t put their whole product lines behind delivering on it. Third and finally, to work it required a much more comprehensive approach to storage solution delivery than had existed in the past – a holistic architectural view, and services to support it.
Despite these issues, the vendors have stuck with ILM; indeed, they have emphasised its importance. The reason is that it really is the right answer, for the same reason that a more service-oriented delivery of IT is the right answer: of course storage should be treated with far more attention to its business implications. There are other reasons, not least the emergence of compliance as a key driver for improved storage solutions. A compliant solution is an integrated solution, and ILM offers an approach to support such integration. As a result however, vendors have recognised that they need to make a bigger effort if they want to make things work.
Believe it or not, the vendors are starting to get what has to be done. Not least, they are on the whole recognising that it needs to be services driven: for example, EMC has announced an assessment and classification service that is an entry point to ILM, and StorageTek has augmented its services organisation with ILM in mind. Channel companies like Dimension Data and Glasshouse Technologies are also reacting to the trend, augmenting their storage offerings accordingly.
Second, and perhaps more important, are the growing partnerships with content management companies. EMC acquired Documentum, and StorageTek has partnered with Ixos, which is merging with Opentext. The reasons why ILM can benefit from Content Management (CM) are long and complex, but suffice it to say, CM has long had both feet firmly planted in the business, even while storage management is still trying to open the door to the data centre.
Progress is being made, but there is still a way to go. Computer Associates have just announced the addition of security features to their BrightStor range: laudable perhaps, but even considering the impact that the Internet has had, it begs the question, why weren’t such features considered as standard for the past three decades? There are ongoing integration requirements to improve management capabilities, for example by incorporating policy-based ILM with other aspects of enterprise management; also it is still early days for such ideas as adaptive storage and automated management.
Still newer requirements are dynamic provisioning and storage accountability, coupled with building in the financial aspects of storage service delivery. A data store should be able to report exactly how much it is costing to store the data, down to the byte if necessary: only in this way can the vision of ILM, to move data according to dynamic assessment of cost/benefit, be addressed. Once this exists it can be linked up into application costs, and ultimately the business can have a clear, transparent picture of how much its storage is costing. At that point, there is a question of whether it should still be called ILM, but it’s as good a name as any.
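As a rough sketch of the kind of transparency being asked for – the per-gigabyte prices and the inventory below are invented purely for illustration – the roll-up itself is trivial once each tier carries a cost:

```python
# Sketch of storage cost reporting, with invented per-GB monthly prices per tier,
# rolled up so an application (and ultimately the business) can see what its data costs.
COST_PER_GB_MONTH = {"high-end disk": 5.00, "mid-range disk": 1.50, "tape library": 0.10}

# Hypothetical inventory: application -> list of (tier, gigabytes stored)
inventory = {
    "payroll":   [("high-end disk", 40), ("tape library", 500)],
    "marketing": [("mid-range disk", 120)],
}

for app, holdings in inventory.items():
    total_gb = sum(gb for _, gb in holdings)
    monthly = sum(gb * COST_PER_GB_MONTH[tier] for tier, gb in holdings)
    print(f"{app}: {total_gb} GB costing {monthly:.2f} per month")
```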
Ultimately however, even if ILM is the right answer, it can only work if businesses want it to be so. This is a bigger issue than some might imagine. Businesses don’t necessarily feel they need to know how they work, and therefore they’re not in any hurry to say what they need. However, they do expect IT to deliver, which suggests a paradox. We can come up with all the acronyms we like, but they are nothing more than ongoing efforts to make IT work for the business. For ILM to succeed, it needs to be considered in the round, alongside other aspects of IT delivery.
July 2004
07-30 – Voice Recognition Part 1 1.0
Voice Recognition Part 1 1.0
A voice in the wilderness – Part 1
This is an attempt to dictate a document using voice recognition. I am doing this slightly differently to how I would normally. For a start, I am also walking the dog. Therefore, it is not possible to look at my computer in the normal way, so I have put it in my rucksack. This makes it impossible to see the screen so I have had to resort to other means. I am therefore viewing my laptop screen via a wireless network connection from my Dell Axim PDA. From there, I am using a VNC (Virtual Network computing) software client to create a remote display of the laptop screen, on the PDA. When I require some mouse control, I have a Hand Track portable trackerball from Trust, or I can also use the touch screen of the PDA to control where I am on the laptop screen. The voice recognition software is Dragon NaturallySpeaking XP, running on Windows 98.
And, guess what. It all works - well mostly.
This is by no means a perfect solution. The laptop is one of the first Sony PictureBooks, running only a 400 MHz processor with 128 Meg of RAM and Windows 98. All the same, it would appear to be adequate for the job. I would rather not have to walk around staring at a PDA, but it isn’t that much trouble; the occasional glance is enough. The particular microphone I am using works well, as long as there is no wind. I confess that for this particular, unplanned test I am not using a noise reducing headset, rather, a cheaper Plantronics model that clips to my ear. For some reason, when the breeze picks up, the recognition starts to favour the words inward, wooden and women. Why exactly this is, I leave to your imagination, but I do know that I have achieved better results with a noise reducing model.
Not everyone would want to be staggering along with a laptop in their backpack in order to dictate an article, but this is clearly a possibility, and it does the job. It wasn’t easy to set things up – getting a peer-to-peer network between two wireless cards took an age (until I found an undocumented checkbox), and then there was the mucking around with the temperamental VNC client to make the screen viewable on the PDA. Everything came together in the end, but it wasn’t a job for the faint hearted.
Perhaps what all of this illustrates is the power of integration, or the lack of it with mainstream vendors. If I could get things set up with old technology, why exactly have the big IT companies been unable to bring such capabilities to market? While Microsoft and Intel still struggle to deliver the perfect tablet PC with integrated voice recognition, an old PC with an old operating system and an old version of a software package was perfectly capable. Equally, while network operators and equipment vendors try to tackle the concepts of “mobility”, trying to turn it into some distant target that will make a great deal of money for whoever can crack the code, they have missed the point. For the past five years, there have existed opportunities to mobilise the masses, and they didn’t require multi-billion, high-bandwidth infrastructures. Not everyone is going to want to use voice recognition, but let’s face it – the idea of people walking about chattering into space is no longer as unnerving as it was. And what if – just imagine – voice recognition turns out to be the missing piece in the entire mobility puzzle? Not that we should all be lugging laptops around, but many of us are doing this anyway.
Ultimately, if it all boils down to integration, the biggest problem is that nobody is doing the integrating. There are lots of options out there – IBM has a version of its own recognition package ViaVoice that runs on Linux, so there would be nothing to stop someone porting it to a Sharp Zaurus PDA (though, truth be told, users of ViaVoice in general have met with varying levels of success). The PDA device I have in my hand has a processor as powerful as the laptop in my backpack, at least if the clock speed is anything to go by. Perhaps a smaller laptop (there are some great ones available in Japan), with a Bluetooth-integrated remote screen and microphone, rather than VNC over WiFi? Great theory, but as anyone who’s tried to connect a Bluetooth headset to a computer will tell you, it just ain’t happening at the moment. There are lots of options, but each has to be tried and tested. Even if things did work as they should, the mass market of punters won’t be spending the time using computer equipment like Lego sets, and nor should they have to.
Perhaps things have been moving too fast for even the vendors to stop and think. In our struggle to look for the latest and greatest gadgets (and I confess, I have reverted from my new Nokia 6600 phone to my old 6310i because it was better at the basic job of making calls), it is possible to take our eyes off the ball. Or perhaps – but surely not – there is something more insidious going on here – the big guns don’t want us to have such capabilities just yet? A bit like dodgy accounting practices, maybe they prefer to spread out innovations over a number of years?
Before the conspiracy theorists pick this up and run with it, they should recognise that the truth is a little more mundane – driven by fear and greed, even the biggest companies are still insisting on following technological rainbows rather than making existing products work together as they should. Networking with Bluetooth is a good example - rather than trying to fix existing “standards”, they are already pursuing the next generation. Ultra Wide Band (UWB) will begin to appear next year (100Mb/s bandwidth to start, going to 400Mb/s), not to mention the short-distance WiFi version that’s just been announced – hopefully somebody will treat the issue of compatibility at the outset, rather than leaving it down to the consumer to fix yet again.
That’s not to say that new developments won’t be very welcome. Part two of this article considers how to start a voice recognition revolution, if only the price was right. Meanwhile, as I walk along watching the sunset, my faithful mutt off in some bushes, I think to myself how this was, without doubt, one of the most enjoyable experiences I have ever had writing an article. If this is the future of portable computing, I can’t wait.
07-30 – Voice Recognition Part 2 1.0
Voice Recognition Part 2 1.0
A voice in the wilderness – Part 2
So, given a bit of effort and a few old components, it is possible to start using technology in new ways. Clearly however, just because something works, that doesn’t mean anyone will want it, or be prepared to pay for it. There are a number of marketing criteria that the big guns like to apply, which boil down to basic questions such as – is it useful, is it usable, is it affordable and is it desirable?
Before considering usefulness, I thought I’d tackle usability. The solution previously described had a large number of dependencies (you’d need both a PC and a PDA, for example) so I wanted to shorten the list a little. Picking up the mantle of integrator could be fun, I thought to myself as I browsed for head mounted displays on the Web. A couple of phone calls later and I was heading to London-based high tech reseller Inition (www.inition.co.uk) for a visit. What had particularly caught my eye (sorry) was a tiny screen (from a company called MicroOptical – www.micropticalcorp.com) that could be mounted on a pair of glasses. Inition had some other products that would slip comfortably into the “cool stuff” category, such as laptops with 3D displays, VR gloves and so on, but I tried not to get too distracted.
The micro screen plugged into a standard VGA port on my laptop, and was self-powered from a camcorder battery, so within a short period of time I was ready to go. Frankly I was worried that the experience might be an anticlimax (“two hours on the train to London – for this?”) until, like Joe 90, I put on the glasses and my world was transformed. There, in the corner of my vision, was a computer screen. Small but adequate, it floated in space like real life with a picture in picture setting, which I suppose was exactly what it was. Five seconds later I had clipped on my microphone and I was dictating into the computer in my hand. A few seconds more and I could be browsing the Web, sending and receiving email, checking the traffic news or buying a pizza – I knew this as I had already played with the voice commands available in Dragon NaturallySpeaking, and I’d found them comprehensive enough.
The little neon sign flickered on in my head – you know the one, bearing the words: “I want one”. I was sold. The whole experience gave me the impression that nothing would ever be the same again – once I could afford such a gadget, that is. But, was it really useful? The good people at Inition told me some of the reasons people used their displays – orchestral conductors reading music, surgeons consulting manuals – but the display/recognition combination seemed to have a more profound value.
As I used the voice/display combination, it was immediately apparent that this was not some niche application, but a core productivity tool. Consider, for example, auditors and surveyors who create reports containing their observations. Surveyors already use voice recognition; however, they usually use some intermediary recorder, which then has to be played back and edited. How much faster could things be done if the report could be created, edited and delivered within minutes of the observations being noted?
Indeed, there are plenty of workers who combine a dependency on the written word with the reality that they are not always in front of a keyboard-driven computer. Meeting rooms and the corridors of power, not to mention airport lounges, planes, trains and automobiles – all so much dead time spent in transit; couldn’t it be better spent? To give you an example – following one meeting I used a twenty minute walk back to the station to collate my thoughts and send some immediate feedback. Had I not had such a facility, the feedback would have taken a couple of days, if indeed it had happened at all.
While it is clear that the computer keyboard will not be going away, equally, other input mechanisms remain largely unexploited. There are potential issues – is it safe to dictate while driving, for example, what of the eye strain, and perhaps people need empty time to keep on top of stress – but few would deny there are moments, when moving from one place to another, that we would love to be doing something more useful. I once told someone I had been writing a report while sitting on the beach at Nice. They said to me, of all the things to do on the beach at Nice, you write a report. I replied, of all the places to be writing a report, where would you rather be but on the beach at Nice!
That’s usefulness, usability and even desirability covered to an extent, but then comes the question of cost. At 1200 pounds a pop, head mounted displays are not going to hit the mainstream anytime soon. There are cheaper versions, but this is just one component: it is the integrated package that needs to be delivered at a reasonable price. iPod sales would suggest this needs to be the sub-500 pound mark before any such package would register on people’s radar.
If integration is the answer, then somebody needs to start integrating, and getting products out to the early adopters. This is of course the model applied by consumer networking companies such as LinkSys, as well as credit card companies such as MBNA. The issue is not whether it is the best product, but to get as many potentially useful products to market as cheaply as possible. In this way, the market can decide which are worth having and which are not. It should be possible to do it with old technology – indeed, given the bloated size of Windows XP, newer technology would push the hardware requirement back into the unaffordable, so we’d be better off with the old.
Meanwhile, Microsoft has tried to achieve something similar with their tablet PC specification, but clearly something went wrong there. If this article illustrates anything, it is that we do not need some new and improved spec; instead, design shops should be concentrating on integrating what exists, and delivering it in a package that thinks more about function than form. Once this delivers a package at a price people can afford, then we might see a major advance in voice recognition use, and with it significant gains in productivity. All it needs is for the industry to get its act together.
August 2004
08-02 – Full Circle It 0.3
Full Circle It 0.3
Full Circle IT: the Race is On
Draft 0.3 Jon Collins, Quocirca, August 2004
There is a competition in Information Technology. It started decades ago, but nobody talks about it directly. Technology companies have attempted to capture the prize many times, but thus far none have succeeded. Many of the players have been knocked out, or they have teamed up with others to give them a better chance of winning. Today the competition is only between the largest computer software vendors, such as IBM, HP, Computer Associates and Microsoft, because only they have the necessary tools and technologies. Thus far, the obstacles have been too large to be overcome with available facilities, and progress has been slowed for a wide variety of non-technological reasons. The time is drawing near however, for the competition to conclude: the pieces of the puzzle are nearly in place.
So, what is the competition? It is to complete the circle between IT delivery and business demand.
The latest manifestation of this competition is described differently by each vendor. IBM refers to the “On Demand Business”, and HP talks about its “Adaptive Infrastructure”. There are other terms – “Utility IT”, “IT as a Service”, “Agile Enterprise”, and so on: each company has its angle, its unique selling point, but in the past they have all suffered from the same weakness – they have only really considered the IT side of the coin. The other side, as many vendors are working out, is that these models can only succeed if the correct linkages exist back into the business. This goal, the one which all sides are striving for, may be characterised according to three straightforward criteria (sketched in outline after the list below):
1. Business Drives IT: A business articulates how it works and what it is trying to achieve. It describes this strategy as a series of business policies, processes and products, and incorporates a set of financial measures and metrics to give visibility on how closely it is achieving its goals. This information is directly linked to the finances of the business, so, for example, a sales process should be able to demonstrate the revenue generated and the cost of sale. All of this information is captured and modelled using software tools, and the resulting models are the starting point for any use of technology.
2. IT Manages Itself: Frameworks exist for the creation of applications that demonstrate a direct linkage to the business processes they are supporting. These applications run on an infrastructure that works in such a way that technical resources and information assets are managed and provisioned to the applications in the most cost-effective, risk-managed manner, supporting the policies defined for the business.
3. IT Drives Business: Management tools deliver information to demonstrate how well the technology platform is functioning and how much it is costing, and also how well business objectives are being achieved and what benefits are being delivered. By delivering performance and financial information back into the business models, IT and its contribution to business success can be clearly demonstrated, as well as paid for directly. Changes in business strategy can be decided based on real information, completing the cycle.
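To make the three criteria more concrete, here is a minimal, purely hypothetical sketch in Python of how a business process, its financial measures and the IT services supporting it might be linked in a single model. The class and field names are illustrative assumptions for this article, not any vendor’s actual framework.

```python
# Hypothetical sketch only: business processes with financial metrics (criterion 1)
# reference the IT services provisioned to support them (criterion 2), and the same
# structure reports IT cost and contribution back to the business (criterion 3).

from dataclasses import dataclass, field

@dataclass
class ITService:
    name: str
    monthly_cost: float      # what the platform costs to run (criterion 3)
    availability: float      # how well it is functioning, 0.0 to 1.0

@dataclass
class BusinessProcess:
    name: str
    revenue_per_month: float                      # financial measure set by the business (criterion 1)
    services: list = field(default_factory=list)  # IT provisioned to support the process (criterion 2)

    def it_cost(self) -> float:
        return sum(s.monthly_cost for s in self.services)

    def contribution(self) -> float:
        # IT cost and performance fed back into the business model (criterion 3)
        return self.revenue_per_month - self.it_cost()

# A sales process that can demonstrate both the revenue generated and the cost of sale
sales = BusinessProcess("Sales order handling", revenue_per_month=250_000)
sales.services.append(ITService("CRM application", monthly_cost=18_000, availability=0.998))
sales.services.append(ITService("Order database", monthly_cost=7_500, availability=0.999))

print(f"{sales.name}: IT cost {sales.it_cost():,.0f}, net contribution {sales.contribution():,.0f}")
```

Even a toy model such as this shows the point of closing the loop: the same structure that captures what the business is trying to achieve can report back what IT is costing and contributing.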
None of these criteria is new; it is the ability to link them together in any meaningful way that has thus far proved elusive. This is not to downgrade the validity of the goal: indeed, all the major vendors are working towards it. Closing the loop and delivering full circle IT brings with it huge potential benefits, not least in terms of enhanced productivity and business efficiency. This is not marketing rhetoric, but common sense. The fact that IT has failed to live up to its promises thus far does not devalue the desire for IT to deliver its full potential. Indeed, there is not a company out there that does not hope that this time, the market-speak might turn out to be true.
Why hasn’t it yet been possible to close the loop between business and IT? One reason is the downright complexity of existing IT environments. Implementing management tools, technologies and practices can be likened to parachuting a Portakabin into the jungle: whatever our capabilities within the cabin, outside of it there is still a jungle – a nefarious mix of ill-connected systems, poorly documented applications and dirty data. As Peter Matthews of Computer Associates confirms, “The difficulty is actually implementing it all. Many organisations can’t even do a cross-enterprise server uptime survey; we shouldn’t lose sight of the fact that there is a body of work to be done before we can get to this stage.”
This may be true, but too often the slow progress has had little to do with technology and more to do with self-interest. All sides are guilty: for a start, a great number of IT vendors have little to gain from supporting such a model. This does not necessarily include the companies listed here; it is more the niche players that have neither the vision nor the capability to deliver on it. Similarly, within the corporations, there have been too many reasons to prevent such a closed loop from happening. A more efficient infrastructure requires fewer people, so why would anyone want to put themselves out of a job? Similarly, a better understanding of how business is conducted might be seen as unduly revelatory, or even an admission of failure.
There are other reasons, and each plays its part, but again, the validity of the goal should not be in doubt. It is laudable to want IT to work for business, and closing the loop will act as the major proof point. Whether or not all vendors and businesses subscribe to the philosophy of full circle IT, there is little doubt that all of the computer companies mentioned are trying to achieve it, and clearly there will be a certain amount of kudos for whoever gets there first. As Stephen Martin of Microsoft’s BizTalk division commented recently, “we are dangerously close to cracking the code.” There is more than kudos at stake, however. It is not unreasonable to suggest that, just as Microsoft won the battle for the desktop and Cisco won the battle of the network, there may be a winner of the battle for full circle IT. If so, the winner has much to gain, and none of these companies would want to miss out on the chance for success. The same first-mover advantage also applies to integrators and to end-user companies. Despite the current difficulties, past experience suggests that a few examples of companies that “get” the closed loop idea, and that have the wherewithal to make it happen, would be enough to spur their competitors into action.
The competition is only just shimmering into existence, but it is time for full circle IT to be recognised as a key battleground for IT vendors, and as a strategic goal for end-user businesses. If both do so, computer companies can only drive even harder to bring the competition to its logical conclusion. At its heart, this is about removing complexity, not adding to it, says Peter Matthews: “The mechanism to deliver technology and business alignment involves making IT simpler to use.” For IT as well as for business, this can only be a good thing.
2011
Posts from 2011.
June 2011
06-16 – Cloud Security 0.2
Cloud Security 0.2
The security aspects of cloud computing
In the history of information technology, a number of themes continue to repeat themselves – not least the idea that computer power should be delivered as a service. Back in the 1970s, organisations used computer bureaus for their processing needs, and over the years we’ve seen the same principle delivered in various guises.
At the turn of the new century, ‘utility computing’ was the phrase being used - comparing IT delivery to that of electricity or water. Security was seen as a fundamental element: “Data should arrive like water from taps - already clean,” as one executive was heard to say.
Today we talk about cloud computing. The experts aren’t wrong when they say there is nothing new in principle - perhaps the biggest change is that today’s computers are now powerful enough to run multiple virtual machines, adding a level of flexibility that wasn’t possible in the past outside the traditionally more expensive world of the mainframe.
All the same, the principle - the aspiration - is much the same as it always was. Not only to benefit from the cost efficiencies that multi-tenancy should bring, but also to reduce the risks associated with IT service delivery. It’s worth reflecting on the “utility” nature of cloud computing, and considering what comparisons can be made with other utility services with regard to security. Put simply, we can consider cloud computing security in terms of processing and delivery.
From a processing standpoint, we are essentially outsourcing our computing activity to third parties and as such, we need to trust them to succeed on our behalf. That is: not to lose or damage our data; not to break down, slow down or otherwise fail; and to respond appropriately in the case of external threat, be it from malicious, unintended or natural causes.
While many providers take such responsibilities very seriously indeed, not all do, and for good reasons organisations have been reluctant to hand over their data to cloud-based service providers. That is understandable - and a symptom of just how early we are in the process of using the cloud for mission-critical data.
Meanwhile, we have delivery. Utilities incorporate treatment plants, filters and other protection mechanisms to ensure that the water we drink is as clean as it can be, and electricity suffers from a minimum of spikes.
Data can also be treated in the same way - not least in terms of protection against spam, spyware and other content-related threats. Just as consumers and businesses would not be expected to treat their own water, neither does it make sense to protect against all kinds of threat in-house.
It’s not just a question of scalability, nor the fact that the best place to deal with such things is before they reach the organisational boundary. In addition, many companies, particularly smaller ones, lack both the skills and the resources to monitor what is an increasingly complex threat landscape.
The principles behind cloud computing may have been with us for many decades, and it may be many more years before the aspiration of delivering IT completely as a service becomes a reality. In the meantime, it is worth looking for specific places where cloud computing can take the strain - not least in security, which is an essential element of any cloud computing strategy, whether in terms of processing or delivery.
November 2011
11-14 – Lessons From Building The Classroom In The Cloud 0.2
Lessons From Building The Classroom In The Cloud 0.2
Lessons from building the classroom in the cloud
Computers have long been associated with learning, and many educational services use technology in one way or another. But what if you want to take an entire learning environment and deliver it online? That’s precisely the challenge set for those responsible for training Microsoft’s technical field staff. Traditionally, Microsoft has delivered a week-long technology training conference covering a broad range of technical subjects, with classroom-based courses before and after the main event.
But given current constraints on travel budgets and time, is it possible to deliver the same training from the cloud? The solution, according to Microsoft, is a programme called MS Involve, available twice a year to the same consultants and field engineers. Following an interview with Zaakera Stratman, Senior Program Manager for MS Involve, and Alison Woolford, Operations Manager at Microsoft learning partner CM Group (LINK: http://www.cm-group.co.uk/), here we distil into bite-sized pieces the lessons learned, challenges overcome and the plans for the future.
- First, aim to replicate the learning experience with skilled trainers
Technologists may feel they know best how computers can be used to aid collaboration, deliver content and so on, but education involves a far older set of disciplines. “Classroom training is as old as education in tribal societies,” explains Zaakera Stratman. “You need to replicate that experience as closely as possible, build on best practices, otherwise it just doesn’t work as well.” Experience gained on the programme demonstrates that the old techniques are the best, and there’s no substitute for having a seasoned instructor in place: “It’s important to use skilled trainers rather than just subject matter experts - that is, people who understand how to train, rather than present a set of PowerPoint slides. At the beginning we had people turn up, speak for half an hour and leave - which isn’t enough for imparting knowledge!”
- Plan the content to the task, using available technology
Not everything works as well online as in a real classroom - it’s not possible, for example, to write on a flip-chart then stick the result on the wall for the duration of a course. However technology can bring new techniques to the party, such as using instant messaging to log questions and answers. For Microsoft this is a work in progress as it reaches right back into the content development process. “There’s been some changes in the way content is planned, put together for a subset of the courses,” says Zaakera. “We also need some way to deal with smaller group break-out sessions and we shall continue to look at student evaluation of individual content within the course.”
- Schedule like a live event, but build in learner and instructor flexibility
While it is important to set expectations and plan lecturers’ time, it’s important to ensure schedules fit with learners’ availability as well - particularly across time zones. “People needed to be able to take training in situ, to self-consume,” recalls Zaakera. “Students needed more flexibility in terms of the scheduled pieces: the number one piece of feedback was to deliver over a three-week period, then leave the site open for a further week. Initially we only had one time to do a lecture - that didn’t work so well either, so we changed it to multiple times.” Greater flexibility suits instructors better as well, recalls Alison Woolford. “We had one instructor in Prague, he was needed for weeks one and three, so he was able to get on with another assignment on the middle week.”
- Enable the pedagogue to connect to the pupil.
For learning to succeed, it’s important to ensure that the connection between the trainer and the person they are trying to train is as good as possible. The better the connection, the better the learning: this requires more than simply providing a video and audio feed, says Zaakera. “Instruction can be very visual - instructors thrive on interactions. We thought very hard about how to enable instructors to really collaborate.” For Alison, this is equally about putting instructors in the driving seat. “The instructor needs to ‘own’ the course if they are really going to get a rapport going. It’s their domain for the time they are training.”
- Be absolutely sure it all works, and have support in situ
Live training events require the very best cat-herding and techno-logistical skills, but at least they are all held at a fixed venue. While online instruction enables more flexibility, a large number of things can still go wrong at any moment - particularly as the instructors can be anywhere in the world. “Instructor access absolutely needs belt and braces,” comments Alison. “One instructor lives in a remote location, his ISP had an outage so he jumped in a land rover and drove to the top of the nearest hill, pulled into a farm gateway and delivered an hour-long lecture using a mobile 3G dongle!” Microsoft has implemented a system of lab ‘proctors’, available 24x6 - “On-hand experts who moderate forums, help with live lectures and provide technical support,” explains Zaakera. “Proctors have been invaluable to ensure that the flow of the training isn’t interrupted.”
- Encourage interaction, within and beyond the classroom.
“You can throw a lot of content together but if you don’t have people supporting the students, the delivery will be less effective. That’s why it shouldn’t all be on demand - people do want to be led through content by an instructor, and interact with their peers,” says Zaakera. As well as enabling the connection between teacher and learner, online technologies offer opportunities for higher levels of interaction between students. “We are enabling greater levels of collaboration within the class, and there’s still more we can do on how collaboration extends beyond the classroom - for example, using social tools or enabling people to see where their colleagues are up to.” This process can extend beyond the MS Involve scheduled period, and to a broader audience. “Already, two hours afterward, sessions can be played on demand and students can access the hosted labs whenever they need.”
The $64K question is - is it working? “Our registrations and attendance have grown, people are saying, ‘this is great’ which is a good sign!” laughs Zaakera. “The bottom line is, this is all about using technology to enable a relationship between whoever is imparting knowledge, and whoever is coming to learn. Success is all about the people: people are expensive, but cost per head has continued to go down even as attendance numbers have grown, largely because of significantly reduced travel costs. In the future we’re going to be focusing more and more on the people aspects - these have the biggest influence on the event. It’s very important that people hit their training goals, but equally important that they will want to come back!”
With all the potential technology can bring, focusing on the people and the learning experience is the way to go.
2012
Posts from 2012.
March 2012
03-05 – Not Your Father’s Data Centre 0.2
Not Your Father’s Data Centre 0.2
Views from the cloud
Cost isn’t the only question
Every now and then it’s worth being re-acquainted with what this cloud stuff is all about. After all, there is so much marketing flying around that one can lose touch with the technology that underlies cloud computing – to the extent that it becomes little more than a few coloured squares on a PowerPoint slide.
A couple of weeks ago, I joined a number of journalists on a visit to two Equinix data centres in Paris (one complete, one under construction), at the invitation of one of their prime tenants, BSO Network Solutions. Now that the air conditioning noise has stopped ringing in my ears, what did I learn?
First, that it’s still about computers, storage and networking. I know this is an obvious thing to say; neither do I underestimate the amount of engineering effort that goes into every layer of technology, from chips to servers and storage devices. Indeed, one could not fail to be impressed by the rows upon rows of rack-mounted devices, each being maintained at a constant temperature using the latest in cold-aisle cooling.
But all the same, computers are still computers, even if they are now powerful enough to run several virtual machines in parallel. And an MP3 file will look exactly the same whether it is stored on a 50-terabyte array or a memory card. Whether we’re talking about the largest server or the processing capabilities of a smart phone, ultimately what matters is how tasks are allocated to which processors, and how information moves between them.
Cloud computing is, from the provider’s perspective, about being able to squeeze more value out of hardware and software (using virtualisation, dynamic provisioning and so on) than its customers could by themselves – otherwise there would be no point. Of course, there is nothing to stop any organisation deciding to run its own IT, so there has to be enough reason to hand over the keys to the corporate kingdom to a hosting provider.
For some organisations, this is a simple question of priority. A frequent mantra of business consulting is to stick to one’s knitting – focus on the things the business does well, and outsource those which don’t add value. In reality, though, outsourcing decisions take place more often on the basis of the financials, that is, if third party providers can deliver a service cheaper than running it in house.
However, cycle for cycle, the cost differential between running systems in-house and using resources from the cloud depends on utilisation. A single server, a cluster or an entire data centre, if used at 100% capacity, will be cheaper to buy and run than to rent. Given that this equation depends on the applications and workloads being run, the server and software architecture, the SLAs and the quality of the operational management, it becomes impossible to compare internal IT with public cloud on costs alone.
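To illustrate the utilisation point with a rough, purely hypothetical calculation (the figures below are invented for the sake of the example, not taken from any provider’s price list), a short Python sketch:

```python
# Purely illustrative figures: compare the monthly cost of an owned server,
# amortised over three years, with renting equivalent capacity by the hour,
# at different utilisation levels. All numbers are invented for the example.

OWNED_CAPEX = 6_000.0          # purchase price, spread over 36 months
OWNED_OPEX_PER_MONTH = 150.0   # power, space, share of management effort
RENTED_PER_HOUR = 0.50         # hourly price of an equivalent cloud instance
HOURS_PER_MONTH = 730

owned_monthly = OWNED_CAPEX / 36 + OWNED_OPEX_PER_MONTH

for utilisation in (0.25, 0.50, 0.75, 1.00):
    rented_monthly = RENTED_PER_HOUR * HOURS_PER_MONTH * utilisation
    cheaper = "in-house" if owned_monthly < rented_monthly else "cloud"
    print(f"{utilisation:4.0%} utilisation: own {owned_monthly:7.2f}/month "
          f"vs rent {rented_monthly:7.2f}/month -> {cheaper} is cheaper")
```

With these made-up numbers the rented option wins at low utilisation, and the owned server wins only when it is kept busy around the clock – which is exactly why the comparison cannot be made on cost alone.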
So, is it all a simple case of, “Get it right in-house, and use the cloud for the occasional job?” No, not really, due to an additional factor, which could be the make or break for many organisations. Compliance has often been cited as a reason to keep computer systems within the corporate firewall, but this needs to be balanced with the risks of doing so. And all organisations need to take a long, hard look at just how good they are at doing this.
On the visit, what gave most pause for thought was the attention to risk built into the infrastructure. Power supplies from two different utilities, from different routes. Failover generators with enough diesel to last several weeks, and supply contracts already in place. Four different fire sensing systems, pre-action extinguishers and a variety of other mechanisms. Sealed doors and turnstiles, keycards and video cameras, dogs and biometric security that could detect the presence of a pulse (so no more chopping thumbs, Arnie!).
All of which brings us back to the question of sticking to one’s knitting. Sure, anyone can run IT systems (I should know). The question is whether an organisation has the time, priority or indeed money to deploy not only servers and software, but all the layers of infrastructure and management required to protect against potential disasters. Or indeed, wants to.
May 2012
05-21 – Open Standards
Open Standards
Cutting through the noise: why YOU should join in the open standards debate
A central government consultation on ‘Open Standards’ is underway right now, as instigated by Francis Maude MP and the Cabinet Office. The goal, as stated on the Web site (LINK: http://consultation.cabinetoffice.gov.uk/openstandards/), is to support ‘sharing information across government boundaries and to deliver a common platform and systems that more easily interconnect.’
Obscured by the quite animated debate taking place both online and offline lie a number of quite serious issues that could affect both government departments and citizens alike. So, what’s it all about? The rationale for the consultation is based on UK.gov’s current mantra of ‘better for less’, which means putting less money in the hands of suppliers, whilst improving the delivery of public services.
From a tax payer’s perspective that sounds like a goal worth striving for, so just how can open standards save money? Taking the two words in reverse order, let’s start with the easy one – ‘standards’. The history of computing is a tale of getting things to connect together, pass information or share a common data store. It’s a fair assumption that interoperable systems and services run more efficiently, and therefore cost less for the same result, than their isolated or difficult to integrate equivalents.
Of course, not all prospective standards make the grade. X.400, that super-resilient messaging standard, has shuffled off to the elephant graveyard, as has Open Systems Interconnection. Some get superseded; some (like punched card layouts) are now lost in the sands of Internet time. And others become de facto even though they’re not the best – for example, rather than adopting X.400 we slog on with the less reliable SMTP. And it seems to be doing alright.
So, yes, standards are A.Good.Thing. A bigger challenge appears to be around that delightfully vague and multi-faceted word, ‘open’. Ignoring the publicly broadcast debate for a moment, the battle lines seem to be drawn around a single core meaning of the word – that the cost of storing, transmitting or accessing public information in any given format should be zero. Zilch. Nada.
Which sounds reasonable. If you’re wanting to file your tax return, you shouldn’t have to pay for the privilege. Equally, if you are a researcher wanting to access census data, a healthcare worker wanting to access medical guidance, or a commuter checking bus times. The problem isn’t so much the actual information – the mapping co-ordinates, meeting minutes or contact lists – but more the formats in which such information is stored.
Commercial organisations are currently making quite a success out of selling (aka licensing) certain information formats, and/or ensuring that their own tools are the de facto access mechanism. There’s nothing wrong with this in principle - a TV manufacturer for example will have a complex set of arrangements with its suppliers, licensing a video compression standard here, buying in chipsets and other components there. They’ll then build the TV and flog that, and expect to make a reasonable sum on the deal.
But what about public information? Governments are moving service delivery online because of all the supposed benefits – cheaper, less paperwork and so on. Indeed, the recently launched ‘Digital by Default’ initiative is focused on exactly that. But is it reasonable to expect departments or citizens to pay a premium to certain companies, simply to do the things they used to be able to do via paper? Similarly, if a government can adopt (or otherwise come up with) a standard for document storage that is free-to-use, should they be spending tax payers’ money on licensing formats from computer companies?
The answer, of course, is ‘it depends’ – but while every case needs to be considered on its merits, a number of gating factors exist. The first concerns just how many people are impacted, and by how much – for example, filing tax returns affects every adult in the country. The second concerns the benefits which may be achieved by using a certain pot of data, or a certain format, set against the costs to the nation of doing so.
Third, we have the question of whether any alternatives exist – there is no point (particularly with technology) sticking with a certain way of doing things simply because that’s the way it’s done at the moment. This does, however, have to be balanced against the costs of making a switch. Finally there’s the question of how much the absence of open standards gets in the way of other activities, such as starting a company, delivering new products and services, providing jobs or otherwise influencing the competitive landscape.
That all sounds pretty reasonable, and well worth discussion. However, the only topic that appears to be debated is the last one, and this from the perspective of vested interests. Incumbent vendors are acting like protectionist oligarchs, complaining how they will be unable to innovate if their own approaches (which involve paying them money to license formats) are not adopted. No doubt this argument is partially true, but their real panic comes from the potential of losing a lucrative revenue stream. From a hard-nosed taxpayer’s perspective the response is, clearly: tough. Or to put it more politely – as a nation, let’s pay for things that add genuine value, not those that don’t.
Meanwhile we have ‘open’ lobbyists who seem to think that everything commercial companies say is immediately, and irretrievably, corrupted by the forces of capitalism. That may also be philosophically unassailable, but the resulting polarisation has led to much of today’s confusion – particularly when more time appears to be spent on the suitability of those involved, or on the meaning (all parties are guilty of this) of terms such as “free” and “reasonable” when it comes to licensing models, than on the more important questions that would ensure such standardisation activities lead to UK-wide value.
Perhaps the reason such narrowly focused discussions have come to the fore is that, quite simply, nobody else is doing any talking. Given that the consultation has been extended until June 6th, let’s finish with a call to action: for public sector organisations and their broader suppliers, for strategists and front-line staff, for citizens across the board to add their voices to the debate (LINK: http://consultation.cabinetoffice.gov.uk/openstandards/). Only by understanding the broadest range of views will the Cabinet Office have the information it needs to make a decision. And saying nothing is tantamount to accepting the outcome, whichever party ends up benefiting the most.
2013
Posts from 2013.
January 2013
01-24 – Lcsits Cloud
Lcsits Cloud
LSCITS Cloud
As humans, we’re very good at seeing things from certain perspectives (notably our own), but the broader picture can be elusive. This factor is particularly relevant for IT, in which conversations so often start from the technological point of view, and frequently end in discussions about people, process and politics. We’ve seen this in cloud computing as much as anywhere else. But what if we could start with the latter, rather than the former?
One UK initiative which has been tussling with this challenge is the Large Scale Complex IT Systems (LSCITS) group. Overseen by Professor Dave Cliff of the University of Bristol, LSCITS was set up five years ago to consider what happens when computer systems – or systems of computer systems – grow beyond a certain scale. For ‘large’ think ‘impossibly large’: for example, the NHS computing environment (in its entirety) is one such ‘large’ system.
Cloud computing gets a direct mention (http://lscits.cs.bris.ac.uk/cloud.html) in the models defined by LSCITS. The term was a mere twinkle in the eye of the IT industry at the time the initiative was started and, as such, came under the banner of ‘novel computational approaches’. Cloud-based systems can form one part of the systems-of-systems covered in the LSCITS stack (LINK: http://lscits.cs.bris.ac.uk/overview.html#stack).
As we have seen, the term ‘Cloud’ has grown to become a standard part of industry parlance. At the same time it has hit a complexity glass ceiling of its own, manifested in the way that people have talked about hybrid clouds. “The future is public,” says one set of evangelists; “the future is private,” says another. “The future is hybrid,” say the pragmatists, as a consequence leaving organisations to work out what ‘hybrid’ means in terms of architecture, process, and how it enables business requirements to be met.
As per usual, we have approached the debate around cloud from the bottom up, thinking about technology first rather than starting from the top of the layered model captured by LSCITS. Notably, it defines a new field of systems engineering to sit above the technical layers of the stack – that of socio-technical systems engineering. This covers, according to the web site, “approaches and findings from sociology, psychology, and management theory.”
While enterprise architects work in the socio-technical domain, they can tend (once again) to focus on process, structure and control, rather than such ‘woolly’ areas as the above, which fall more into the camp of change management. In other words, current approaches are fragmented – and the starting point of “this is a socio-technical issue” remains a topic for bars and workshop sessions, rather than being front of mind during technology projects.
While we might all agree that socio-technical approaches are best in principle, good practice remains elusive. The good news is that LSCITS has been defining structures, principles and methodologies that include requirements capture, interface design, deployment and continuity planning. University syllabuses have been defined, handbooks produced (LINK: http://archive.cs.st-andrews.ac.uk/STSE-Handbook/). Indeed, the only real question which arises is – outside of the circles in which such work has taken place, why haven’t the lessons been given wider visibility?
Still, with the LSCITS programme coming to a close, the insights it has amassed are invaluable to those responsible for complex systems in general, and those involving large-scale distribution of resources – aka cloud computing – in particular. The future is indeed hybrid, but not just technologically. Only by understanding both the Cloud’s place in the overall, complex architecture, and the social aspects involved in any large-scale deployment, can we really hope to create large-scale, hybrid technology environments in the longer term. With initiatives such as LSCITS, at least, we have made a start.
The Register
Posts published in The Register.
2003
Posts from 2003.
December 2003
12-22 – Usb Devices 4 Reg
Usb Devices 4 Reg
USB devices – a form factor worth plugging
Jon Collins, December 2003
I have recently been experimenting with these USB flash memory devices that seem to be proliferating at the moment. Natty little things, they hold anything from 16MB of data upwards, and they plug and play with recent versions of Windows and Linux, meaning that the information they contain can be accessed on any current computer with a spare USB port. For older machines, drivers are normally supplied – for some reason every manufacturer seems to have a different device driver, which is a shame, but not the end of the world. They’re handy for backups, neat for file transfer, a good little floppy disk replacement. My personal favourite is from Corega, not least because it is bright yellow and easy to find, and also it’s a bit more robust than some of them. Of this, more later – what’s apparent is that the potential for the dod-of-plastic-with-USB form factor is yet to be fully exploited.
The basic USB storage “dongle” does indeed have a number of obvious uses. Some uses are less obvious however – I have an email application that I can run from the device. It’s called nPOPq, and the beauty of it is that it is self-contained - it doesn’t use the Windows registry or any external files or directories to run. This means I can plug my dongle into any Internet-connected computer and check email across all my email accounts, without having to specify them one by one and without relying on an email service provider. I can send email and have it saved to refer to later, and I can copy myself in so that I can save the email properly when I return to the mother ship. This package also provides an address book, it can work with attachments, and so on. So, when I travel, I can rely on the fact that I can perform a minimal service even if I have left my computer at home. No doubt there is an IM client, an editor and a basic spreadsheet I could squeeze on if I really needed them, and what about a Java VM… but perhaps that’s taking things too far. Perhaps not – see later!
Since I started using one of these devices, I’ve been noticing a whole genus of the things springing up. Because of the limitations of the form factor and the early stage of evolution, these tend to be quite restricted in their function. According to Tim Mattox, VP of Client Marketing at Dell, a key feature requested by their customers before such devices could replace floppies is the ability to boot from the device. Dell are also looking to include Bluetooth functionality on a USB storage dongle, to consolidate functionality and to increase the take-up of Bluetooth, though the success of this latter plan remains to be seen. For myself, I have been road testing a couple of USB-based security tokens, notably the eToken from Aladdin and RSA’s SecurID 6100. These devices look the same as storage devices, but hold a database rather than files, with strong encryption built in. There is a basic application included with each device designed to store Web usernames and passwords. Each has certain benefits over the other – the RSA capture approach is more intuitive and easier to use, and can manage network logons, whereas the Aladdin device allows editing of the resulting information and copes with more complex Web forms. With a bit of thought, the Aladdin token can also be used to store PINs and other personal information. Given that it is impossible to manage all the bits of data that are thrown at us without some place to write them down, these devices give several orders of magnitude more security than post-it notes or password-protected Excel spreadsheets. The encryption on the little muckers means that it would take a supercomputer three years to decrypt, which is good enough for me!
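As an aside, the underlying principle – a credential store that only opens with its own passphrase – can be illustrated in software. The following is purely a hypothetical sketch using a modern Python library, not a description of how the Aladdin or RSA tokens work internally; the file path and credentials are invented for the example.

```python
# Hypothetical sketch of the principle only: a passphrase-protected credential
# store kept on a removable drive, using the Python "cryptography" package.
# The hardware tokens described above implement the equivalent in dedicated silicon.

import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a symmetric key from the passphrase, so the key is never stored on disk
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def save_store(path: str, passphrase: bytes, credentials: dict) -> None:
    salt = os.urandom(16)
    token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(json.dumps(credentials).encode())
    with open(path, "wb") as f:
        f.write(salt + token)   # salt is not secret; it just makes the key unique

def load_store(path: str, passphrase: bytes) -> dict:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(key_from_passphrase(passphrase, salt)).decrypt(token))

# e.g. on the USB stick itself (illustrative path and values):
# save_store("E:/vault.bin", b"correct horse battery staple", {"webmail": "hunter2"})
```

The dedicated tokens go one better, of course, since the key material never leaves the device at all.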
These security tokens can do a great deal more than store username/password combinations, but to do so they require a bit more infrastructure. For example, they can serve as a user’s unique access key to the corporate network (note, they do require an additional password of their own!), and from there they can be used as the basis for signing and encrypting documents. They are even seen as providing sufficient security to meet the legal standards for electronic signatures in certain countries – the key phrase, apparently, is that they provide a “cryptographically safe location” for a user’s private key. There are even devices from companies such as WISeKey (www.wisekey.ch) that incorporate a fingerprint scanner on the key itself, so biometric information can be incorporated into the authentication process – however, this is another indication of where we are in the evolution: the WISeKey device is a secure storage device, not an encryption key. Biometrics is another thing that requires infrastructure support for corporate use, for example using biometric authentication software from companies such as ISL (www.isl-secure.com).
Back with storage dongles, I said I’d explain what happened to my previous device before the robust Corega took its place. It’s a cautionary tale, so listen carefully. It must have been only a couple of days after I first started waxing lyrical about USB devices to my colleagues – indeed, I was postulating that one day they might replace the entire user-specific part of the computer, leaving the latter to do what it does best: display and data entry. I have since discovered that such technology already exists – devices are available such as the Xkey from Key Computing (www.key-computing.com), which incorporates the same internal processor as the Palm PDA, can be bundled with a number of applications such as a remote client for Microsoft Exchange (in this case, from Seaside Software) and could, indeed, run a Java VM. But I digress – all would be wonderful about these devices if it wasn’t for one tiny flaw. USB ports on computers tend to leave whatever is connected to them sticking out from the front, leaving them rather vulnerable to being removed inadvertently, or, in my case, snapped in half by the vacuum cleaner. You can imagine my chagrin when my original carry-anywhere email configuration remained inside the plastic, while the plug stayed steadfastly in the computer, the twain never again to be joined. The lesson we can all remember is that even these little, quasi-disposable devices should be subject to the same kinds of service requirements as the rest of the infrastructure. Fortunately, both the Corega and the Aladdin devices are a good deal more robust by design than my previous Pen Drive, and the RSA token is in fact a smartcard inside a USB-based reader, so if the plug breaks, it can be replaced without losing the data.
Judging by my own behaviour, loss is perhaps an even higher risk than breakage. I confess to having lost more than one of the little bleeders, so the Aladdin and RSA devices are both to remain resolutely attached to my key ring. Perhaps, over time, this is where they should all end up, and they pose less of an encumbrance than I would have imagined – for a start, they are reasonably light and unobtrusive.
One thing’s for sure – this is a form factor we’re going to be seeing plenty more of. New device types are starting to appear – there are USB storage devices that are also MP3 players, for example, from the likes of Creative (www.creative.com). If you want to see what’s coming next, you only have to check on eBay, as enterprising Korean manufacturers have discovered it is an efficient way of missing out the middle man and shipping direct. Cameras will follow, no doubt, and anything else that can be squeezed into a device the size of a thumb. It is not unreasonable to expect a device which is storage, camera, voice recorder and music player in one (not to mention mobile phone): indeed, it’s probably a matter of months away.
2004
Posts from 2004.
March 2004
03-03 – The Sharp End
The Sharp End
The Sharp End
Fantastic marketing opportunity that will guarantee you access to 2.3 million IT professionals around the world. Independent study in association with The Register.
| | STRATEGIC | TACTICAL || EXISTING | PROSPECTIVE | |
| Tier 1 | Microsoft Citrix Sun Microsystems Computer Associates IBM Cisco Oracle Morgan Stanley EDS | HP EMC Vodafone StorageTek | |
|---|---|---|---|
| Tier X | Neoware | Appsense Black Spider |
Tier 1 Players
| | Networking | Collaboration | Storage | Mobility | Management | Security | Linux |
|---|---|---|---|---|---|---|---|
| Microsoft | 1 | 1 | 1 | 1 | |||
| IBM | 1 | 1 | 1 | 1 | 1 | 1 | |
| CA | 1 | 1 | 1 | 1 | |||
| Oracle | 1 | ||||||
| EMC | 1 | 1 | |||||
| Symantec | |||||||
| Network Associates | |||||||
| IBM | 1 | 1 | 1 | ||||
| HP | 1 | 1 | 1 | ||||
| Cisco | 1 | 1 | 1 | ||||
| RSA | 1 | ||||||
| Vodafone | 1 | 1 | 1 | ||||
| Apple | |||||||
| BMC | |||||||
| Brocade | |||||||
| Intel | |||||||
| Motorola | |||||||
| Ericsson | |||||||
| O2 | |||||||
| AMD | |||||||
| Nokia | |||||||
| Linksys | |||||||
| Nortel | |||||||
| Sybase | |||||||
| SAP | |||||||
| Siebel | |||||||
| JD Edwards | |||||||
| Veritas | |||||||
| Alcatel | |||||||
| Avaya | |||||||
| Red Hat | |||||||
| Novell | |||||||
| Citrix | |||||||
Actions
Matrix of Tier 1 companies and mapping to subject areas - Jon
Updated Q&A document with Elevator Pitch – Phil
Tier 2 companies for first 2 studies – Linux, Security - Jon
Establish contacts
First survey – 22nd March
2011
Posts from 2011.
August 2011
08-26 – Seven Lessons From Hp Touchpad
Seven Lessons From Hp Touchpad
Seven lessons from the HP Touchpad fire sale
The unfolding saga surrounding the HP Touchpad contains a goldmine of salutary tales. So, just what can we learn from the last few days?
Anyone who says they expected the fire sale of HP touchpads to turn into a global gadget grab is a liar. Fortunately nobody has yet, not publicly anyway – indeed, apart from a few bits of coverage complaining about web site crashes (LINK: http://www.channelregister.co.uk/2011/08/24/misco_touchpad_failure/) and HP’s risky strategy (LINK: http://www.theregister.co.uk/2011/08/19/hp_autonomy_echoes_of_sun/), there hasn’t been that much written about it at all.
Which is a shame, because the overall effect was pretty profound. Think about it. Here was a tablet computer that nobody wanted, apparently: not the punters in Best Buy, and certainly not the company that had spent 1.2 billion dollars (LINK: http://www.theregister.co.uk/2010/04/28/hp_buys_palm/) to buy the technology just over a year before.
HP announced the Touchpad sell-off with very little additional information apart from vague statements about continued support, an over-the-air operating system update and general remarks (LINK: http://www.theregister.co.uk/2011/08/18/hp_kills_webos_tablets_and_phones/) to the effect of, “We’re not going to let WebOS die.” In theory, buying an HP Touchpad was – and still is, if you manage to get your hands on one – a huge risk.
But surely people aren’t so dumb as to buy a device when they don’t know what the future holds for it? Either yes they are – or just perhaps, the strategists and pundits got it wrong and there’s more to this whole tablet thing than a one-horse race with a few stragglers.
So, what lessons can we take away? First off, and to give the poor multibillion company some credit…
1. The Tablet Effect is real. Really.
HP’s CEO, Leo Apotheker, was not the first to use the term ‘tablet effect’. While the term is barely a year old, however, we should be in no doubt about the impact (LINK: http://www.theregister.co.uk/2011/08/24/acer_first_ever_quarterly_loss/) the forearm-top devices are having on the low-end PC and netbook markets.
However, the take-away lesson from the past week is that we’re talking about a tablet effect, not an iPad effect. Whatever fan boys may like to think, Apple didn’t invent the form factor; however (and unlike Microsoft) it did make the device usable, embracing simplicity and the user experience just as it did for smartphones.
We should all be grateful for the relentless pursuit of great design exhibited by Steve Jobs’ Apple. But let’s not think that the only tablets in the world in a few years’ time will be iPads, any more than we should think the tablet itself will become the dominant form factor. We’re too fickle a race for that, and no doubt there exist form factors we haven’t even thought of yet.
Indeed, as the events of the past few days have illustrated, a massive pent-up demand exists for tablet form-factor devices from any manufacturer, if only…
2. A price point can be identified for mass tablet adoption
While Apple may have been first to market with a workable design, application and content delivery model for the iPad, it remains a premium product. Cue a massive mistake from just about every other manufacturer – assuming that Apple’s pricing structures could be adopted by everyone else.
What a bad idea, for so many reasons – not least that Apple purchasers are prepared to pay a premium because it’s Apple. The rest of the world’s customers are not prepared to do so, or indeed, simply haven’t been able to afford them. In other words (hope you’re listening Google), get the pricing right and customers will follow.
It could be argued that HP went in too cheap, moving the price point well within the $99 justify-to-wife territory, as the (unpardonably sexist) Linksys tagline used to have it. Linksys’ other, tried and tested price point for commodity tech items was the $199 justify-to-self: recent events quite clearly illustrate the existence of a latent opportunity for commodity end-point devices that are both simple to use and affordable to buy.
Given that cheap Android devices from unknown manufacturers are already available, it’s worth noting a third factor – that people buy from people and companies they trust. Even, apparently, if the companies are trying to divest themselves of the stuff in question. At least the quality won’t be in doubt.
Less relevant (sorry, geeks) is what’s happening under the bonnet, for example which OS is running. Which suggests, despite what the commentators might have you think, that…
3. WebOS has – or had – a market, as do other operating systems
A long time ago, the first Palm Pilot (LINK: http://en.wikipedia.org/wiki/Pilot_1000) devices became very popular very quickly, on two counts: device usability and developer platform. That was it. With WebOS, the (supposedly rejuvenated, but cash-strapped) company strove for similar goals, achieving the first in spades but struggling to grow the second.
While things like mass market adoption don’t happen overnight, the fact is that the two factors work together. People build apps for Android devices because of the now-present user base, and people buy such devices because they have the apps. It’s a lesson that in-for-the-long-haul Microsoft knows very well, which is why the company continues to chip away (LINK: http://www.reghardware.com/2011/07/29/windows_phone_mango_outed_in_feature_video/) with Windows Phone 7 despite it being a seemingly thankless task.
HP knew it too – as HP’s CFO noted (LINK: http://www.theregister.co.uk/2011/08/19/hp_q3_restructuring/), the potential for ROI just wasn’t in a timescale which made sense for the company. That still offers an opportunity for others, and it would be folly to write off WebOS too quickly, particularly given that up to 2 million devices have just entered the market.
The fact that HP may have lost 400 million dollars in the process, on top of the acquisition costs, proves beyond reasonable doubt that the company is in a serious strategic mess. It proved itself tactically moronic (LINK: http://www.theregister.co.uk/2011/08/26/hp_pc_alive/) as well in how it approached the announcements and subsequent sale, reinforcing the ancient adage of…
4. Fail to plan, and plan to fail
The old ones are the best, eh? In this case, HP – that self-proclaimed biggest computer company in the world – demonstrated how to get something very wrong by not thinking through the ramifications. This happened not once – with the plan to ditch the PC and mobile hardware divisions, announced without warning even to senior executives in those divisions – but then a second time by flooding the market with low-cost hardware without informing its resellers on how to deal with the consequences.
The result was not only that multiple online sites were faced with massive, though not unprecedented, demand for technology; they then sent out mixed messages of their own, based, one would imagine, on the information they had received about stock availability, whether or not a discount was to be applied, and indeed whether each reseller was eligible for HP’s discounted pricing.
You don’t have to be a specialist in the field to guess that confidence in HP’s channel operations has been undermined, as the company did the equivalent of pulling the rug out from under the feet of its resellers and partners.
And even as this was happening, tablet-hungry geeks were whipping themselves into a frenzy of excitement. A problem exacerbated by the fact that…
5. We are not individuals, particularly where the Web is concerned
Anyone who followed the HP Tablet thread on hotukdeals.com – now with 22,000 posts the longest thread (LINK: http://www.hotukdeals.com/misc/palm-pre3-hp-touchpad-16gb-32gb-89-115-from-today-stock-levels-official-thread-p/1000474) the bargain hunters’ forum has seen – would know that just the potential to join in was enough to entice some people. To paraphrase, “I didn’t even want one until I started reading this!”
While lessons 2 and 3 are undeniable, we must also take into account the propensity of the human race to want to participate. Sometimes for better, sometimes for worse, the net effect is that people can follow the crowd, even when it’s not in their interests. Indeed, this factor is central to new technology adoption.
Not only was it demonstrated by the desire to own a device some may not even have heard of prior to the fire sale, but also by the repeated behaviours that emerged once people had the bit between their teeth. Simply put, every time an online retailer mentioned the Touchpad, hordes of people would rush the site concerned, just about every time causing it to collapse under the load. Which led to the inevitable proof that…
6. Scalable deployment is even harder than clever people think
Call it what you like – adaptive IT, dynamic infrastructure, agile, whatever. If you believe the computer manufacturers, the whole point is that we can now build IT systems that can scale to meet unexpected spikes in demand. E-Commerce has been around for over ten years now, and site owners have had plenty of time to architect their systems in the right way to meet demand.
But site after site collapsed under the strain. Okay, the retailers exhibited some pretty daft behaviour of their own. You’d think, if you were Misco or Staples, say, and you’d just watched Dabs.com and shop.BT.com brought down, that you’d learn to keep stumm, or to invite people to a microsite by email, or indeed to do absolutely anything other than say (LINK: http://twitter.com/#!/Misco_UK/status/105956654125809664), “If you’re looking for reduced prices on the HP TouchPad, stay tuned! We’ll have some news soon, watch this space!” Come on guys, what were you thinking?
Worst culprit of all perhaps was HP itself, whose US web site (LINK: http://news.cnet.com/8301-1001_3-20095208-92/hps-touchpad-fire-sale-the-fallout/) and employee shop both collapsed under the demand. Not a great advert for a company that sells world-class infrastructure, and one that has been banging on about it for longer than anyone cares to remember.
From which we can draw the only conclusion that…
7. Information technology still has a long way to go
So, to summarise: we continue to be hungry for easier to use, cheaper end-point device form factors; Apple’s great designs and monopolistic stance may grant it short term success but it’s probably not going to take over the world; technology infrastructure is as big and prone to failure as it ever was; and as both companies and individuals we are, in the words of a Matrix agent, “only human”.
Disappointing perhaps – after all, if you’ve worked in IT for more than a couple of decades, you might have hoped we’d have learned some of these lessons and moved on by now. On the upside, the market churn, continued problems and countless mistakes will keep most of us in jobs for the foreseeable future, so it’s not all bad.
The HP Touchpad saga has still to play out: perhaps the level of market interest will spur activity around WebOS, and if not, Android; perhaps the flood of tablets into the market will spur other manufacturers on to produce lower-cost devices; perhaps the impact will be felt more around other industry areas such as publishing or HTML5 adoption; who can say.
The final lesson of all should be around making assumptions about which way this business is going, based on media hype or hearsay. However smart people make themselves out to be, however much research we ingest or advice we take, nobody has a monopoly on the future.
September 2011
09-23 – Virtualization Recruitment 0.1B
Virtualization Recruitment 0.1B
Virtualization Jobs: Top 5 Questions You Should be Asking Potential New Sys Admins
When someone like Eric Schmidt, Chairman and ex-CEO of Google, says [LINK: http://www.eweekeurope.co.uk/news/it-skills-drive-for-british-school-kids-revealed-39929] there’s an IT skills shortage in the UK, it’s time to take notice. On the upside, there are moves to get IT back into the national curriculum, but that will be scant comfort to companies wishing to employ skilled IT professionals.
Exacerbating the issue is the fact that it isn’t absolutely clear what skills will be needed. In the words of one university lecturer, “We’re training students for jobs that don’t even exist.” This is true in all areas of IT, from information management and application development, to infrastructure design and systems administration.
The result is a challenge for both recruiters and those prospecting for a job.
Thinking specifically about the game-changing technology that is server virtualization, we know that it brings both opportunities and challenges for those working on IT’s front line. While virtualization changes how we think about physical infrastructure, it does not replace it – so we need to start with the assumption that the candidate has a well-grounded knowledge of IT architecture, how systems are built, fault-tolerance and so on.
The recruitment challenge is to ensure that candidates understand what changes when virtual machines add to the mix. As well as a check-box knowledge and experience of the different virtualization platforms such as VMware, Xen and Microsoft, find below our top 5 questions to separate the virtualization-savvy from the also-rans:
What do you see as the security challenges when it comes to virtualization?
While virtualization aids flexibility, it also increases the ‘threat surface’ of the IT environment, and introduces some new risks. The security challenges around virtualization are well-documented [LINK: http://www.theregister.co.uk/2009/08/05/virtualisation_security_oxymoron/], but not necessarily taken into account at the coal face of virtualization.
How do you see the relationship between virtualization and cloud computing?
Virtualization breaks the bond between processing and physical machines; cloud computing enables the use of scalable processing facilities, offered by online providers through virtualization. From an architectural standpoint this raises the question of what should run where, and a clear understanding of the relative cost of processing is essential.
How would you approach backup and data protection for virtual systems?
Done wrong, virtualization can make backups and data protection harder – not only because of the potential for the number of virtual machines to get out of hand (VM sprawl), but also because some existing backup technologies see virtual machines as single files, rather than as containers. This causes problems in the length of time taken to back up, the ability to catalogue, and of course the need to restore efficiently.
What is your knowledge and experience of software licensing?
While licensing strategy is not strictly a system administration task, the ability to clone, store and run virtual machines at will has put the spotlight on how the systems, database and application software is licensed. Licensing best practice is currently in disarray as different vendors have adopted different strategies, but it is important for administrators to be clear on policy so they don’t inadvertently breach procurement guidelines or contractual law.
How would you implement management best practice for a virtual environment?
Last, but certainly not least, is the impact virtualization has on how IT services are delivered. Virtualization enables agility and responsiveness, but if it is left uncontrolled it can quickly degenerate. Best practice frameworks such as ITIL are evolving – some say too slowly – to respond to the need to balance flexibility with control. The simple answer to this question is that the minimum necessary controls should be implemented early and stuck to rigidly, however demanding the user base may be.
There we have it. If you have any feedback, or know of any other questions that would be relevant to ask, let us know through the comments.
Alternate beginning:
Server virtualization is a game-changing technology bringing both opportunities and challenges for those working on IT’s front line.