Welcome
This is Jonno’s blog archive — a personal and professional collection spanning two decades. Enjoy!
Cheshire Cat (Bloor Research)
1999
This section contains posts from 1999.
April 1999
This section contains posts from April 1999.
Government launches IT White Paper
1999-04-01
Perhaps the most important elements we can glean from the government’s white paper “Modernising Government” are the milestones it sets for the availability of its electronic services. The first is that 25% of government services should be available online by 2002. The second sets a target of 2008 for all government services to be accessible electronically. Pragmatic application of the Pareto principle – the 80:20 rule – is required to ensure that the most useful services (in terms of both utilisation and benefit) are included in the 25%. In this way the needs of most individuals and businesses should be met, and the costs of delivering government services should be dramatically reduced within three years. A lot, however, is dependent on facilities to access such services. Will the government run electronic and paper facilities in parallel? Will this be seen as the moment when the possibility of information haves and have-nots became reality, with connected citizens being ever better informed and better able to access services than “the great unwired”? Proactive steps will be necessary to avoid this.
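The Pareto selection the white paper implies can be sketched numerically. The service names and figures below are entirely hypothetical – real usage data would drive the actual choice – but they show how putting the busiest quarter of services online can cover the bulk of transactions:

```python
def services_to_put_online(volumes, fraction=0.25):
    """Rank services by annual transaction volume and keep the top fraction."""
    ranked = sorted(volumes.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))
    return [name for name, _ in ranked[:cutoff]]

def coverage(volumes, chosen):
    """Share of all transactions handled by the chosen services."""
    return sum(volumes[name] for name in chosen) / sum(volumes.values())

# Hypothetical annual transaction volumes (millions) for eight services.
volumes = {
    "vehicle licensing": 50, "tax returns": 30, "benefit claims": 8,
    "passport renewal": 4, "planning applications": 3, "court fees": 2,
    "fishing licences": 2, "export permits": 1,
}
online = services_to_put_online(volumes)  # the busiest 25% of services
share = coverage(volumes, online)         # proportion of all traffic covered
```

With these invented figures, the two busiest services out of eight carry 80% of the traffic – which is exactly the skew the 80:20 rule predicts, and why choosing the 25% well matters.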
The 2008 deadline might be “realistic,” as noted by Ian Taylor MP at the launch of IT-Director.com yesterday evening. However, when the time comes it will be very difficult to judge whether the government of the day has achieved the target. The eWorld is commonly known to be changing at the pace of dog years; it is also recognised as an enabler of vastly different ways of working and living. It is likely that the services provided by government will change significantly over the next decade, particularly as it becomes clear that the internet is a vehicle for even the most insignificant to have a voice. Hence, even if the government of the day meets its goals, it is likely that whatever is achieved by 2008 will bear little resemblance to what was initially planned.
(First published 1 April 1999)
Open Goal for Microsoft
1999-04-01
Why do Microsoft want to open their source? The answer is that they probably don’t. For one thing, it is unlikely that the Windows code is particularly pretty. Any software that has gone through the same number of reincarnations as Windows is likely to contain plenty of redundancies, inconsistencies and downright errors. There are tens of thousands of seasoned developers who will be going over the stuff with a fine-tooth comb the moment it is released, looking for problems. Undoubtedly they will find them, and gleefully report them to the baying (and burgeoning) anti-Microsoft camp, adding to the company’s already tarnished reputation for quality. Given the current climate, it may be that the software giant has no choice but to surrender a little of its projected image of perfection. A little lesson in humility is not such a bad thing, once in a while.
(First published 1 April 1999)
IT-Director.com launched by Bloor Research, Certus and Silicon News
1999-04-01
Yesterday evening, the realities of how business will operate in the 21st century were very much in evidence at the House of Commons launch of IT-Director.com. Speakers Ian Taylor MP, Paul Cooper of British Gypsum, David Taylor of Certus and Robin Bloor of Bloor Research, all described how co-operative approaches, such as those enabled by IT-Director.com, would become essential elements of business success.
Robin Bloor, CEO of Bloor Research, described the role of IT-Director.com as a provider of “instant analysis” to its subscribers. Ian Taylor MP noted this as a key facet of the online service, adding that he was delighted it would be available to members of the House. David Taylor, President of Certus, explained how IT-Director.com and Certus intended to provide a triangle of communication between end-users, vendors and decision makers. “IT-Director.com will help us provide a quality IT service to positively support business goals,” he noted. Paul Cooper, IT Director at British Gypsum, stressed the role of the future IT Director as focussing on business requirements and issues, whilst promoting and facilitating the use of IT. Continuing the co-operative theme, Certus also announced the launch of its 4th R campaign, involving businesses working with education to enable the next generation of IT-literate employees.
(First published 1 April 1999)
The Art Of War
1999-04-09
The Web is changing the world, in peace and, so it would seem, in war. All sides of the conflict, plus external agencies, are using the internet more than ever to aid them in their tasks. If the rumours are true, for example, systematic hacking of both military and civilian sites is becoming expected as a weapon of war. At the same time, as in Rwanda, relief agencies are reported to be using bulletin boards to help reunite refugee families. Donations to such agencies are now accepted online. News agencies are broadcasting up to the minute information, in a range of formats and languages, to a global audience. As with all tools, the internet cuts both ways.
(First published 9 April 1999)
ISAs and Legacy Systems
1999-04-09
ISAs are quietly posing a significant problem to the IT departments of financial services companies. There are several reasons for this. First, ISAs are essentially a management vehicle with which a customer can manage a variety of investment types. The implication for IT is that legacy systems which could previously be run standalone must now interoperate. Similarly, business processes, for example for savings and for investment customers, are having to be merged. Second, the complex rules governing ISAs were only available in draft form until very recently, so organisations have had very little time to implement ISA facilities in their applications. Together these factors are causing a good deal of grief, not to mention cost. It could be argued that the ISA situation will be beneficial, as it encourages companies to integrate their legacy applications and, in the process, provide a better overall service. However, what with the Year 2000 and the Euro, it is probably an additional headache they would have been glad to do without.
(First published 9 April 1999)
Sailing the Shifting Sands of Silicon
1999-04-12
One of the issues faced by hardware manufacturers is knowing which technologies and markets are a stable enough foundation on which to base a business strategy. Recent reports are an indicator of this, from IBM's report of a $1 billion loss for its PC division, to Compaq's shock earnings announcement today. Stable foundations are elusive, but companies have no choice but to continue their efforts to identify, encourage or even create points of sufficient solidity on which to develop and market their products. Even if some organisations, such as Dell and Gateway, have business models which are demonstrably successful for a while, it is clear that no one company has the monopoly on predicting the future.
One company which may be getting it right is Hewlett Packard. HP has a hard-earned reputation as a provider of quality, technologically sound systems and equipment. Unfortunately, it has also gained a reputation for being a little dull, and it has been left behind in the systems race. There are signs that all this may be changing. Last month, HP announced it was spinning off its testing and measurement division. The company is now announcing a range of computers targeted at the e-commerce market. The new N-Class servers will run Unix as well as NT, demonstrating the company's strategic commitment to both operating systems. Perhaps more importantly, the servers are designed to be upgradable to Intel's 64-bit Merced processor, due out next year. Hewlett Packard are not just aligning themselves with Intel's strategy: they are instrumental in the development of McKinley, Merced's successor, and are developing a number of key components of the chipset required by McKinley.
This approach is low-risk, considering that other parts of the industry (including Microsoft) are endorsing Intel's 64-bit architecture. At the same time, HP appear to be injecting more pizazz into their product marketing. In doing these things, they are positioning themselves as a major player in the systems battles of the future. Who dares, might just win this time around.
(First published 12 April 1999)
Global Village Voice
1999-04-12
BT is launching a voice-over-IP service. Read this statement, now read it again. British Telecom, the organisation providing telephone and dial-up services to the vast majority of the British public, is launching a service which enables its users to communicate via the Internet at the price of a local call, wherever in the UK they may be situated. Admittedly, the service is only open to customers of BT's own ISP. Also, Internet telephony still suffers from quality issues reminiscent of patching a call to a less-advanced foreign country ten years ago. But still, this move by BT is a milestone in the inexorable rise of the Internet. What we are seeing is the convergence of technologies, and this has several implications. Given BT's acceptance (and endorsement) of voice-over-IP, the question raised is that of relative cost. Subscription to BT's ClickFree service is now free, and two BT ClickFree subscribers can call each other at local rates as long as each has a PC. In other words, users of telephone handsets are going to be discriminated against – these unfortunates will still have to pay the price of a long-distance call. This overhead will become difficult to justify as quality improves and more and more people start using the Internet for voice calls.
BT, like other telcos, has already responded to concerns on call pricing, with the price gap between international calls and local calls decreasing. This is fair enough, as most of the costs of a call are derived from enabling a connection to the local exchange and subsequent billing of the call. BT will be keen to keep its revenue streams open for as long as possible. Ultimately, though, the costs of both national and international calls will have to bow to the pressure of the Internet.
(First published 12 April 1999)
Applications - Rent or Buy?
1999-04-14
The chances are you have not heard of Corio; you will, however, have heard of British Telecom. These companies are linked by a common objective - to provide application services to customers over the Internet. Corio, a Californian firm, is linking with Peoplesoft; BT has struck a deal with SAP. There will be many providers of such services, making enterprise-scale applications such as these available to customers that could not otherwise afford them. Smaller-scale applications can also be rented out over the Web – for example, Yahoo already provides email and scheduling and is reported to be considering providing word processing facilities.
The advantages of application outsourcing are numerous – for example, capital costs and administration overheads are enormously reduced, and customers can take advantage of the newest releases of applications. The downsides are yet to be fully appreciated as this market is still in its infancy, but they are likely to include problems of interoperability between local and remote applications, issues with maintaining service levels and difficulties in changing suppliers. Just as with the outsourcing of IS departments a few years ago, perhaps the biggest problems will stem from the legal wrangles which will inevitably arise as customers find the service is not all they had hoped for. Application outsourcing, whilst appearing extremely attractive, should be treated with the utmost caution until its benefits and costs have been fully appraised.
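The capital-cost argument can be made concrete with a back-of-the-envelope rent-versus-buy comparison. All the figures below are invented for illustration – real licence fees, running costs and rental rates vary wildly by vendor and deal:

```python
def total_cost_owned(licence, admin_per_year, upgrades_per_year, years):
    """Up-front licence plus ongoing administration and upgrade costs."""
    return licence + (admin_per_year + upgrades_per_year) * years

def total_cost_rented(rent_per_year, years):
    """Flat annual rental; the provider absorbs administration and upgrades."""
    return rent_per_year * years

def breakeven_years(licence, admin_per_year, upgrades_per_year, rent_per_year):
    """Years after which owning becomes cheaper than renting (None if never)."""
    annual_saving = rent_per_year - (admin_per_year + upgrades_per_year)
    if annual_saving <= 0:
        return None  # renting never recoups the up-front licence
    return licence / annual_saving

# Invented figures: 500k licence, 80k/year to run in-house, 150k/year to rent.
owned_5yr = total_cost_owned(500_000, 50_000, 30_000, 5)
rented_5yr = total_cost_rented(150_000, 5)
years = breakeven_years(500_000, 50_000, 30_000, 150_000)
```

On these made-up numbers, renting is cheaper for roughly the first seven years – which is why the model appeals most to businesses that could never fund the licence in the first place.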
(First published 14 April 1999)
Internet Value – Business Sense
1999-04-14
Nobody really knows what the Internet is worth, but whatever its value, companies which have successfully invested in the Web are reaping its rewards to the detriment of those left behind. At the Massachusetts Institute of Technology yesterday, Bill Gates underlined his predictions of ubiquitous Internet usage by saying that “Billions of dollars of market valuation are based on knowing how quickly this [Internet uptake] will happen.” Organisations unable to exploit these cash flows are proving the losers. For example, Merrill Lynch ranked 12th among US securities companies handling Internet IPOs last year, down from 8th in 1997. Merrill blamed the “less-than-constructive” advice of its (former) Internet analyst, but it must bear the cost.
Smaller companies, too, are suffering as they find that the Web does not wait around for the late arrivals. Bob Geldof’s flight reservations site deckchair.com, announced last September but not launched until today, will pay the price: the unique features it touted six months ago, such as flight availability, personalisation and online booking, have become standard on a wide variety of travel sites. The Internet is a club with little respect for its members: it remains to be seen whether deckchair.com is arriving too late to join.
(First published 14 April 1999)
Linux - Immature but happening anyway
1999-04-14
The open software movement has long been seen as a David, able to topple the Goliaths of greedy corporations and sluggish standards bodies. In reality, better parallels might be derived from the natural world, where myriads of chaotic organisms work together, over time reducing old orders to rubble and creating their own. It happened with TCP/IP, which was considered the de facto standard by the masses long before it was grudgingly adopted by the big people. It shall happen, indeed it is happening, with Linux.
Despite its ready adoption by the technical community, it has to be said that Linux is probably not ready for mass-market rollout. As a Unix derivative, Linux is too complex to be given to “just anybody.” Efforts are underway to change this – Caldera Systems, for example, are launching an easy-to-use Linux but even they admit it isn’t ready to be pitched against Windows. Linux also lacks the applications support of Windows, a fact acknowledged by Red Hat, another Linux distributor.
Yes, Linux is immature. However, this does not seem to be dampening the desire for its adoption across the industry. In a recent poll of 3Com customers, 50 percent of respondents requested support for Linux, a fact which has caused 3Com to rewrite its Linux strategy. Other companies are experiencing the same upsurge and are responding accordingly. It is unlikely that the Linux bandwagon is a fad: rather, the global technical community is responding to the potential of a robust, flexible, free operating system. Experience shows that, once this level of consumer interest exists, it is unlikely to go away despite the best efforts of the giants to subvert or ignore it.
(First published 14 April 1999)
The Sun is Shining Again
1999-04-16
SUN Microsystems, it would appear, can breathe again. SUN have never been unsuccessful in monetary terms; in technological terms, however, the last few years have been a little tense, to say the least.
Not so very long ago SUN was the provider of choice for Unix workstations and servers. The mainframe was dead, or so it was assumed, and the lower end of the market consisted largely of PCs running MS-DOS. The workstation market was virtually sewn up, with users finding the term SUN as synonymous with workstation as Hoover was with vacuum cleaner. Need to develop software? Use SUN. Need to run a CAD application? Use SUN. They had it made.
Things started to turn sour for SUN when Windows 3.1 was launched. Within months, the costs of workstation hardware and software seemed all too expensive relative to the apparent cheapness of a PC running Windows applications. IT managers, used to running PCs, saw cost savings and very quickly the de facto desktop became the PC. SUN were on the run.
Smaller companies might have buckled under the joint pressures of Intel and Microsoft, but somehow SUN have refused to let go. Recognising that the desktop battle was lost, Scott McNealy turned his full attention to the still lucrative server market. His timing was both fortuitous and impeccable: within months the World Wide Web had left the launch pad and a whole new, rapidly growing server market had been created.
SUN have not had it easy over the last couple of years but with equal measures of good luck and judgement they have transferred their brand. SUN is now the Internet server of choice: its alliance with AOL is ensuring a continuation to both its revenue stream and its reputation.
(First published 16 April 1999)
For Banking, the Future is Online
1999-04-16
A spokesman for NatWest said yesterday that Internet banking was a “luxury service” and that NatWest would not be closing any high-street branches. There are two ways in which these remarks can be interpreted. Firstly, that NatWest do not see the Internet as something which will significantly affect their business. This interpretation goes against the grain – the Internet is speeding up trends which are already well-established, such as the market’s acceptance of telephone banking. It is very likely that in the future, a significant proportion of banking transactions will be undertaken without a branch visit. Such a vision is difficult to realise at the moment, as the provision of facilities is dependent on customer adoption and customers are notoriously conservative. However, it is only a matter of time before the UK consumer decides that it is, indeed, more convenient to dial a number or type a Web address rather than try to find a parking place on a busy street in office hours. The second way to interpret the remarks is that NatWest are discussing the short to medium term. We must hope that this is so: when the use of online and telephone service reaches critical mass, no business will be able to avoid its effects.
(First published 16 April 1999)
The hidden cost of Internet Real Estate
1999-04-22
Wallstreet.com was sold for 1.03 million dollars. What, the company? No, just the name, to a Venezuelan casino. The purchasers don’t think they are crazy, so good luck to them. Across the Atlantic, the premium on a good “.com” name may be seen as worthwhile, but it may have repercussions back on the old continent.
Despite the existence of national addresses such as .co.uk and .fr, the .com domain is accepted as the domain of the global enterprise. 90% of words in the English dictionary are now registered as .com domains, the prediction being, “if you want soap, go to soap.com. If you want flights, go to flights.com.” This may or may not be true, but it certainly makes sense and seems to be borne out by patterns of Internet usage, hence the high premium on the better terms. The downside is that the majority of .com addresses are registered to US companies. So, if you want flight tickets, you may end up paying an American company for the privilege. European businesses may lose out to US companies purely because of Web serendipity. This may not be a problem for most European companies – the French will, for example, go to “savon.com” rather than “soap.com”. But watch this space – American entrepreneurs will not take long to wake up to the commercial opportunities they are at the moment too busy to cover.
(First published 22 April 1999)
Turbulent times for IT stocks
1999-04-22
If we needed any more proof that we are in the midst of a huge wave of technology change, we have just had it. IBM have made a stonking profit, shattering analysts’ predictions. But there’s more – Lucent, another old dinosaur, has done it as well. Compaq and HP are down but SUN is up, Informix is up, Unisys is up. What’s going on? Simple. Companies which are successfully leveraging the Internet, either by providing the technologies to support it or by using it as a business tool, are winning. Companies which fail to tell a coherent eCommerce story, or which fail to exploit the capabilities the Web provides, are losing out. Sure, the world is more complex than this simplistic view might suggest, but even with other factors taken into account, companies big and small are finding it very difficult to buck this trend.
(First published 22 April 1999)
Quantity is not the only answer
1999-04-22
IDC are predicting a shortfall of 600,000 networking professionals across Western Europe by 2002. This figure is based on an expected requirement for 1.6 million network staff, given current trends. Phew. This is scary stuff. It gets worse: if the situation is not rectified, say IDC, we could be in for a European slump as businesses fail to leverage their growing success and existing assets due to inadequate infrastructure. Dark days are ahead, it would appear. Or would it?
There exist a couple of factors which, together, might augur a less bleak future. The first lies in the past: it is something we have always known. Motherhood, if I may. The second is a direct result of providing global connectivity, in which network professionals have already played a significant role.
So to the first point. IDC rightly say that training and development needs are becoming acute. However it is well known in networking circles, as in others, that real experience has far greater value than the classroom learning by rote that is becoming commonplace for vendor-sponsored certification programmes. The ability to answer a multiple choice questionnaire, a networking engineer does not make. Street knowledge and management experience tells us that some individuals, qualified or otherwise, are worth their weight in gold-plated terminators. We know who they are – they come in, they diagnose, they tweak and go away again, leaving us to wonder why so many features of networking equipment remain undocumented. Identify these people within your organisation. Treat them well, pay them well and keep their skills current. Involve them in infrastructure design decisions and strategic rollout programmes. Or leave them in the workshop, but don’t be surprised when your network management policy is jettisoned because people are too busy fighting fires.
The second point is newer and requires a little more thought. There has been, indeed there is being, an explosion in connectivity caused by the convergence and standardisation of communications technologies - we call this internetworking. The internet is bringing with it a raft of new options and opportunities to businesses. For example, applications outsourcing enables smaller businesses to take advantage of enterprise applications with minimal administration overhead. Secondly, telecommuting is happening more slowly than predicted, but it is globalising: it is now possible to outsource services (from managed call centres to point secretarial work) to companies and individuals literally on the other side of the globe. Such outsourcing of both systems and services is implicitly cutting the requirement for local infrastructure, hence the need for engineering support can be reduced leaving businesses to concentrate on their core offering.
Smart businesses will be worrying about the inevitable future skills shortage in network staff. Training and development programmes are essential, but so is looking after incumbent staff. The smartest businesses of all will be looking to profit from all the opportunities presented by the internet revolution. Good luck to them.
(First published 22 April 1999)
Coming soon to your screens…
1999-04-26
Surely the most exciting developments on the Internet at the moment are those relating to multimedia. Well, let’s face it, these developments would be exciting if we could take advantage of them. The possibility of a full surround-sound audiovisual experience, broadcast directly from the Web, is, frankly, non-existent at the moment. Even with a leased line, the closest we can get is a low-quality, stop-start audio stream. Having said that, we all know that the bandwidth will increase and the technology will improve… but even then the chances are that what we want (streamed, high-quality, full-screen video) will remain a distant mirage for some time yet. We do, at least, know that it will come. It is a good job that the technology is not yet ready, because if it were possible right now, it is probable that most of us wouldn’t be quick enough to do anything astounding with it. We have 3-5 years of slack before the immersive multimedia wave hits – will we be ready?
(First published 26 April 1999)
Spam, spam, spam, spam, spam
1999-04-26
Spam is back in the news again. Not special American processed meat, as I’m sure you know, but unsolicited email. How this won the name “spam” is lost in the mists of dog time, but it shares some characteristics with its namesake – instantly recognisable, unavoidable and often without taste. So – whilst Software Warehouse is saying that it probably can’t take Insight to court for spamming the former’s customers (with a dubiously obtained database), the European Parliament has voted against an anti-spamming amendment to the eCommerce bill.
In fact, in a funny way, it is Software Warehouse (or their ISP, Planet Online) that should be in the dock as well as Insight. According to UK Data Protection law, the keeper of personal data is held responsible for its upkeep and security so, in a case like this, they should carry the can. There have been several cases in the past, however, of information being obtained from ISPs, which are a common target for hackers. It is probably unfair to expect any ISP to somehow be able to secure this information when there is a global network of internet junkies conspiring together to look for security holes. The odds are undoubtedly stacked against ISPs.
It is probably also true to recognise that spam is a global problem. The Internet is too complex to be shored up against the distribution of unsolicited information from any far-flung corner of the world-wide web.
Given the current inevitability of spam, however, we should not sit idly by - legislation is still necessary. National and international laws currently lag far behind the technology curve, but efforts should continue to ensure that this gap is closed. Long arm laws, which enable prosecutions across international boundaries, are being implemented in the US and this concept should be extended to our own shores to counter the web’s global nature. Information misuse laws need to be modified and extended to take into account the new possibilities afforded by technology. The biggest problem here is one of timeliness - law-making cycles should permit the development of a flexible, responsive legal framework which keeps its currency against a fast-moving technological background. Otherwise, unscrupulous users of the Internet will be permitted to keep one step ahead of the law, conducting injustices without ever committing a crime.
(First published 26 April 1999)
Signs of Internet Immaturity
1999-04-27
Three young, exuberant individuals in the US are ending up in court following a spoof press release concerning the availability of high-speed communications bandwidth. This unfortunate April Fool incident, in which the joke seems to have turned on the jokers, is just one example (there were unfounded stock rumours being posted on message boards last week) of the immaturity being seen on the internet. This is not really about the childishness of the people posting the information. We all need a sense of humour. No, this concerns the current state of the internet and the resulting gullibility of its users. Hey, it’s not their fault. The electronic superhighway needs us to be reasonably open to just about anything happening – there are so many advances in technology and capability that we seem required to suspend our disbelief for a good few years yet if we are to benefit from its potential.
Unfortunately, this leaves us open to confidence tricks, both humorous and malicious. So what should we do? We should move forward with open minds, learning as we go to take advantage of this new medium. Should tricksters try to hold our naivety to ransom, they should pay for their actions. But we should allow a little flexibility to deal with the occasions when undergraduate tomfoolery catches us out.
(First published 27 April 1999)
IT Security doesn’t have to be Rocket Science
1999-04-27
Silicon.com’s headline story that the UK government is exposed to hackers is worth reading. Firstly, it says that security firm NTA Monitor found that 31% of .gov.uk email servers were running flawed software. That’s the scare-mongering bit. It goes on to say how NTA Monitor discovered this – by sending emails to the servers to identify which software the servers were running.
It is commonly known in IT security circles that the best way to identify security holes is to use the same techniques as the hackers. Indeed, suites of tools are available which act as “auto-hackers”, running ensembles of simulated hacks to find weaknesses. Some of these tools are available for free, from organisations such as CERT. The obvious question is: can’t the hackers get hold of the tools as well? Of course they can. The principle is that hackers know this stuff anyway, so IT managers should also be informed. This is why IT managers are duty-bound to use such facilities and close up any holes they may find. Otherwise the hackers will do so instead. Illegal access becomes no more difficult than running a few scripts and reading a log file.
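The “sending emails to identify software” technique described above is essentially banner grabbing: a mail server announces its software, and often its version, in the greeting line it sends on connection. Below is a minimal sketch of the general idea – the hostnames are hypothetical, and this illustrates the technique rather than any particular firm’s actual tooling:

```python
import re
import socket

def parse_smtp_banner(banner):
    """Pull the advertised software name out of an SMTP greeting line.

    Mail servers answer a new connection with a line such as
    '220 mail.example.com ESMTP Sendmail 8.8.8'; auditors (and hackers)
    read this banner to learn what software a server is running.
    """
    match = re.match(r"220[ -]\S+\s+(?:ESMTP\s+)?(.+)", banner.strip())
    return match.group(1).strip() if match else None

def grab_banner(host, port=25, timeout=5.0):
    """Open a TCP connection to the mail port and read the greeting line."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.makefile("r", errors="replace").readline()
```

A security audit would call `grab_banner` against each server in scope (with permission), then check the parsed software and version against known-vulnerable releases – which is exactly why administrators often configure servers to send a less talkative banner.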
(First published 27 April 1999)
Network Computing has its place
1999-04-27
The Network Computer, launched with much hype and aplomb by Larry Ellison four years ago, is widely believed to be dead following only minimal success. What killed it? Although the Windows camp was largely responsible, through products such as Citrix Winframe and Windows Terminal Server, the advent of web browser-based applications should also take a share of the blame. Or should that be credit? As SUN’s I-Planet announcement demonstrates, the Network Computer, or NC, is, and always was, a myth.
I-Planet is a software package which runs on a server to enable applications and databases to be accessed through a Web browser. Browsers are becoming established as the standard device interface to the internet or intranet, hence the I-Planet concept is sound. SUN is not the only company to see that the browser-enabled, thin client approach is worth developing: most vendors, including SAP and Peoplesoft, are scurrying to web-enable their applications.
Is this facility any different to the NC architecture? No, of course it is not, in principle, but the NC concept is far too flexible to be limited to a single device type or hardware platform. All browser-enabled devices, from PCs to PDAs, mobile phones to set-top boxes, should be seen as Network Computers. Other architectures have their place, for example thick clients for graphics-intensive tasks, but the thin, browser-based client will be the best approach for a large number of applications. The NC might have died, but its spirit lives on and on.
(First published 27 April 1999)
May 1999
This section contains posts from May 1999.
ECommerce Crisis
1999-05-04
A crisis, it is said, is a problem with no time left to solve it. It may be triggered by external factors, such as losing a job or winning on the horses (and it’s a harder pill to swallow when the cause of our pain is that our wildest hopes have been met), but its causes most often lie in the past and creep up unawares. When dealing with a crisis, it is first necessary to admit fallibility: only then is it possible to start coping with the hand that has been dealt. We are all individual, sayeth Brian, but we are not immortal and we are all subject to the same patterns of behaviour as our peers. Even the stages of coping are predictable, to an extent – consider those identified by Elisabeth Kübler-Ross for people coming to terms with terminal illness: denial, anger, bargaining, depression and acceptance. These stages have been applied, in one form or another, to everything from dealing with herpes to coping with mediocre golf. It may be that the stages are not exact, but then this is not an exact science. They are a gross simplification of real life, but they are still recognisable to the majority of those who have lived through “difficult times”.
Why am I saying this? Well, let’s look for some parallels. Despite the powerful BPR rhetoric of the past ten years, do we look about ourselves and find a brave new world of mature, evolving, learning enterprises? Nope. It is fair to say that, although the headcounts are reduced and we are more reliant on bought-in services than before, not much has changed. To be fair, huge efforts are often made, for example reorganising corporate structures, merging with the competition and investing in call centres. But generally, these investments occur because the decision makers’ hands are forced by external factors such as increased competition and Y2K. It is no secret that the biggest blocker to business change is “resistance to change,” and the forming, storming, norming and performing mantra of organisational change bears a remarkable resemblance to Elisabeth Kübler-Ross’ five stages.
Whatever we want to call it, the Internet, the Web, eCommerce is just one more external cause which is going to trigger business crisis. Fundamental change we will see - it is already happening in the States and will come over here as inevitably as disco dancing and the skateboard. This is not just another stateside fashion, however – this is unavoidable change on a global scale. There is a fundamental difference between this and previous industry-shaking events. The first off the blocks, if correctly prepared, appears to be winning the race. Time is becoming crucial: companies which procrastinate are losing market share. This well-documented trend is an omen for us all. What is also interesting is the other side of the coin, where companies have been victims of their own online success – for example the computer systems of eTrade, the online brokerage, suffering a number of outages due to the volume of people accessing the site. eTrade has survived (with a mere 25 thousand complaints) but other such companies have not been so lucky.
Admittedly we are not in crisis yet, not quite. The percentage of UK business conducted online is at 0.5%, so there is no need for unmitigated panic. The trends and supporting factors suggest that in ten years’ time a large proportion of business will be conducted online. We can wait until external factors force our hands, or we can start a process now, remembering that the stages we will go through are largely predetermined. Sure, we need our denial phase. It is a time during which we assimilate the facts, assess the consequences and come to terms with the fact that nothing will ever be the same. Better that we start the process now and still leave time to reorient our businesses and profit, yes benefit, from the eCommerce revolution. We cannot make the revolution happen, and neither can we predict or plan for every eventuality. However, by forcing our own hands and coping with the consequences, we can at least orient ourselves so we are facing in the right direction when the time comes, for example by building business relationships with possible partners, or implementing the open, secure, scalable architectures which will be necessary for online access to our systems. Surely this is preferable to doing nothing in the blind hope that nothing will come of it, or worse seeing the policy of wait and see become wait and watch others succeed.
(First published 4 May 1999)
The demise of the call centre
1999-05-10
Over the past ten years call centres have boomed, and for good reason. Cost savings have been made by centralising customer service operations, optimising business processes and locating staff in areas of cheaper infrastructure and employment costs. How sad, then, that the internet has come along and wrecked all that good work.
How can this be? Call centres have a variety of functions, but one of their chief objectives is to manage the interface between customers and computer systems. Want to book a flight? Call our travel centre and we will type your details into our system and book it for you. Want to check your balance? Give us a call. But wait. What if you could access such functions directly, through a Web browser, without being held in a queue and without anyone needing to take the call? Interested? Then you will understand why the demise of the call centre has been predicted.
Call centres are not dead, or even dying. However the need for call centres to take care of mundane, routine enquiries is rapidly going away. As Tom Black of the Smith Group says, this trend will accelerate with the arrival of digital, interactive television and the set top box. Call centres of the future are likely to be smaller and more focused on sales activities and proactive customer support, integrated with online services. Call centre strategists should consider provision of customer services using all access devices, not just the telephone.
(First published 10 May 1999)
Agnostic to what’s in the box
1999-05-10
An agnostic is a person who does not know if a god or gods exist. Hence, as Microsoft continues its march into the hinterlands of consumer devices, it has a battle on its hands. Microsoft’s recent alliance with AT&T signals its “bullishness” (a word which the company is very attached to) about its entry into the internet set top box market. As discussed in Friday’s analysis, its rationale is to put Windows CE onto 5 million of these devices. The fray is already joined by Sun with Java and Symbian with EPOC – AT&T’s position is now unclear. What a battle there will be. But it will pass largely unnoticed by the mass market.
Microsoft’s Windows 95 announcement was a marketing coup which has never been bettered, winning the hearts and minds of millions of PC users. Can you imagine being able to announce a new mobile phone or laser printer in such a way? No. Device technologies don’t lend themselves to such stunts, as their users are largely oblivious to what goes on inside them. Internet set top boxes will enable access to the web without requiring consumers to back one technology or another. In any case, it is likely that the choice of set top boxes will largely be decided by the service providers - the ISPs, media companies and cable companies. This is a fact already being exploited by Microsoft, for example through their Telewest alliance announced today. Indeed, as discussed today in “IT infrastructure for free”, the chances are the boxes will be given away or incorporated in other devices such as the TV set. IT vendors may have god-like status in technological circles, but in the greater, unknowing world they are less likely to be recognised as such. Microsoft’s battle is not so much against the other giants in the industry, but it is a fight for brand recognition in a mass market of uninterested consumers.
(First published 10 May 1999)
IT Infrastructure for free
1999-05-10
So, BSkyB are to give away satellite receivers for a song (or a forty-pound connection charge). This announcement may have come as a bit of a shock to those poor punters who have just shelled out hundreds of pounds. Fortunately for them, the broadcaster is to freeze their subscriptions to compensate. In any case, in this converging market, the news should not come as a surprise.
Let’s think about this. Again and again, we have examples of technology and services being given away for free. Look at Microsoft’s Internet Explorer and the now-ubiquitous mobile phone (which has been consistently offered at several hundred pounds lower than its cost of manufacture). Reasons may vary: in all cases the suppliers are looking to win market share which they can exploit later or leverage for other products; in some cases costs are recouped, at least partially, through subsequent subscriptions. A company in the US is even giving away PCs for free, but considering that these can now be built for the cost of a mobile phone, this approach should not be unexpected.
Given that this trend is set to continue, we are likely to see ever more imaginative approaches to making money out of the internet, both to cover costs and to make a tidy profit. Revenue will come from advertising and customer profiling (click trends), sure. Money is also to be made by hosting the channels through which “real” goods and services can be purchased, such as travel bookings and car servicing. Tomorrow’s entrepreneurs will have to think of ever more esoteric ways of extending and exploiting this medium.
In the meantime, both businesses and consumers are likely to profit from the situation. It might be wise for anyone about to make a major investment in IT to take stock of what is likely to be made available for free, such as email access or extranet services. At least it should enable us to harden our negotiating stance.
(First published 10 May 1999)
A Safety Basis for Y2K
1999-05-11
So, the Channel Tunnel is going to shut on New Year’s Eve 1999. This has been decided following concerns about the interfaces between Eurotunnel systems and those of the National Grid and their equivalent in France.
Developers of Safety Critical Systems are a breed apart, and they have to be. These are the people that build the systems which run our trains and control nuclear power stations. The underlying rule seems to be “avoid grounds for litigation,” which is a rather crude way of summarising what is a complex systems development field. There is very little room for error – the blasé, “bugs are inevitable” attitude which appears to pervade the rest of the IT industry is replaced by huge attention to detail at every stage of the development process. During design, systems are considered not just in terms of when things go right, but also in terms of effects when things go wrong. An interesting fact, often overlooked, is that many systems have a safety critical element and would benefit from adopting some of these considerations.
If the Eurotunnel systems experts have decided that the tunnel should close, they will have done so following a detailed safety assessment of the risks and their knock-on effects. Most Y2K work is now complete or nearing completion, but much of it has been based on the risk to the systems involved. Managers of systems which are not considered “safety critical” would do well, even at this late stage, to consider the ramifications of their systems going wrong before they blithely let them run over into the new millennium.
(First published 11 May 1999)
Time is running out for Iridium
1999-05-11
A few weeks ago I wrote about the uncertain fortunes of Iridium, the satellite phone company. Trouble is, time is running out. The satellites only have a space life of five years, following which they must be replaced: Iridium’s business model is based on making enough money to justify the continuous relaunch of new orbiters to replace the old.
Iridium need to identify new markets for their services, as mentioned before. But if they take too much time doing this, they will generate insufficient funds to sustain their phenomenally expensive infrastructure. Leave it too long and the whole lot might come tumbling down, in more ways than one.
(First published 11 May 1999)
Arranged Marriages to rule eSpace
1999-05-11
There has been a proliferation of recent announcements about company tie-ups, for example Microsoft and AT&T, IBM and C&W, and HP, SAP and Qwest. As can be seen, these alliances are cross-sector, normally involving a hardware company, a software company and a communications company. As the alliances have been formed to take advantage of the expected trends in business use of technology, they are a fair indication of the vendors’ opinions on what is likely to happen over the next couple of years.
There are several “givens” here. First, that a significant amount of business will be conducted using the internet as a medium. This will be both business-to-consumer, via a Web browser, and business-to-business, for example using email as a transport for inter-business information such as purchase requests and invoices. Second, that computer systems in different companies will communicate with each other directly, avoiding the need for human interaction and speeding up the supply chain.
The question is, what are we Europeans doing about it? Recent surveys, such as the poll by PhoCusWright of major European airlines, indicate that we Europeans are behind the curve. Now we have a choice here. Either the future of business is on the Web or it is not. We at Bloor believe it is and there is substantial evidence to support the fact that businesses which do not embrace new technologies will be at a disadvantage, in terms of both efficiency and cost. In the UK, consider LIFFE, which has only been able to turn around its fortunes by endorsing what it previously rejected. LIFFE’s near-demise was blamed on complacency; others will claim ignorance. In either case, though, for some businesses that do not take on board the internet (together with the new opportunities it offers), the result will be the same.
I know, I know. Here we are again, harping on about eCommerce. A question for you then – are we exaggerating the situation? Or are you well aware of the threats and opportunities posed by the Web? How are you planning to take advantage of these “new media”? We’d love to hear your feedback.
(First published 11 May 1999)
Gigabit over copper? Never say never!
1999-05-12
Broadcom has just announced a new chip which, it is said, enables Ethernet data traffic on copper (and that’s the clever part) at 1,000 megabits, or one gigabit, per second. A number of people in the industry said it couldn’t be done, but it has. It was only a matter of time, really – just as modem speeds have far exceeded the 300 bits per second “maximum” originally mooted, so the potential of copper wire has been pushed way beyond initial expectations. This is good news for all those organisations which have flood wired their offices with Cat.5 Unshielded Twisted Pair – their investment (and their topology) is protected.
The question is – do we still need fibre? Probably not to the desktop, even for high-bandwidth requirements such as streamed video, and maybe not in the machine room either. For example, Fibre Channel Arbitrated Loop boasts transmission speeds of 800 megabits per second and is being pushed as the plumbing for the storage area network, a topology which separates storage devices onto their own token-based network. Tokens can slow things down, so a number of storage manufacturers are proposing switched FC-AL, which enables better use of the bandwidth. But why bother with fibre at all, when the same job can be done over existing copper, only faster?
One thing is for sure. The IT manager’s job can be made easier if designing a network architecture is based around network partitioning for performance, as opposed to partitioning by device type. If all devices share the same network protocol, for example using the Ethernet 10/100/1000 family, then this becomes a real possibility.
(First published 12 May 1999)
Linux on trial
1999-05-12
What is the Linux phenomenon? Repeated announcements (including two on Silicon.com today) show more and more companies, vendors and end users are adopting this freeware package. What are the reasons? And isn’t this mass migration to a seemingly unsupported piece of software a little dangerous?
Different camps have different reasons for adopting Linux. For some, particularly IT-literate end-users, Linux is a free training bed – it runs Unix commands, comes with free compilers and a host of other software that are an ideal entry point to IT. Vendor adoption is two-way – some are “listening to their customers’ needs,” as they say, but others are porting to Linux merely as a marketing ploy – after all, porting from Unix to Linux is relatively straightforward. Businesses really are using Linux – a number have built Linux-based systems as mail servers or as a cheap entry to the Web.
This is where things are getting interesting. A vast number of people are taking a look at Linux for their non-mission-critical needs, and as they do so they are discovering that the rumours are true about Linux stability. Common sense is coming into play, for example how much support do end-users really get for commercial operating systems? As with The X-Files, Linux users note, the support is out there. Techies from the Unix world (who, admittedly, are a little biased) note the fact that a number of essential facilities on that platform, such as the X Window System, were also freeware – that is, public domain software, which is entirely different to freeware. Honest. In other words, Linux is becoming acceptable. And as this happens people are starting to trust Linux for their non-trivial applications. Some of this trend is unavoidable, as Web interfaces and email servers move from being “windows to the world” to being eCommerce gateways. But IT managers are becoming more and more comfortable with the cheeky chappie operating system that was downloaded by the placement student.
There are still plenty of things missing from the Linux jigsaw – in particular, there is still only limited application support and multiprocessor capability still has some problems. These facilities will come, and in the meantime there are plenty of other opportunities for IT managers to see for themselves the benefits of running a secure, stable, free operating system.
(First published 12 May 1999)
£50M for NHS IT – Plan wisely, spend wisely
1999-05-13
It is sometimes too easy to be critical about government spending announcements, so let’s not be for a moment. The money, if it is new, should be welcomed. Trouble is (and I’m still not being critical), IT projects tend to expand to fill the amount of money allocated to them, and hence should be prioritised at a higher level. Whilst it is difficult to see how the critics can say “it is not enough,” it is also difficult at this stage to know whether it is adequate.
Rather than considering spend, at this stage, the NHS should be defining IT strategy at the highest level. This will enable the appropriate spending decisions to be made based on real priorities. The strategic objectives should share the same values as the NHS, namely to provide an integrated, national, accessible infrastructure which makes the most of modern technologies such as the Internet. In fact, it would be quite stunning if such a strategy did not exist already.
(First published 13 May 1999)
See the future – read my palm
1999-05-13
The announcement by 3Com at Networld+Interop, concerning the use of PalmPilots with a wireless Internet connection to access network management information, just might be the start of something really big. That’s really big, and not just for network management staff.
Sure, such facilities (and 3Com do not have the monopoly on this) will be of great benefit to network operators, particularly those working on distributed networks across multiple buildings. Currently these poor individuals have to rely on a remote terminal or, worse, have to wait until they get back to the control room to update the records, only to find that a second fault has occurred back where they have just come from. Maybe I exaggerate – pagers are in common use these days, but a handheld wireless terminal would be a real boon.
3Com have expertise in both handheld devices and wireless networking, and are using both to enable a whole new set of applications, the potential of which is exciting to say the least. It is not hard to think of markets other than network management for a wireless, Internet-enabled PDA, from warehousing to shopping – markets which 3Com, with other vendors hot on their heels, will be keen to exploit.
(First published 13 May 1999)
Cable, digital, set top box. Cable, digital, set top box
1999-05-13
This is becoming a bit of a mantra. More digital TV alliances have been made over the last day, this time between Microsoft and C&W, and AOL with a whole bunch of hardware and digital TV providers. At the risk of stating the obvious, things are hotting up.
Real soon now, as the saying goes, digital TV is going to get popular and all those predictions analysts have been making for years will come true. At least, that’s the assumption being made by the giants and the little people amongst the vendors (and they’re putting billions of dollars against this horse). The Internet really, finally, will leave the domain of the propellerhead and enter the mass market. Your grandmother really will be sending email and checking bus times on the browser.
Fortunately, contrary to previous expectations, it seems (as reported on silicon.com) that Europe is catching up with the US in terms of internet access. At least, business executives over here are using the Web nearly as much as over there. More European users mean more European internet entrepreneurs. So maybe, just maybe, when the dam does break, we won’t find ourselves entirely dependent on stateside services when we should quite reasonably be depending on our own.
(First published 13 May 1999)
New laws for the online community
1999-05-14
An interesting element of the Richard Tomlinson situation is the perception that the internet is somehow beyond the law. There are two factors to this. The first is that the internet, being global, spans international boundaries and hence leaves room for activities which are illegal in some countries to be conducted remotely from other countries. This is not dissimilar to the current situation in online gambling. The second factor is that the new facilities provided by the internet are causing new situations which, though immoral or antisocial in theory, do not have any corresponding legislation to make them illegal in practice.
The one thing we can say is that this situation will change. Cases have already been successfully brought under the terms of the “long arm” laws being implemented in some US states, particularly over copyright infringement. The UK government, amongst others it is assumed, has noted the need for international legislation to restrict the misuse of the internet. Of course, the situation is still in a state of flux but things are firming up – it is now possible to see, at least, that global agreements are required to enable some national laws to be enacted across international boundaries.
Without taking a view on Richard Tomlinson’s position, it is possible to see that he has been exploiting weaknesses in international law exposed by the internet. His case is high profile – there are plenty of others. The sooner these weaknesses have been dealt with, taking into account the international conventions to protect the individual and with the necessary agreement of the international community as a whole, the better.
(First published 14 May 1999)
Adrenalin keeping the online hermits going
1999-05-14
Apparently, the four individuals living “by the net alone” for four days are having the time of their lives. Some are being more successful than others at catering for their basic needs (one poor chap is still clothed in pyjamas and socks), but all are enjoying themselves.
What is the internet about? It is about getting access to vast quantities of information, a lot of it obsolete, irrelevant or wrong. It is about being bombarded by emails from friends, foes and people you have never heard of. It is about goods and services being provided online, with varying levels of service competence. Most of all, as all these examples attest, it is quite simply a new communications medium – and we all love to communicate.
(First published 14 May 1999)
Microsoft Open Source is missing the point
1999-05-14
It is reported that Microsoft is still considering opening its source code, ostensibly as a response to the “threat” from Linux. However, there are some gaping differences between the two camps’ concepts of open source.
Steve Ballmer, Microsoft president, gave a very clear perspective on Microsoft’s position – that some parts of the Microsoft source code could be published or licensed to enable applications to be built more quickly. In other words, if you can see what the code is doing, you can understand it better and use it more wisely.
The approach to development of Linux could not be more different. Here, the Linux source is released in its entirety for development, so that anyone who wants to modify the code can do so. Changes and extensions to the code are fed back and accepted (or otherwise) in a subsequent release. In other words, there is an open software development process in which the ability to read the code is only one element.
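As a minimal sketch of that loop, using the standard diff and patch tools of the day – file names and contents here are invented for illustration, not taken from any real kernel source:

```shell
# Hypothetical example of the open-source feedback loop: modify, diff, submit, apply.
mkdir -p linux-demo && cd linux-demo
printf 'int max_users = 64;\n'  > sched.c          # the source as released
printf 'int max_users = 256;\n' > sched.c.mine     # our local modification
diff -u sched.c sched.c.mine > sched.patch || true # diff exits non-zero when files differ
cat sched.patch                                    # this unified diff is what gets mailed in
patch -p0 sched.c < sched.patch                    # the maintainer applies it for the next release
cat sched.c                                        # the released source now carries the change
```

Whether the change makes it into the next release is, of course, the maintainer’s call – that editorial step is exactly the “accepted (or otherwise)” part of the process.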
The thought of allowing others to actually modify Microsoft code still seems a little alien to Mr Ballmer, who is promoting only a very restricted definition of the “open” concept. Whilst the Microsoft move might seem relevant to the strategists, it is unlikely to cut much ice with the developers on the ground.
(First published 14 May 1999)
IBM goes in Deep
1999-05-28
On the 24th May, IBM announced it was spending $29 million on an initiative it is calling the Deep Computing Institute. Deep computing is the flipside of pervasive computing – just as we expect that computer-ettes will be embedded in everything that can take a power cable (and probably a few things that can’t), so will there be the megacomputers which push the limits of technology to solve computational problems currently way beyond our reach.
As with most research initiatives in the past, there is unlikely to be one big benefit. Rather, there will be lots of smaller benefits – new technologies which can be built into existing products, new standards, approaches and architectures. Either way IBM should be applauded for investing in programmes which will be of benefit to the whole IT community, not just IBM.
(First published 28 May 1999)
Bizness as usual for Microsoft
1999-05-28
Yesterday in Paris, Microsoft relaunched its eCommerce strategy under the banner “eCommerce for all”. Apart from presenting the strategy from a European perspective, nobody expected much difference from the first launch, two months ago in San Francisco. However, things move awfully fast in this business, and it was clear that the strategy was significantly firmer than when first announced.
The building blocks were the same – Microsoft Commerce Server, BizTalk Server and MSN. Commerce Server is “websites with transaction management” – an essential element of setting up an eBusiness site. BizTalk Server is the XML interpreter and integration layer, to enable business applications to communicate across the Web. MSN, back in its third (or is that fourth?) incarnation, is now an eCommerce portal.
What has changed? For a start, the marketing may be the same but the technology appears a lot more advanced. It is possible to start believing some of the dates in the roadmap - currently product betas are expected in the summer. Real issues are being addressed, like how Exchange and BizTalk Server will work together, for example to deal with eCommerce transactions via email. Another was a shift of emphasis – Microsoft will be “stewarding,” rather than controlling, the standards effort required to define the XML schemas for vertical markets such as Finance, Retail and Healthcare. This gentler approach is to be welcomed – whilst it is recognised that big companies such as Microsoft and SUN have a major role to play in setting standards, it is a relief to hear that they are prepared to give up those standards to more appropriate bodies, when the time is right. Let’s face it, Microsoft’s reputation in this arena is less than pristine.
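To make the schema point concrete, here is a hypothetical fragment of the kind of message a vertical-market XML schema would standardise – every element name below is invented for illustration, not taken from any actual BizTalk schema:

```xml
<!-- Hypothetical purchase order; all element names are invented for illustration -->
<PurchaseOrder>
  <OrderNumber>PO-1999-0042</OrderNumber>
  <Buyer>Acme Widgets Ltd</Buyer>
  <Item>
    <PartNumber>W-17</PartNumber>
    <Quantity>500</Quantity>
    <UnitPrice currency="GBP">1.25</UnitPrice>
  </Item>
</PurchaseOrder>
```

The value of an agreed schema is that the sender’s and receiver’s applications can both parse the same message without bilateral negotiation; an integration layer of the BizTalk Server variety would then map between the shared schema and each company’s internal formats.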
Still disappointing were the rollout plans for MSN commerce in Europe. Expected early next year, these leave us Europeans significantly behind if we want to take advantage of MSN’s services. Having said that, maybe it’s not such a bad thing – it might leave time for a non-US company to step into the breach.
(First published 28 May 1999)
TANSTAAFL, but we'll throw in the starter
1999-05-28
The article on Silicon.com about operating systems being given away for free has an element of truth. The fact is, all companies (IT and otherwise) dealing with technology are in an experimental phase. Given the fact that everything has a cost, the question to be answered is “what should be given for free to gain subscribers, and what should be paid for?” There will be different answers for different sectors. In retail, information and value-add service will probably be the major “free” differentiators. In the mobile world, provision of handsets has always been at an initial loss to the telco – this is a model being adopted by BSkyB and OnDigital.
In the software and services world, things will be different again. At the moment nobody is really sure what will be given for free and what will be paid for. Maybe operating systems will be given away but like everything else they will have to be paid for somehow, perhaps through paid subscriptions to the services they enable. It is likely that there will never be one answer to this. Rather, business models will be chosen on a tactical basis depending on what people are prepared to pay for and what the competition is doing.
(First published 28 May 1999)
June 1999
This section contains posts from June 1999.
Baan’s battles not over yet
1999-06-07
Baan hasn’t made a profit for a while but battles on, determined to be one of next year’s success stories (it seems to have written off this year) with a new approach and new product lines. Baan is keen to shed the image of being an ERP company and is moving into e-commerce, focusing on the business to business sector. In our opinion this is a wise move - ERP as a market-leading technology is dead, becoming just one of many applications which need to be linked and co-ordinated via the web to optimise and automate business relationships and transactions. The only mistake seems to be that Baan still believe in lock-in. “The majority [of organisations] … preferred to buy all its products from a single vendor,” claimed Baan. This said, the “majority” have no choice but to run a variety of products from different vendors. It is likely that companies which promote integration and interoperability, such as Concur, will steal a march on single-solution vendors. Baan had better watch its flanks.
(First published 7 June 1999)
The science of appliances
1999-06-07
You heard it here first - the up-and-coming buzzword is “appliance”. So what is it? A variety of sources seem to claim the term for their own – even in their relatively closed community, storage manufacturers differ in the use of the term, ranging from single-function filer to fully-fledged messaging server. Recent estimates suggest that sales of “information appliances” will outstrip PC sales by 2003. Pretty interesting, considering that a definition for the term is, to say the least, vague.
To better understand what is going on, it is worth looking from the two viewpoints that manufacturers seem to have adopted. First off are the systems manufacturers, who are used to seeing large metal cabinets which often contain whirring motors and fans. From this perspective, the appliance is seen to possess the characteristics of a washing machine – large, white, floor-mounted, single function, easy to program. The second viewpoint is that of the handheld device manufacturer. To these companies, appliances are very small, ergonomic and multifunctional, looking much like a mobile phone.
At the end of the day, it’s all semantics – one person’s appliance is another’s device. All so-called appliances of the future should possess certain characteristics to distinguish them from the systems of the past. They should be as low-maintenance as a fridge, as appropriately designed as a Magimix, as easy to use as a phone and as easy to replace as a digital watch. We shouldn’t care what’s going on inside the box, what operating system is being run or what the processor bus speed is. Wishful thinking? Well, we’ll just have to see.
(First published 7 June 1999)
Microsoft at the Perly Gates
1999-06-07
So, Microsoft are to “extend Perl to take advantage of Windows capabilities”. Well, well. We saw it with Java, we saw it with HTML – in both cases, the company were quite rightly accused of taking an open standard and extending it to lock users into the Windows platform. It may be that their approach to Perl is different – goodness knows that Windows lacks a scripting language of any merit, so the logic of porting Perl to Windows is clear.
Can we give Microsoft the benefit of the doubt? Put it this way. If Microsoft are to be trusted with yet another standard, namely XML, then they need to have quit their old “standardisation” tricks. Perl can be the test case: we watch their every move with eager anticipation.
(First published 7 June 1999)
Office 2000 is out – but is it in?
1999-06-09
Microsoft has launched Office 2000; 15 million copies have already been sold and it’s still only 1999. The product reviews are excellent, thus far – the individual applications that make up the O2K family are being judged, once again, as the best in their field. Microsoft appear to have it all sewn up. Good news for Microsoft? Well, that depends on your perspective.
First off – Office 2000 continues a line of partially integrated productivity applications based on Microsoft Office 4.2. When Office 4.2 came out in 1993, magazines ran comparisons with WordPerfect Office and Lotus Smartsuite, usually drawing the conclusion that any of the above would do the job, but Microsoft had the edge. Organisations bought office suites by the truckload, with the brand usually dependent on the incumbent word processor. New releases of suites from the different suppliers brought with them new features, enhancements and bug fixes, each time making the new versions a more attractive purchase than the old - when the suite wars reached a crescendo, organisations “rationalised” to Microsoft and it was all over bar the shouting. The point is this. In 1994 we had a very capable set of office applications which gave us the facilities to do 95% of the job (bar, admittedly, the functionality now available in Outlook). It is now five years later and, whatever the functionality now available, it is questionable whether we need it at all. A word processor is, was and always has been a word processor.
It is clear that Microsoft recognise that if they want to ship O2K, they need a new spin. As stated on Silicon.com, “the software is intended to promote a ‘web work style’ for ‘knowledge workers’ who need to share information.” Reading between the lines, and looking at Microsoft’s other announcements yesterday, the plan is to sell Office 2000 as an element of facilities enabling collaborative working, for example document sharing. Fair enough – aside from the (still perceived as niche) document management market, this is an area which remains largely untapped.
Microsoft are guaranteed a packet of O2K sales, at least in the short term. But in the longer term the jury is still out. As a purchaser, if you see Office 2000 as an application set, the question is, do you need to upgrade from your existing, functionally rich suite? And if the suite is to be used as part of a collaborative working environment, is your organisation ready to exploit the potential of your “knowledge workers”? Aye, there’s the rub. Microsoft are stepping into new territory with Office 2000, and it is anyone’s guess what the future holds for the suite.
(First published 9 June 1999)
Amazon’s laws of the Jungle
1999-06-09
Amazon is getting bullish. No, okay, it has always been bullish. Amazon is getting very big and bullish. The still-to-make-a-profit company has made a spate of investments recently, and not just in its traditional over-the-web, commodity markets. It has also commenced selling downloadable music and it has even set up an auction site. It is putting itself in direct competition with the big guns both online and offline, and from eBay to Wal-Mart the competition is rising to the threat. Not content with local battles, Amazon.co.uk has put itself head to head with W.H.Smith Online, with both now offering 50% off books from their own bestseller lists.
Amazon has grown fast and clearly it must act fast if it wants to stay ahead in the game. It is a big game, however, and there is a danger that Amazon spreads itself too thin or makes too many enemies. In the litigation-friendly US, we only have to look at what is currently happening to Microsoft to know how competition between retailers can be a battle in both the storefront and the courtroom. In the words of Sun Tzu, “do not fight a war if you have to fight on two or more fronts”. In Amazon’s case, the number of fronts is mounting and there are more than two dimensions to the fight.
(First published 9 June 1999)
Information Overload is a Distraction
1999-06-09
A recent survey by Pitney Bowes found that the majority of workers are interrupted by communications technology every ten minutes. For “communications technology” read voice mail, email, telephones and mobiles, pagers and (probably) the beep from the calendar manager. This survey confirms the results of other surveys, such as the Avery survey which found that information overload through use of these new media was preventing the job from getting done.
We are, sadly, still at the stage of being driven by these technologies as opposed to driving them. It has been generally accepted that communications technologies hold the key to efficient, knowledge sharing organisations but our use of these technologies still borders on the primitive, as like Pavlov’s dogs we respond to the sound of a bell by switching off from what we were doing. There will be a huge market in the future for intelligent information-sharing and communications facilities, and the sooner it comes, the better.
(First published 9 June 1999)
The Internet is going, to Worms that is
1999-06-12
We might still be waiting for Cyberspace, but as far as electronic phages are concerned, Science Fiction has well and truly become hard fact. William Gibson was credited for the former but the concept of viruses and worms first reached popular fiction ten years earlier, in the novel “The Shockwave Rider” by John Brunner. Quote: “It could take days to kill a worm like that, and sometimes weeks”. This situation has been all too apparent recently, starting with Melissa which has spearheaded a whole new wave of viruses such as the most recent (and the most damaging), Worm.ExploreZip.
Viruses have had several forms since their early days. Most commonly, they used to “infect” executables and would propagate themselves when the executables were run (as well as offloading their ‘payload’, with whatever consequences that might have had). Recent examples have been as Word or Excel Macros, and now we have a new breed which exploits weaknesses in our email. There will be software patches and virus recognition updates, all well and good. Or is it?
The problem lies in prediction. Solutions to viruses come after the event, and in the case of Worm.ExploreZip, it is likely that several weeks will pass before it is eradicated. How does it work? It appears as an email attachment, and if it is run, it accesses your Exchange or Outlook address book to forward itself to anyone it can find. Was this problem predictable? No… well, possibly, someone just might have been able to work out a scenario such as this, preferably before the hackers did.
This isn’t meant as a Microsoft-bash. It is quite likely that similar problems exist in Lotus and Netscape products, to name but two. Users of non-Microsoft messaging, for once, can be thankful that they weren’t using the de facto products. However, lessons should be learned for future generations of all products which use the Internet as a communications medium. For example, as applications providers XML-enable their software, they should be asking the question “What’s the worst possible thing that could happen to someone using my software?” Risk scenarios should be developed, evaluated and countered. We can be sure that, if they’re not doing this, plenty of others will be. Nobody’s directly to blame - to quote John Brunner, “The medium is the mess-up.” We can be sure that if vendors aren’t developing such scenarios, plenty of hackers will be. All the same, vendors should be doing everything they can now, rather than leaving us overdependent on antivirus companies who have no choice but to provide solutions to problems only after they have occurred.
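As a minimal sketch of the kind of risk scenario vendors could evaluate up front: Worm.ExploreZip travelled as an executable attachment, so even a crude check on attachment names would flag it before a user ran it. The sketch below is in modern Python; the function name and extension list are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative sketch: flag inbound attachments whose extension marks them
# as directly executable, before the user is offered the chance to run them.
# The extension list is an assumption for demonstration purposes only.
RISKY_EXTENSIONS = {".exe", ".com", ".bat", ".vbs", ".scr"}

def is_risky_attachment(filename: str) -> bool:
    """Return True if the attachment's extension marks it as executable."""
    return any(filename.lower().endswith(ext) for ext in RISKY_EXTENSIONS)

# Worm.ExploreZip arrived as an .exe attachment - a check like this would
# have quarantined it, while an ordinary document passes through.
print(is_risky_attachment("zipped_files.exe"))  # True
print(is_risky_attachment("minutes.doc"))       # False
```

A check this crude is no substitute for signature updates, but it illustrates the point: the risky behaviour was foreseeable from the product’s design, before any particular worm existed.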
(First published 12 June 1999)
Commerce Electronique? Ja, bitte!
1999-06-12
Us anglophones might have a hard time with this but it’s a fact. English is not the most spoken language on the planet. And, despite any impressions we may have to the contrary, it won’t be the most used language on the Internet either.
Let’s face it, we’ve seen this before. There was a day when we thought that all software in the world was written in English and, besides, anyone that was IT-literate was probably bright enough to speak two languages. When we went to export our products to the continent, to our chagrin we found that the potential user base would much prefer the software in the local language, and that competitor products already existed, written by our overseas friends for their own local market. Grudgingly, we had to admit that IT was not a UK/US phenomenon. And neither, shockingly enough, is the Internet.
By 2005, it is estimated, 57% of all Internet sites will be in a foreign language. Unlike currency, where convergence seems to be the trend, language use is diverging. This is very interesting. Yes, there is a global pool of customers out there, but they are expecting to be communicated with on their own terms (they are the customer, after all). It is likely that successful sites will be multilingual – indeed, if you’ve looked at sites in other countries, you may have noticed that a large number of sites exist in at least two languages. UK companies wishing to exploit this huge market of consumers, take note.
(First published 12 June 1999)
Sony gets Music Shops Online
1999-06-12
In the prediction game, it is reassuring to see some projections come true. The effect of the Internet on retail, for example, has been a subject of much speculation: in lunchtime discussions, most will have suggestions as to what might work, but few if any have the monopoly on the future.
Sony’s announcement today is a case in point. In association with Red Dot, Sony are to distribute their back catalogue to high street retailers via a dedicated ATM network. Customers will be able to select an album and have it pressed while they wait. For customers this is excellent news, as retailers will be able to carry a much broader range of albums; for retailers and suppliers, the future is equally bright because neither will have to manufacture or carry unnecessary stock. For these reasons, it is exceedingly likely that other media firms, such as Polygram, will follow suit.
What will the music retailer become? Well, we are back to speculation again, but it is likely that the physical store-front will stick around for a few years yet. Music retailers will still be dependent on getting the punters through the door before they can make a sale, and in the absence of pure product the focus will likely move to added value services and branding. Megastores with coffee bars, for example, are already in a position to serve as trendy places where customers can gather to discuss and possibly buy what’s on offer. We expect to see more of this but what do you think? Let’s meet up at Virgin, do lunch and talk about it.
(First published 12 June 1999)
July 1999
This section contains posts from July 1999.
Linux fails at the OS Corral. So What?
1999-07-01
Recent benchmarks have shown Linux running less quickly than Windows NT. This may seem significant, but it is not. Linux is open source, developed lovingly by the doves of the IT world. It is a product, it might be argued, and it does pose significant competition to comparable products such as Windows NT. It does not, however, behave as a product. It does not stand up to cost-benefit analysis, because it does not have a financial cost. Different models must be applied.
Die-hards of the Linux community would say that Linux is successful because of the huge numbers of idealistic hackers who are prepared to make something beautiful. They’re wrong. Linux is moving mainstream because the vendors see it as an opportunity to make money. In any business, there are some things that are given away for free and others which are paid for. With the technological advances that are currently being made, new business models have been invented which take advantage of this. But it was ever thus. The huge advantage of Linux to vendors is that it is already free – they can give something away for nothing, at minimal cost to themselves.
Interestingly, it was Microsoft that demonstrated how powerful the “give-it-away” model could be, with its free distribution of Internet Explorer giving it the lion’s share of the browser market from a standing start. Explorer is still free, and will probably remain so – a small, calculated cost by the Microsoft camp. Did anyone run performance benchmarks comparing Microsoft and Netscape? Yes. Did anyone care about the results? No. Businesses were more interested in protecting their existing investment, getting something for nothing or trying to avoid lock-in.
The mistake we can make with open source is to think that one day it will have to be paid for. Some things are free because they oil the wheels. Service in a retailer, local newspapers and meals on aeroplanes are free at source, with their real costs being factored into other goods or services. Open source is an enabler, as it encourages the broad acceptance of technology, with all the money-generating spin-off opportunities that it brings. And this is why open source will remain.
(First published 1 July 1999)
The sad truth about ADSL
1999-07-01
If the rumours are true, BT is on the point of rolling out ADSL services in the UK. ADSL presents both a problem and an opportunity to BT, the company which pretty much “owns” the link between the local exchange and the household. BT has come under fire in the past for not going the (literal) extra mile for its consumer customers: suitable pricing models for home use of ISDN, for example, were a case of too little, too late. ADSL provides a new opportunity for BT to demonstrate that it can provide cost-effective, high-bandwidth services to SoHo and consumer alike.
Issues remain, of course. Firstly, BT are unlikely to be in a position to manage a national migration to ADSL in a short timeframe. For a start, there is an infrastructure rollout cost for BT, to enable its local exchanges. Also, and maybe most importantly, there is a benefit tradeoff. ADSL is only appropriate for customers requiring high-bandwidth digital reception – this translates to internet users prepared to pay the cost of the access device and the subscription. Although we all think fast access to the internet would be a good thing, how much are we really prepared to pay for access from the home? Until there is a critical mass of customers, BT will be loath to roll out the service in a particular area.
Secondly, there is the bottleneck issue. The internet acts a lot like a hard disk and controller – it has seek time, access speed and throughput as its major characteristics. Surfers can employ ADSL to increase local throughput but can do little about access speed, for example to a poorly connected site. Seek time is still largely down to knowing what we are looking for or how to find it. When designing a computer system, a lot of effort is made to “balance the bottleneck” – that is, to ensure all components work together so that no one component is slowing down all the others or is underutilised. The internet still has a long way to go before all of its bottlenecks are balanced.
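The hard disk analogy can be made concrete with some back-of-envelope arithmetic, sketched here in modern Python (the figures are illustrative assumptions, not measured values): total fetch time is access delay plus size divided by throughput, so a fat local pipe buys little when the remote site is slow to respond.

```python
# Back-of-envelope "balance the bottleneck" model: time to fetch a page is
# the access delay (the seek-time analogue) plus transfer time (throughput).
def fetch_time(size_kb: float, delay_s: float, bandwidth_kbps: float) -> float:
    """Seconds to fetch size_kb kilobytes given access delay and link speed."""
    return delay_s + size_kb * 8 / bandwidth_kbps

# A 10 KB page over a 56k modem with a responsive server...
modem = fetch_time(10, delay_s=0.2, bandwidth_kbps=56)    # roughly 1.6 s
# ...versus the same page over ADSL to a poorly connected site.
adsl = fetch_time(10, delay_s=2.0, bandwidth_kbps=512)    # roughly 2.2 s
print(round(modem, 2), round(adsl, 2))
```

Despite roughly ten times the local bandwidth, the second fetch is slower: the bottleneck has simply moved elsewhere in the chain.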
BT will inevitably roll out ADSL, but probably more slowly than the commentators would like, even though the full benefits of fast internet access will not be realised for a good while. What does this mean for the consumer? ADSL should be considered in the round, as one of a number of protocols for data transmission on telephone wire. It will not be the only mechanism, just as the telephone line will not be the only medium – to get the fullest picture we must think of wire, cable, fibre, satellite and wireless and the variety of protocols they support. The biggest issue will ultimately be cost and there is currently such a diverse set of pricing models that it is difficult to gauge the relative benefits of each. This will change, but in the meantime ADSL is more likely to begin with a whimper than a bang.
(First published 1 July 1999)
More e-Baa than e-Business
1999-07-01
A group of farmers in Wales are setting up Direct Welsh Lamb, a web site which sells lamb direct to the consumer. Whatever your philosophy on food, you will never see a clearer example of how eCommerce stands to benefit both producer and consumer.
Direct Welsh Lamb was set up as a direct result of the farmers being ever more squeezed by the corporate world of supermarkets. Rather than go out of business, the farmers looked for alternative ways of selling and hit on the Internet. There are several factors which may prevent the success of this venture, not least that the company itself may not be able to cope with the demand that it creates, but where it goes many other businesses are following. Interestingly, research and experience are showing that the “personal customer experience” is far greater than that of traditional shopping. In this case, for example, consumers are in direct, daily (if required) contact with the producers of the goods. There are no middlemen and so it is difficult to sidestep blame; also, the competition is only a click away – responsibility heightens and hence so does quality of service.
Supermarkets will continue to have a role, but this role is set to change dramatically over the next ten years. Once the channel of communication has been re-opened between producer and consumer, it will prove very difficult to close.
(First published 1 July 1999)
Local loop open for competition
1999-07-04
OFTel, the UK Telecommunications watchdog, has announced that BT’s monopoly on the local loop has come to an end. It is still unclear what this will mean in terms of implementation. It seems that two options exist: either BT can sublet its existing services to outside companies (in much the same way as electricity can be bought from a number of companies, despite the provider remaining fixed); alternatively, other organisations can provide alternative backbones to which the local loop equipment can connect. As with ADSL, one thing that is likely is that any rollout of new services will be slow, at least in the short term. The longer term is more positive, particularly for non-metropolitan areas which are more likely to benefit from new services than they would have been if everything were left to BT.
Lest we forget, though, there is a pretender in the wings. Wireless communications do not currently support the necessary bandwidth for data communications. With a new standard ratified by the ITU earlier this year, however, it is only a matter of time before the necessary equipment and infrastructure are put in place. Add ingredients such as Bluetooth, a local wireless standard which will transform home networking, to the pot and things become very interesting indeed. If the fixed line operators do not get their act together over the next couple of years, others will be happy to step in and take the business.
(First published 4 July 1999)
Yahoo bows to people pressure
1999-07-04
After a not-so-long-running dispute, Yahoo has finally agreed to the demands of its Geocities users. Following Yahoo’s acquisition of Geocities, the organisation attempted to enforce its own terms of service on the many thousands of Geocities homestead owners. It may have expected a backlash, but it certainly didn’t anticipate the scale.
Considering Yahoo’s Web heritage, this lack of foresight on Yahoo’s part is remarkable. First off, Geocities existed (and, indeed, continues to do so) on the back of its user community. To fail to consult the community which is the very reason for its existence might be considered folly. To attempt to take away from this community what could easily be considered an online right starts to defy belief. Secondly, let’s think about what Yahoo were attempting – to gain copyright over the individual Geocities Websites. It is difficult to imagine a scenario in which the mass of site owners would agree to this.
It should be noted that the principles behind the two organisations – Geocities and Yahoo – are very different. Yahoo is a portal organisation, letting its users access services at the cost of enduring banner ads and spam. Geocities is a hosting organisation, providing free facilities to its users with the cost (in advertising) borne by individuals browsing the sites. Clearly, one does not easily map the Geocities model to that of Yahoo. A fact that Yahoo is learning to its cost.
Yahoo is still not giving in, not fully – it has left the door open for more battles in the future. Ultimately, this is a lesson in net power – one which all organisations intending to exploit the Internet would do well to take note of.
(First published 4 July 1999)
XML – the common language is half the battle
1999-07-04
Two recent news items point to both the crisis and the opportunity offered by XML, the eXtensible Markup Language which looks set to be the standard language for business-to-business communications. First, IBM and Rational have announced that they are hooking up their software development environments via an XML bridge. Second, Oracle has announced an XML interface to its 8i database management system. Both show the importance of metalanguage standards, or the lack of them.
The first article is an example of what is possible if such standards are in place. The Object Management Group has ratified both the interchange mechanism and the metalanguage definition, in the form of XMI and UML respectively. The net result is that two applications know how to communicate with each other and they also understand what is being communicated.
How different things look in other sectors, as indicated by Oracle’s modus operandi. XML contains tags to represent specific types of data. Oracle have taken a low-level approach, providing an engine which can be configured to recognise the tags in XML content, enabling it to remain standards-independent. This differs from Microsoft’s approach, which has involved setting up a standards body to define the tags. Complicated? Yes it is. The net result of the absence of standards has been a free-for-all, with different vendors trying to impose their own standards by strength or stealth (and Oracle and Microsoft are just two of many players in this game).
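A toy example, sketched in modern Python, makes the tag problem clear (the tag vocabularies here are invented for illustration): the XML syntax itself is standard, but without an agreed vocabulary two senders can describe the same fact with different tags, and a receiving engine must be configured for each.

```python
import xml.etree.ElementTree as ET

# The same order quantity, expressed in two invented vendor vocabularies.
vendor_a = "<Order><Qty>10</Qty></Order>"
vendor_b = "<purchase><quantity>10</quantity></purchase>"

def quantity(doc: str, tag: str) -> int:
    """Extract the quantity, given the tag name this sender happens to use."""
    return int(ET.fromstring(doc).findtext(tag))

# Both parse cleanly - the free-for-all is in the tag names, not the syntax.
print(quantity(vendor_a, "Qty"), quantity(vendor_b, "quantity"))  # 10 10
```

The parser never fails; the cost lands on whoever must maintain the mapping between vocabularies, which is exactly where a ratified metalanguage standard would help.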
Eventually, one side will win and the other will lose. What a shame. As the software development community has already demonstrated, the benefits to all players of having standards in place are great. The stakes are high, admittedly, but it is likely that the costs of the battle will far outweigh the benefits, to all but a few.
(First published 4 July 1999)
CA SANITI illustrates the problem with management frameworks
1999-07-14
Enterprise management frameworks, since their inception in the early nineties, have been seen as the “ultimate answer” to the questions posed by managing the applications, hardware and network devices that make up our corporate infrastructures. This would be true if the world stood still, but unfortunately it does not.
CA announced yesterday an initiative for the management of Storage Area Networks, known as SANITI or SAN integrated technology initiative. Essentially this equates to enabling the management of SAN hardware by CA’s UniCenter management framework.
Once again, it would appear that devices are rolling onto the market while framework vendors are struggling to keep up. No date has been given for the release of SANITI, though hardware vendors such as Compaq and Hewlett Packard are already shipping their SAN devices. Network managers must also plan time to reconfigure their management architectures to take account of the new devices.
Unfortunately, it will ever be thus. Increasing use of the Internet, wireless devices and the arrival of appliances to the market at an ever increasing rate are coupling together to transform our infrastructure topologies and make obsolete our modes of operation. For example, uptime is only one criterion to be judged for an E-Commerce site – throughput, response time and transaction management are also issues to be addressed against a backdrop of new types of application such as application servers.
Investors in enterprise management software should recognise that such frameworks can only solve part of the problem, and that deployment is not a one-shot operation. Best of breed approaches can help, but go against the “one size fits all” policy which goes towards justifying the inflated price of frameworks. Frameworks can help, particularly to manage the integrity of the core infrastructure. However they cannot be expected to keep up with all developments in the domains which they are supposed to be managing.
(First published 14 July 1999)
Jiro clarifies SUN’s open intentions
1999-07-14
Yesterday, SUN announced the launch of its Jiro initiative, which aims to provide a cross-platform standard for the management of storage devices. So what’s it all about?
Essentially, Jiro refers to the provision of a set of Java APIs for storage management such that applications can access storage resources directly, regardless of what operating system is being run. Jiro does not provide storage management functionality (such as that from Veritas), nor does it affect the underlying storage protocols or architectures. Rather, it acts as a middle layer, enabling applications to work with resources without having to reinvent code.
Jiro runs on the Java Virtual Machine, and this is how it retains its machine independence. Jiro also depends on Jini, SUN’s service discovery initiative, for a number of services including discovery of storage resources.
It would probably be a little harsh to accuse SUN of sinister motives, but J-based software standards would seem to be moving into domains far removed from the original ambitions of Java. The JVM is essentially a stripped down, free operating system: through JVM extensions such as Jini and Jiro, plus application support through the Enterprise Java Bean specification, Java is starting to encroach on areas which are worth a lot of money to certain players. Such facilities, it could be argued, reduce the need for an operating system to a device-specific microkernel – maybe they don’t yet, but there is every reason to believe that they will in the future.
In fact, the net result of Java-based “open standards” is to encourage commoditisation of operating system software and hence to reduce its relevance. This, clearly, increases focus on both the developed applications and the underlying hardware. SUN’s market is clearly the latter; it is fascinating to see how many applications vendors are also buying into this vision. Consider this: Microsoft were sued for modifying Java to encourage use of their own operating system. But can Microsoft sue a standards body (which is the ultimate home of the Java-based family) for promoting “standards” which directly encroach on its own territory? Of course it can’t. While we welcome the arrival of cross-platform resource management from the application development standpoint, we would do well to remember that there is more than one way to rule the IT world, and SUN’s aspirations are no different to anyone else’s.
(First published 14 July 1999)
IPV6 picks up momentum at last
1999-07-14
Five years ago, the doomsayers were predicting that before long, the number of available IP addresses would reduce to a trickle. Back then the solution was said to lie in IP version 6, which replaces the 32-bit addressing scheme with a 128-bit scheme.
The prediction of the demise of IP version 4 – the current standard for internetworking – never quite went away. The forecasted end-date is now 2003, and the solution, as ever, is still IP version 6. V6 also comes with increased capability for reliable routing and performance, and is touted by some as the enabler for the convergence of broadband and packet switched networks. Now, with the launch of the IPV6 forum, the standard for the next generation of IP networking seems ready to lumber into reality.
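The scale of the jump from 32-bit to 128-bit addressing is easy to demonstrate. The sketch below uses Python’s standard ipaddress module, a modern convenience rather than anything that was available in 1999.

```python
import ipaddress

# One address from each scheme.
v4 = ipaddress.ip_address("192.0.2.1")      # IP version 4, 32-bit
v6 = ipaddress.ip_address("2001:db8::1")    # IP version 6, 128-bit

print(v4.version, 2 ** 32)    # 4, roughly 4.3 billion addresses in total
print(v6.version, 2 ** 128)   # 6, roughly 3.4e38 addresses in total
```

The V6 space is 2^96 times larger than the V4 space, which is why the exhaustion forecasts disappear entirely under the new scheme, even if the dotted-decimal simplicity of V4 addresses does not survive the move.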
The future is not necessarily going to be reached without some pain, as hardware devices currently running IP V4 will have to be changed over to the new protocol. While commentators predicting a worse catastrophe than Y2K might be overstating things (we have not, after all, seen the full might of the Y2K monster as yet) it is clear that a lot of work will need to be done. One reason why the IPV6 changeover should cause less pain than Y2K is that it is clear what needs to be done. All devices currently running an IPV4 protocol stack will need to run an IP V6 stack. If they cannot, then they will need to be replaced otherwise they will not work. This is a simpler equation than attempting to identify the compliance of code to avoid data corruption: either devices will work, or they will not. What is less clear are the approaches to be adopted by device manufacturers for the V6 changeover. While most punters can see the benefits of the new addressing scheme (despite the incomprehensible addresses relative to the simplicity of the current standard), they are the ones having to bear most of the pain for what is, after all, an initiative from the device manufacturers.
We expect to see significant support from vendors to enable their customers to make the jump to “the new internet”. In particular, devices from now non-existent companies (merged away by the larger players) should be supported to the same extent as newer devices. Some pain (and associated cost) is inevitable, but like Y2K, so is the change. And to think we were all going to take a breather after the millennium celebrations were over.
The IPV6 forum can be found at www.IPV6forum.com
(First published 14 July 1999)
PC-on-chip spells end for consumer hardware costs
1999-07-16
It’s all coming together. A number of announcements, from NatSemi, Microworkz, Tiny, AOL, MSN and SEGA to name but a few, are indicators of convergence in (or on) the consumer market. Let’s look at them.
- National Semiconductor unveils the Geode, which combines PC functions onto a single chip. Proposed market: set top boxes, mobile phones and handheld devices.
- Microworkz plans to distribute its sub-£200 iToaster PC through Dixons.
- Tiny Computers in the UK start giving away PCs to subscribers to their newly-launched telecommunications service.
- AOL and MSN offer free PCs to users taking out a three year subscription to their online services.
- SEGA announce that their Dreamcast console will have Internet access.
What does all this mean? NatSemi’s announcement equates to reducing the cost of PC-type devices to well below the current level. Microworkz’ iToaster is a set-top-box lookalike PC which demonstrates the way the technology can be taken – interestingly, the iToaster runs Linux, mainly (it is understood) to avoid the increased cost that would be incurred if a commercial operating system was being used. Tiny are giving away a £300 PC, which is as good an indicator as any of the maximum amount of material which can be “given away”. AOL and MSN demonstrate the head-to-head battles that are going to be fought by the internet giants – indeed are already being fought, exchanging hardware for mindshare. SEGA’s announcement is indicative of the scale of the convergence, as the games console becomes as powerful as a PC (if not more) and sports similar functionality.
There are plenty more announcements from the past few months which could be quoted, all indicating service providers giving stuff away, hardware getting cheaper (and therefore easier to give) and different devices – consoles, receivers and computers – becoming one and the same. To extrapolate only slightly, we can see consumer hardware giveaways becoming the norm over the next 18 months. This appears to be good news for the materialistic amongst us, but only because the situation is being judged relative to the recent past where such hardware would cost well over a grand. In fact, consumers have short memories and will come to expect such deals.
What is likely to cause more of a stir are the shockwaves expected to reflect back onto the industry itself. Hardware manufacturers are already seeing their profits cut to the bone as they compete for volume sales. Convergence suggests that previously ring-fenced groups of manufacturers will find themselves pitted against each other. Their markets will be less and less directed at the consumer; instead, they will be selling to the service providers. Here, also, we expect the current shakeup to continue. Resellers could be the worst hit, unless they reorient their channels – peripheral devices such as printers, imaging devices and storage have a longer life, but pure PC sales will be damaged significantly.
The bottom line is that the computer and service infrastructure will, in the future, have little more than gadget value to the consumer market. People will purchase what they consume and need – food, clothes, bricks and mortar, the latest film. IT will act as an enabler for these purchases and will take its cut from their suppliers, but the time of buying PCs for their own sake is virtually over.
(First published 16 July 1999)
8i reasons to go Oracle
1999-07-16
When is a database not a database? When it is also an application server, a development environment, a search tool and a Java Virtual Machine. It would appear that Oracle customers find the bolt-on features a side issue to the database’s core competence – that of scalable, performant information management. It is difficult to gauge whether the movers and shakers within the company are too worried – indeed, as Oracle’s market share continues to rise, the success of the new features (or otherwise) would seem to be of only minor importance. For now.
Oracle’s “additional features” are far more than that. A three-layer architecture comprises the user interface, the application layer and the database, and Oracle 8i has something to offer in each layer. Considering Oracle’s moves to strip down the operating system (with Raw Iron, now known as its Information Appliance), it is possible to imagine an IT development and operational system consisting of no more than hardware and Oracle software. 8i is not a database with add-ons; it is an application environment with an information store.
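That three-layer separation can be caricatured in a few lines of code. This is purely a sketch of the separation of concerns; the class names and data are invented for illustration and have nothing to do with Oracle’s actual components:

```python
# A toy rendering of the three-layer split: user interface, application
# logic and information store. All names and data here are invented for
# illustration - they are not Oracle 8i components.
class InformationStore:
    """Bottom layer: holds and serves the data."""
    def __init__(self):
        self._rows = {"8i": "application environment with an information store"}

    def fetch(self, key):
        return self._rows.get(key)

class ApplicationLayer:
    """Middle layer: business logic, which knows nothing of presentation."""
    def __init__(self, store):
        self.store = store

    def describe(self, key):
        value = self.store.fetch(key)
        return f"{key}: {value}" if value else f"{key}: not found"

def user_interface(app, key):
    """Top layer: presentation only - it never touches storage directly."""
    print(app.describe(key))

user_interface(ApplicationLayer(InformationStore()), "8i")
```

The point of the exercise is that each layer talks only to the one below it, which is exactly the property that lets a vendor claim territory in any (or every) layer.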
Hey, they’re all doing it. Just two days ago we showed how SUN’s Java-based strategy was putting the squeeze on both the OS and the application architecture. SUN haven’t produced JQL yet, but the betting is that they are thinking about it.
So what are we seeing here? The Internet has resulted in new concepts, paradigms and other architectural clichés upon which applications may be built. Things are stabilising fast – the thin-client, three-layer architecture, centralised on the server, is becoming the de facto approach. Vendors from all disciplines are keen to move into this newly formed space and make it their own. The standardised, stable world of application servers is no bad thing for systems implementors but will inevitably spell a horrible death for many vendors who are currently converging into the same space. As with the hardware market, the increasingly stable architectural boundaries become the battle lines, and it is a battle from which there will emerge few winners.
(First published 16 July 1999)
Apache grows up – and up
1999-07-16
It is not only Microsoft’s admission that it runs Apache on some of its MSN Web servers that indicates the growing maturity of the server software. The loosely knit federation of Apache developers have formed the Apache Software Foundation, a not-for-profit corporation which aims to continue Apache’s development in its open source vein.
Apache’s maturity is bolstered by its global take-up. According to a recent survey, Apache is now the software in use by over half of the world’s Web servers. This trend is set to continue, as the formation of the ASF will be seen as good news for the corporate world. It can be a difficult decision for an IS manager: to use the industry-standard (and free) Apache, or to follow a traditional model of buying licensed software which comes with a nominal offer of support, services and upgrades. The reality is that the Apache community is as good at supporting its software as any corporation, but making the leap to public domain software is seen, in some circles, as a bit like voting for the Green party.
Where will it end? There is always the chance that, once Apache has captured the majority of the Web server market, it starts to charge for services. Considering the formation of the organisation (and even if it keeps its promise to avoid profit), it seems difficult to believe that the ASF will be able to avoid making money in the future. Any such revenues are unlikely to come through direct sales of the web server software itself, more through leveraging the Apache brand. For example, the organisation could sell added-value services or, most likely, new software packages that build on the Apache brand. Indeed, if Apache were floated, it would be likely to make millions on the strength of its brand alone.
(First published 16 July 1999)
Thor poses Open Source threat to Windows 2000
1999-07-31
Timpanogas Research Group (TRG) announced yesterday its plans for an open source, NDS-compliant directory for Windows NT, Windows 2000 and Linux. This is significant, and not only because of the direct impacts it could have on the directory-based world. TRG have a reputation for building bridges between NetWare, Windows and Linux. Despite a “professional” relationship with Novell, TRG have a history of developing products which can be used to migrate away from Novell. It would seem that, this time, the company is turning its sights towards Microsoft.
Clearly the move to implement NDS-compliant, open source directory software is a blow to Microsoft. The first versions of Windows 2000 will not contain a fully functional version of the much-touted (and awaited) Active Directory software which, despite being announced over two years ago, will still miss the initial Windows 2000 release deadline. Administrators wanting to make full use of the directory functionality will have to wait for the Janus upgrade to the operating system, unless they are prepared to consider the increasingly attractive option of adopting (or continuing with) an NDS-based solution.
Companies such as TRG are not the only ones to benefit from Microsoft’s absence. Novell itself is focusing its efforts on its directory products (as reported in IT-Analysis.com earlier in the month). In partnership with Compaq, Novell have also released a “directory appliance”: it is products such as these which will make moving to an NDS-based solution increasingly attractive.
So what of the indirect impacts? Announcements such as these are indicators of where the open source community is moving, away from its traditional home ground in the academic community. TRG are a commercial organisation whose motives are driven by profit, loss and a desire to get at the big guys. In giving away software, TRG are increasing the chances of its adoption but also are launching an attack on the profit margins of Novell and Microsoft. This trend is unsustainable but may be seen as a David-against-Goliath tactic in the short term: as the giants battle it out, they would do well to watch what the little people are doing.
(First published 31 July 1999)
Sun's rising Star to catalyse device revolution
1999-07-31
Things have moved on since the first reports of SUN Microsystems’ acquisition of Star Division. It surely is too much to ask that the giant took our advice, but it would appear that our suggestion, that this was far more than a battle for office automation market share, was uncannily accurate.
In the article where we first commented on the acquisition, we put it in the context of Microsoft’s decision to move its policies from a PC-centric position to "great software… on any device". Fair enough - the advent of better bandwidth, both inside and outside the corporation, coupled with the continued drive to push down TCO (promoting re-centralisation onto enterprise servers), inevitably points to a thin client future. Microsoft, as a software company, wants to reposition itself to take advantage of this new landscape, by selling operating systems and office software which support the thin client paradigm. Recent announcements concerning Office 2000, coupled with the existence of cut-down versions of the applications for handheld devices, reflect this.
And then, here came SUN Microsystems. SUN is a hardware vendor which has consistently used third party software to promote the sales of its platforms (remember the encyclopedia-like Catalyst catalogue?). Then, as now, SUN's main interest is in selling hardware: whilst conceding that it has lost the battle for the desktop (a battle which, as indicated by Microsoft's changing mission, is going away), SUN is concentrating its efforts on server hardware. As a calculated move to increase server sales, SUN are to give the StarOffice software away - a move which (considering its high quality) may well make significant inroads into Microsoft's market share, just as Linux and Apache have before it.
Clearly, whilst Scott McNealy will undoubtedly derive pleasure from cocking a snook at Microsoft's share of the desktop software market, this is not SUN's main target. The future backdrop of IT, it is well recognised, is the Internet, which will be the communications medium for delivering all kinds of services to consumer and business alike. We would expect companies such as Yahoo and AOL to capitalise quickly on the free availability of the server-based version of the Star software, known as StarPortal, which will enable them to roll out word processing, spreadsheet and other facilities in addition to their existing email, scheduling and information services. This opens the door to use of set top boxes, handheld devices and even mobile phones for more than just web browsing: SUN stands to make money from the increases in centralised processing power that this new model suggests.
There are still some shortfalls in this approach. The issue of consumer bandwidth is still not resolved: even given StarOffice's modular approach, there will be an inevitable download overhead each time a new or updated module is required. Security is of equal concern: recent events concerning Microsoft's Hotmail service (see companion article) are unlikely to encourage individuals to give service providers access to their documents and spreadsheets. Even given these short term weaknesses, SUN's announcement is clearly a major step on the way to mass market adoption of a device-accessible, internet-centric IT world.
Oh and, by the way, I wrote this article using StarOffice, downloaded this morning as I was deciding what to write. One day all software will be this way.
(First published 31 July 1999)
Hotmail hacked, but who's to blame?
1999-07-31
Yes, it is highly embarrassing to Microsoft that their Hotmail service, currently with over 40 million subscribers, was broken into by a bunch of hackers. The fall-out from the incident is, as usual, indicative of Microsoft's reputation, both concerning the security of the Microsoft software that Hotmail runs and the inability of MSN to inform its customers that anything had happened.
The wider issues of this incident are just as telling. First, given the fact that the computer industry has been in existence for at least thirty years, the question must be asked why the so-called global class systems are still open to attack. This boils down, unfortunately, to the global IT community's acceptance of mediocrity: the theory and practice of computer security is well understood, but its implementation is often seen as non-core functionality. Products have come to market too soon, have been rolled out without sufficient attention to security issues and have been left to evolve into the complex morass we now know as "infrastructure". Even today, new versions of operating system software are released without being properly tested. We all know this but we foolishly accept it as the norm.
In many organisations, clearly including Microsoft, security has become a firefighting exercise: it would appear that concerted attacks are likely to succeed. The Internet has not caused the situation but has exacerbated it, by giving outsiders access to unsuitably configured corporate systems and by providing novices with access to a wealth of up-to-date information about security weaknesses and how to exploit them. All this serves only to undermine Internet confidence - with reason, users are unwilling to risk even their personal credit card details by exposing them to the Web. Despite this, however, companies continue to over-expose themselves by ripping a hole through to the Internet from their private networks, using inadequate security software or poorly configured firewalls as weak protection.
Let's face it, we knew it was a risk to trust Microsoft with our email, just as it would be a risk to trust any organisation. Maybe one day we will be able to sue for breach of trust - this might be locking the door after the horse has bolted, but could force corporations large and small to treat security with the importance it clearly merits.
(First published 31 July 1999)
September 1999
This section contains posts from September 1999.
OS vendors on the Merced starting grid
1999-09-01
We have said repeatedly in the past that Merced is likely a test bed technology, enabling the potential of the Intel 64-bit architecture to be established and paving the way for McKinley. However this is unlikely to prevent the inevitable operating system wars.
Merced silicon had barely left the fabrication plants before the rumour-mill started up. Currently the lines are being drawn between a 64-bit version of Linux and a Windows 2000 beta for IA-64. HP-UX and Monterey, both Unix flavours, are next on the list.
What can we expect to see? The most likely event is that Intel’s own release plan will determine the release plans for the operating systems. Microsoft has already announced, for example, that its 64-bit operating system will be available at the same time as Merced, “before the end of 2000.” There are likely to be similar announcements for HP-UX, Monterey and even Linux, which will need a marketing push to position itself effectively against the Microsoft and Monterey spinning machine.
Then, there will be the inevitable betas and delays. It is Microsoft who are the most disadvantaged here, as Monterey, HP-UX and Linux development for IA-64 is already underway but Microsoft will not be able to fully commit resources until after the release of Windows 2000. Sure, developers will be working on 64-bit prototypes, but Gates and Ballmer are unlikely to risk W2K delays by reallocating developers elsewhere. Finally, we will see the inevitable benchmarks and bakeoffs, with each vendor, consortium or community demonstrating beyond reasonable doubt that their OS is the best.
There are several factors which will decide the success of the operating systems in the IA-64 space. The first is time to market: any delays will severely jeopardise the market’s interest. The second is application availability, which is less and less of an issue even for Linux. The third is down to marketing and acceptability. Here Microsoft is very strong but Linux is carving its own credibility. HP-UX is likely to have support, not least from the Hewlett Packard user community, but it is unlikely to be “the one”. The riskiest player has to be Monterey, which despite strong backing from the vendor community, may well end up as the Unix also-ran.
Overall, then, interesting times ahead. Obviously for us commentators, who capitalise on the gossip that this situation will generate. Also, though, for the vendor community, who even now may be staking the future of the company on which of the operating systems is likely to dominate. Finally, the user organisations have a clear decision to make, but will probably prefer to wait until the dust settles before linking their IT strategies to any specific operating system.
(First published 1 September 1999)
Cisco keeps its focus with IBM deal
1999-09-01
Cisco’s drive towards world networking dominance took a step forward yesterday when it purchased the best parts of IBM’s routing and switching technologies and patents, for an estimated $2 billion. Cisco has rarely been out of the news over recent weeks due to its acquisitions of technology companies – this latest move should be seen within the context of its overall strategy.
The networking market, it is generally agreed, is in a state of flux. The keyword is convergence – the advent of technologies such as Voice over IP means that IT systems, networking equipment suppliers and telecommunications companies are constantly treading on each other’s toes while they define their position in the market. Companies such as Lucent, Alcatel and Nortel Networks see the key to be to amalgamate networking and telecomms. Following some hefty buys, they are now remodelling their internal organisations, not without some difficulty as the two markets have traditionally followed very different business models. Cisco has, for the moment, adopted a different tack, namely to concentrate on acquiring networking companies and technologies which help them to grow into the telecomms space. This more organic approach is likely to cause less internal grief but means that they might lag behind when trying to meet the needs of “pure” telecomms.
All of the companies recognise the value of services. This is a trend which can be seen across the technology industry, with vendors from application development tool suppliers to networking equipment manufacturers keen to capitalise on what is clearly still a growing market. IBM’s own model is clearly a good one to follow: IBM have grown their services arm within the last three years into a multi-billion dollar business. Again, Cisco has chosen not to go with the networking flock: while Nortel Networks and Lucent have their own services arms, Cisco are partnering with companies such as IBM and KPMG.
Cisco’s policy is a sensible one, namely “do what we do, and do it well” whilst growing the company into new technology areas and using partners for non-core business. It remains to be seen whether this strategy will be world-beating, but the alternative can result in a company growing too fast without being able to assimilate its constituent parts (as demonstrated by Alcatel a couple of years back). Interestingly, it is a policy which is also paying off for IBM: the deal with Cisco enables it to focus on its core businesses, namely as a computer manufacturer and a services provider.
(First published 1 September 1999)
Mac is back with new Pentium-killers
1999-09-01
What an incredible turn-around. Over the past year Apple’s share price has more than doubled, quarterly earnings figures continue to beat analysts’ expectations and with yesterday’s announcements of new machines, the seismic turnaround in the company’s fortunes looks set to continue. Why?
New machines were unveiled yesterday, claimed to be up to three times faster than the fastest available Pentium III. The availability of such powerful, well-designed PCs will help convince the marketplace of the viability of the products both as a platform for now and an investment for the future. This is, however, only part of the reason for Apple’s return from the ashes.
If the PC industry was about technology alone, we would all have Macs. They were the first to the market with an intuitive interface, affordable networking and WYSIWYG (remember that?) output. Unfortunately, big Mac screwed up big time: it was slow to recognise the arrival of the PC (big and clunky though they were at the time) and it was slower to react when PC clone and component manufacturers pushed the prices down. PCs became affordable to the mass market, and Macs did not. Wintel won and Jobs lost. End of story.
But not quite. There was a time, not so long ago, where it was assumed that we only needed one sort of computer. It was called a PC, it ran Microsoft software and, well, that was that. Microsoft on Intel was considered the safe bet. This time is still here, but it is nearing an end. Everyone recognises it: look at Microsoft’s new corporate tag line “great software… for any device” – no mention of PCs. The device world is nearly upon us, with the one-size-fits-all PC being just one of many device types, including thin clients, mobile phones, PDAs, toasters, fridges… and Apple computers. Yes, that’s it. It is becoming OK to not have a PC. This is the real reason for Apple’s turnaround: the software is available, the performance is there and the company will be around in a year or so, so why not get an Apple computer?
Of course, Steve Jobs is to be credited for not letting the company lie on its back with its legs in the air. Bill Gates injected new life into Apple by committing to make Microsoft applications available for MacOS. There are lots of reasons why Mac is back, but the most fundamental reason of all remains, because it can be.
(First published 1 September 1999)
System I/O fabric to transform computing architecture model
1999-09-02
Peace has been declared, it was announced yesterday, between the vendors involved in defining a replacement for the PCI bus. In fact, so much attention was paid to the avoidance of battle, that the potential of the proposed technology was missed by most.
The keyword is “fabric”. System I/O is a fabric-based architecture which enables any device to communicate with any other device. Hardware switching is used to enable far higher throughput than previously available, from 2.5Gb to 6Gb per second – this compares with a measly 132MB per second from PCI. This is all well and good, suggesting that PC devices such as processors, network cards and memory will no longer be limited by bus bandwidth (particularly as the 6Gb top end will probably be extended in the future). Two points have largely been missed, however.
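Taking the claimed figures at face value – and assuming, as the mixed units in contemporary coverage suggest, that the PCI figure is megabytes per second while the System I/O figures are gigabits per second – a back-of-envelope comparison looks like this:

```python
# Rough throughput comparison; unit assumptions as noted above
# (PCI in megabytes/s, System I/O link speeds in gigabits/s).
PCI_MB_PER_S = 132.0  # classic 32-bit, 33MHz PCI bus

for gigabits in (2.5, 6.0):
    mb_per_s = gigabits * 1000 / 8  # gigabits/s -> megabytes/s
    print(f"{mb_per_s:.0f}MB/s, roughly {mb_per_s / PCI_MB_PER_S:.1f}x PCI")
```

In other words, even the bottom of the quoted range delivers over twice PCI's throughput, and the top end over five times – before any future extension of the 6Gb ceiling.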
The first is that the system vendors are not the only organisations experimenting with switched fabrics. One notable group is the storage community, for whom the switched fibre architecture is key to the SAN strategies of most companies (including StorageTek, Compaq and HP). What interests the storage vendors is the ability to link switches (by fibre) over long distances: currently, the maximum distances between switches are touted as between 10km and 50km. How might the System I/O fabric benefit from long-distance interconnect? For example, a single, conceptual device, running a single operating system, could in fact be a pair of mirrored devices, each with its own processor, disk and memory. The processor, disk, memory and graphics card become devices in their own right which, with the inclusion of a switched fabric with a fibre interconnect, could be physically positioned anywhere in a 50km radius and configured dynamically to make best use of the resources at a given time.
The impacts go both ways, so…what impact will the System I/O fabric have on the SAN community? Do we need both types of fabric? How will they interact?
A second point is the inclusion of IP version 6 in the System I/O specification. This effectively removes the need to consider system components as part of the same machine, virtual or otherwise. The Internet is currently IPv4, but IPv6 will be included in most network-ready devices in the future. The potential is clear – that the system bus replacement, System I/O, becomes an integral part of the Internet infrastructure. How this will happen is still a matter for speculation, but the potential this has, of moving us into a device-based world in which bandwidth is a forgotten issue, is clear. Given that System I/O is likely to be one of the most important enabling technologies we have seen, it is only to be hoped that the new consortium can come up with a snappier name.
(First published 2 September 1999)
Intel and IBM bring processing to the network
1999-09-02
Capitalising on the flurry of interest generated by their recent demonstrations of Linux and Microsoft operating systems running on real 64-bit silicon, Intel have announced a new foray into the networking chip market. And snapping at their heels are IBM, fresh from selling off a bunch of network technologies to Cisco. It makes sense.
Intel’s “niche market” is the Internet, where it aims to address the spiralling need for new services and network functionality. Existing products are very much about the lower layers of the stack, performing network-level functions such as routing, compression and bandwidth management. Higher level functionality such as management, voice/video processing, multimedia and graphics handling is usually addressed by operating system or application software. This separation is for two reasons: the first is that “this is how it has always been done” and the second (which derives from the first) is that network processors are currently limited in what they can do, being mainly ASICs in which the functionality is hard coded on the chip.
What Intel have proposed is a hybrid device which is optimised to work with networking hardware. The device is known as the Internet Exchange Processor or IXP. At its core is a StrongARM processor which is surrounded by a number of RISC-based microcontrollers. This means that the devices can be programmed to support both lower level and higher level services: the obvious advantage is that the programs can be more easily upgraded to support new technologies and service requirements. Also, a significant load is taken off the core processing capability of the machine. The IXP is designed to be linked with other IXPs, promising terabit throughput.
IBM have rushed out a similar announcement, clearly caught on the hop by Intel. How wrong we were yesterday, to say IBM were defocusing from network devices! Instead, it seems, the sale of technology to Cisco allows them to concentrate on network processors. This move is interesting: having lost the CPU battle to Intel, IBM see a fresh battlefield for the two companies, on which the winner is by no means decided.
The move by Intel and IBM makes a lot of sense. Technologically, the addition of serious processor capability to networking devices is long overdue. Both companies have the skills to integrate processor technologies with network technologies, plus both have the R&D budget to pay for their development. From a business perspective, Intel are not directly competing with the network equipment manufacturers, at least not yet, and as a result have forged alliances with a number of vendors. In this case, it is chip suppliers such as MMC who are the most likely to suffer as the Ciscos and Newbridges jump on what may well turn out to be another “Intel Inside” bandwagon. IBM are already building network devices such as routers and switches so the chances of partnership are limited. Winners and losers aside, the development is likely to be of benefit to end users, not least through the old stalwart of increased functionality and performance with reduced costs.
(First published 2 September 1999)
Solaris tightens Merced screws on Microsoft
1999-09-02
In our recent story about the Intel 64-bit architecture, we neglected to mention one operating system vendor which has, until recently, politely avoided showing its true colours. Highly unusual behaviour for SUN, which has now announced a Win-beating timetable for its flagship operating system, Solaris.
SUN’s plans are for a release of Solaris for the IA-64 simulator “some time this autumn” – this is based on the assumption that most development will be carried out on the simulator for the immediate future. As we discussed, it is highly unlikely that Microsoft will have much to show before the middle of next year.
Microsoft have had a lot of wind in their sales [sic] in the past, which has caused end users to wait for Microsoft releases rather than use available products from other vendors. This was true of applications software, when users had already invested in Microsoft operating systems. It seems most unlikely that the same will hold true for operating system software. The other differentiator was price. Inevitably, OS pricing will be driven down, both by the commodity nature of the IA-64 OS market and by the fact that at least one, namely Linux, will be available for free. Application availability was never really an issue on the server.
It appears that the biggest software company in the world may, already, have missed the boat.
(First published 2 September 1999)
Microsoft faces Hobson’s choice for W2K
1999-09-03
Microsoft’s most recent announcements concerning Windows 2000 release date are unsurprising. Coupled with the delays to the most recent developer release, the Seattle company are insisting that the product is on schedule. What is most telling is that nobody seems to mind.
As usual, the media campaign is a combination of promotion and expectation management. Microsoft are emphasising quality, and seem to have separated the concepts of “final code” and “released product”. But let’s add some common sense here. Rule One of software projects: you cannot compress development time. Time and again, less experienced managers try to turn eight months into four, only to find that, like a tensed spring, the required time jumps back to the initial estimates. It’s true, try it. The only thing that can reduce a development schedule is reduced functionality or, heaven forbid, reduced testing.
So, what are Microsoft’s options? The first is to remove some functionality from the product, but that would be a marketing disaster. It has been done before, of course (hence the telling reputation of Service Pack 1). The second is to risk releasing incompletely tested software onto the market. Of course, this is a strategy known to most vendors, not just Microsoft: release 1 of any product is, generally, avoided by all but the most desperate. The third is to claim a code freeze at the end of the year, then continue development until shortly before the release date. This, employing the term “code chill” rather than code freeze, is again common practice but risks reducing testing time (again) and removes any slack that may be left from the system.
Microsoft are faced with a stark set of choices. The baying wolves of the IT industry will jump on the slightest misfortune of the software giant, so it will be treading very carefully over the next few months. But what of the end user? The advice is clear, and would be the same for any new product of this scale. Take your time. Evaluate the product (and others) in a realistic environment, making sure that it works with existing and planned architectures and configurations of other software. Listen carefully to the experiences of early adopters, and learn from them. Fortunately, the signs are there that end users are adopting exactly this approach. They are “waiting and seeing,” or planning upgrades only when their existing configurations will no longer be adequate. Y2K is, funnily enough, a helpful distraction: no-one is likely to rush into anything in January 2000.
It is extremely likely that Windows 2000 arrives late, incomplete and buggy (but don’t get me wrong – I’d be delighted for this to be proved untrue). Whilst Microsoft will struggle, yet again, with the blows to its reputation that this will cause, the vast majority of potential users will be unaffected.
(First published 3 September 1999)
From Kidneys to Iridium shares – what will eBay auction next?
1999-09-03
eBay kidneys are back in the news again. A similar tale, of an auction being pulled as it broke the rules, concerned Iridium. Such stories, hoaxed or unlawful, serve to illustrate the huge potential of auctions to sell, well, anything.
You will probably have seen the newest eBay kidney story by now (there was another attempt to sell a kidney in May, which was also pulled). As the “product” was removed from the site it had achieved a value of $5.75M. Pretty good for a body part, but probably worth it to someone. As for the Iridium tale, apparently Iridium had already filed for Chapter 11 bankruptcy prior to an opportunist’s attempt to sell their shareholding on the auction site. The Securities and Exchange Commission, not amused, requested that the offer be withdrawn: this was agreed, despite the “shares” no longer falling within the SEC’s jurisdiction. Of course, auctioning shares may leave you flummoxed – what is the stock market itself, if not an auction house? This serves to illustrate that people are prepared to auction, well, absolutely anything at all. And this is the crucial point.
As discussed in eRoad, a Vision Series report from Bloor Research, the auction is one of two mechanisms emerging to replace existing markets. The other is the pure market, which exists to sell products with a clearly defined value. Where the value of a product cannot be predefined, it must be turned over to the market in which its value is defined at the point of sale: the auction. These two markets are not mutually exclusive: sites such as \link{www.lastminute.com, Last Minute} are profiting from the fact that the prices of travel products fluctuate wildly as they approach their sell-by date. The success of eBay, and the rushed creation of me-too sites across the globe, are testament to the viability of auctions.
As the two stories above illustrate, however, the legal frameworks which govern auctions are not yet fully geared up. This is a global issue – for example, there is understood to be a thriving black market trade in third world body parts. The Internet may be the great liberator but it also opens the door to misuse on an unprecedented scale. Off-world banking is already being mooted: coordinated action is required before auctions are put out of the reach of even international law.
(First published 3 September 1999)
Open Sky’s wireless PDA future leaves Microsoft out in the cold
1999-09-03
Competition in a market is a good way of encouraging innovation and keeping control of costs. But what do you do if you don’t have any competition? In the Wireless PDA space, 3Com’s answer would appear to be – buy some.
3Com have a share in Open Sky, a joint venture with Aether Systems, Inc. Open Sky’s goal is to provide a US-wide wireless infrastructure to support handheld devices in general, not just the Palm VII. Its two-pronged approach involves, firstly, linking up with existing mobile carriers and, secondly, involving partners to develop modems and software to enable existing devices to talk to the ether. Whilst the initiative remains stateside only, this partnership approach is more likely to result in a European service than 3Com’s existing wireless service, the single-company Palm.net.
The move is interesting because, effectively, it pitches Open Sky against Palm.net. It is difficult to see how Palm.net will compete with Open Sky: the former requires a Palm VII, currently costing nearly three times the price of the cheapest Palm (the newly released Palm IIIe), and it is unlikely that the price of the modem will make up the difference. Palm.net service costs are notoriously high, and likely to be put to shame by Open Sky’s fixed rates. On the eve of its national rollout of Palm.net, 3Com have taken a calculated risk – the greater good is to establish a market sector for the wireless handheld device, and the Palm VII/Palm.net combination cannot do it by itself. For credibility, the market needs to extend to all flavours of handheld, Palm or otherwise. Enter Windows CE.
Here is the fundamental element of 3Com’s risk calculation. It must establish a market, a move which is unlikely to succeed if it is linked to any one flavour of PDA. However it wants to retain its early bird advantage over the Microsoft camp, which makes up the lion’s share of the competition. For this reason, Open Sky has announced that it will be supporting Windows CE by autumn of next year. 3Com are relying on the rapid acceptance and uptake of wireless handheld devices, and will do everything they can to encourage the end user to come on board. There’s the rub – the company will even pitch against itself to create and win the market before Microsoft get a look in.
This strategy is fraught with risks. Corporate America is likely to be smarting for a couple of months after Y2K, which could dampen the acceptance of wireless. The success of the Palm was driven by the end user and not the IT department, however – a fact that will have been in 3Com’s calculations. In addition, of course, mobile phone manufacturers are not standing still, and the fruits of the Symbian alliance are expected in the near future. Despite all this, 3Com probably feels it has no choice – it has to steal the initiative, because the risks of not doing so are even greater.
For the moment, from this side of the pond we can only watch. But we watch with interest.
(First published 3 September 1999)
Mainframes lead resurgence in software tools
1999-09-08
It is old news that mainframe companies are experiencing an uplift in their fortunes. As the mainframe re-invents itself as the workhorse of the Web, the drive to competitive advantage is causing vendors to light the fires under the slumbering development tools market.
Let’s take an example. Amdahl’s hardware revenue figures for last year beat expectations by 202%. Quite a turnaround. Now the company says that interest is increasing in its ObjectStar development environment. ObjectStar is an integrated collection of tools for developing mainframe applications – these include screen and GUI builders, a rule-based engine for business logic and wizard-like facilities to support the generation of common functionality. Whilst the product has a reasonable existing customer base, whose licensing revenues have supported the product’s continued viability, efforts to promote ObjectStar were largely killed off eighteen months ago when they failed to generate sufficient new customers. Things had remained flat until now, but both existing customers and new prospects are once again enquiring about the product.
Why should this be? The answer, says Chris James, Marketing Manager for Software and Services at Amdahl, is the drive towards competitive advantage on the Net. This makes sense: it is one thing to use an off the shelf package for internet site development, or to select a mainframe over a smaller server to ensure that the volume of transactions can be handled. However, it is the additional services which can set one Web site above the rest. These services need to be defined and implemented at a rate which the packaged tools could not possibly keep up with: in other words, they require bespoke development. At the same time, eCommerce is driving the requirement for an unprecedented level of integration with external information feeds and back end systems, new and legacy. To deal with this level of complexity, whilst developing added value functionality in an ever-decreasing timeframe, companies are turning to development tools with renewed vigour. Particular interest will inevitably be garnered by enterprise RAD tools such as ObjectStar.
A few words of reality. There does not exist, anywhere on the market today, a fully integrated development environment which takes into account the full spectrum of needs for Web Site development. Areas such as site editorial, management of external components, monitoring of statistics and configuration management are still the domain of a disparate set of tools which remain disconnected from the “core functionality” of application serving and transaction management. This situation will change: the renewed interest in development tools will, in the short term, lead to promotional campaigns for existing products. In the medium term it will also spawn advances in functionality as the suppliers attempt to leap-frog each other to corner the burgeoning market.
In a way, history is repeating itself. There is nothing happening here that did not happen during the CASE “revolution” of the late eighties and early nineties. We will see the market consolidating, silver bullets flying and the inevitable disappointments when they do not solve all ills. We will also see a new generation of software tools which, used correctly, can enable businesses to reap the rewards of the Web.
(First published 8 September 1999)
Nor.Web left on the wayside of the superhighway
1999-09-08
A surprise announcement yesterday revealed Nortel Networks and United Utilities’ intentions to pull out of their Nor.Web joint venture. What does this mean? Well, nothing, probably.
The joint venture existed to develop and promote the Powerline technology, which was much-lauded when first announced a couple of years ago. Several deals were struck, for example with electricity companies in the UK, Germany and Sweden, and interest was gathering in the US. Everything was rosy, then, well, technology just passed Powerline by. ADSL was the straw that broke this camel’s back, but the writing was already on the wall when the ITU ratified the latest bunch of wireless communications standards several months ago. Bandwidth is on the increase by default, as technologies for mainstream bandwidth provision (for example through phone lines and across the ether) continue to develop. Such advances render the development of new technologies to provide additional bandwidth, exploiting niches such as powerlines, largely unnecessary.
Digital Powerline, in any case, was facing an uphill media battle. Fears surrounded the emissions created due to the very high voltages required to transmit the signal over the high tension wires. Although this was not the direct cause of the decision, which was made for cost reasons, a spokesman for United Utilities confirmed that this was a factor.
Nor.Web are currently in discussions with their existing customers as to the options for the future. It is possible that the technology finds a buyer, who may well be able to make money out of it. If this is not possible, the demise of Digital Powerline is unlikely to cause more than a ripple in the Internet pool.
(First published 8 September 1999)
NT Pragmatists strip Microsoft naked
1999-09-08
It’s official – the shiny outer surface of Microsoft’s marketing machine is no longer the basis on which organisations are formulating their IT strategies. Thank goodness for that.
According to a survey by the NT Forum, which showed 40% of respondents already beta testing Windows 2000, only 30% had any plans to upgrade to Windows 2000 in the next twelve months, compared with 80% last year. Do I need to repeat that, or did you catch the significance already? It is easy to speculate on what has caused this apparently startling turnaround. Is it Y2K looming larger than expected in IT managers’ eyes? Is it that the world is to adopt Linux wholesale, no longer needing the costly alternatives? Whilst the first may be a factor, our sources tell us a far simpler story.
Let’s work back from the answer, which may be stated quite simply: there is no one answer. Despite Microsoft’s attempts to the contrary – to tout NT as the only operating system we would ever need – the reality always was, and shall always remain, quite different. Back in 1997, Microsoft missed its chance to kill off Unix once and for all by demonstrating the scalability of NT. The audience were expectant: Wintel had already trounced the pretenders to the desktop throne and was widely expected to do the same to the server. Indeed, a significant number of IT strategies at the time were flying the Windows flag wherever they possibly could, and were waiting patiently for Microsoft to take over the rest of the world. However, in the end NT scalability failed to convince the sceptics, the media and even the end users. This was the beginning of the end for Microsoft’s buy once, use anywhere mantra.
More recently, the Internet and eCommerce have uplifted the fortunes of mainframe and high-end server manufacturers (see the related story on IT-Director.com). The mainframe market, which refused from the outset to just roll over and die, is being reborn. It is not currently a market which gives NT a look in. What is now clear is that we will need mainframes for the highest availability and performance, and we will need mid-range servers for more affordable back-end processing. The scale goes all the way down to embedded software devices such as mobile phones and PDAs. Each has its own specific needs and hence requires the most appropriate operating system support. The dream, held by IT managers and promoted by Microsoft, of a single OS running everywhere, has become sadly jaded. The nine versions touted for Windows 2000 (as reported here) are testament to this fact.
There is no one answer; this is coupled with the fact that it is becoming increasingly difficult to throw away existing technologies and replace them with the newest, best thing. IT managers are becoming masters of the morass – pragmatists for whom the goal is to integrate the best of the old with the best of the new. Upgrading only when necessary, replacing when budgets allow, these folk have evolved to listen more to the needs of their business than the hype of the vendors. Microsoft are seen for what they are: a software company with some good products which can fit into the overall IT portfolio of an organisation. And that’s just the way it should be – the question now is, how long before the Redmond giant accepts its place?
(First published 8 September 1999)
ISPs get tools to help SMEs compete in online economy
1999-09-08
According to Forrester Research, small businesses’ share of the Web, in terms of revenues, is expected to decline from 9 percent this year, to only 6 percent in 2003. Bleak words indeed.
These predictions are understandable considering that the larger firms are more likely to have the resources to get themselves onto the Web. Even then, the inevitable shakeup of retail organisations is likely to spell doom for many small businesses. Is this just Fear, Uncertainty and Doubt about the future? Unlikely, as this is happening already both because of and despite the Internet. At the same time, however, the Web presents an unprecedented opportunity for businesses large and small: it is the great leveller in which all are equal. This is true, at least, for those companies which are online.
Today’s Internet shopper is becoming more discerning. It is not enough for a site to present a long catalogue of products and a telephone number for orders (although this has been a successful model in the past). Customers want sites to be fast and fun, informative and interactive. Such added value is raising the hurdle to newcomers whose first attempts at establishing themselves online are often proving less than successful.
Initiatives are underway to help resolve this. One set of tools, from Bondware, is directed at ISPs. The tools provide the set of functions that the discerning surfer has come to expect, including chat rooms, newsfeeds, member pages, ad management and a host of other facilities. The most attractive thing about the package is the price: at $6000 for the basic license (which allows for 25 commerce sites, or $240 per site), the cost of entry is lowered considerably, enabling service providers to offer a low-cost service to their subscribers.
The advantage of this approach over the competition, for example, Microsoft’s Small Business Server, is that it is designed for ISPs who are the natural focus, in terms of providing both technology and experience, to help SMEs onto the Web. It also promotes continued development and competition, as ISPs can develop the range of services they provide, enabling companies to pick and choose from a number of ISPs. The other advantage is, of course, that it is available now.
(First published 8 September 1999)
Killer App for voice recognition – don’t hold your breath
1999-09-08
IBM’s newest version of ViaVoice signals a change of tactics for their voice recognition software. The product is to have improved recognition, better user-friendliness and additional features including Web surfing facilities. Will this release prompt the acceptance of VR? Unlikely, not yet.
There are three areas which we can see as benefiting from this technology. The first is dictation. Speaking, ultimately, is a different communication technique to writing. Some dictate letters, but most prefer to write them directly. Will this change? Unlikely, particularly in today’s already noisy offices. Imagine the hubbub of everyone talking inanely to their machines. As I write this, I contemplate using a microphone rather than a keyboard: even with 100% recognition, I cannot see the attraction. Maybe that’s just me. The second area is to provide an interface for users with physical disabilities. There is a clear benefit here, to certain users. Even so, neither of these two areas is sufficient to give killer app status to recognition software.
The third area is in commanding the computer. This is where the potential lies, but unfortunately not in its present state. The ability to launch an application, or even surf the Web for that matter, by voice commands is unlikely to be any more than a fad. Voice recognition will sit on the bench as it waits for the technology which will launch it into ubiquity: intelligent agents. Once computers are capable of following natural language commands, then the time will be right for their communication by voice. For computers, read the whole of the online infrastructure. For example, “Take me to Amazon. I want to buy the latest John Grisham novel,” is a sentence I could imagine speaking out loud, either to my computer or directly to my phone.
The issue is one of time. The recognition software quality is almost there but the ability of the IT infrastructure to make sense of our utterances is not, in terms of both indexing the information “out there” in a sensible fashion and providing the resources to exploit it. Neither is it a priority, so it is likely that voice recognition has a wait on its hands before its perfect partner comes along.
(First published 8 September 1999)
Is today the day the doomsayers have their picnic?
1999-09-08
9-9-99. The six-character string used by mainframe programmers decades ago to signify “end of file” is sending shivers down the spines of the same people today. Oh yes, I wrote a few of those programs, they say, dispelling any thoughts that this might be an overhyped media myth. As with the big Y, nobody has been able to predict with certainty what the effects of the 9-9-99 problem will be. Today, the waiting is over.
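The feared failure mode can be sketched in a few lines. This is a hypothetical Python reconstruction of the sentinel convention, not real legacy code: the encoding (date fields written without leading zeros, a run of nines meaning “end of data”) is illustrative, since the column does not specify any one program’s format.

```python
# Hypothetical sketch of the 9-9-99 sentinel problem. Some legacy programs
# wrote date fields without zero-padding and used a run of nines as an
# "end of data" marker in their files.

def encode(day, month, year2):
    """Encode a date the way a careless legacy program might: no zero-padding."""
    return f"{day}{month}{year2}"

SENTINEL = "9999"  # all-nines marker meaning end-of-file in old data files

def process(records):
    """Process (ref, date) pairs, stopping at the sentinel -- the feared bug."""
    out = []
    for ref, date in records:
        if date == SENTINEL:
            break  # a genuine 9 Sep 1999 record halts processing early
        out.append(ref)
    return out

records = [("A", encode(15, 8, 99)),
           ("B", encode(9, 9, 99)),   # 9 September 1999 collides with the sentinel
           ("C", encode(1, 10, 99))]
print(process(records))  # ['A'] -- records B and C are silently lost
```

The point of the sketch is that the failure is silent: nothing crashes, the program simply believes the file ended two records early.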
Certainly the issue is being taken seriously in some quarters. According to \link{http://www.silicon.com,Silicon.com} for example, the UK coastguard has chartered extra tugs to bail out ships whose ageing navigational systems are caught short.
As industry watchers, we shall be following today’s events with interest, not only because they will be of concern in themselves. The realities of the 9-9-99 problem are the best thing we have to predict the events at the end of the year. We are sceptical that nothing will go wrong, indeed we are amongst the more pessimistic observers. Today’s events will enable us to gauge what might go wrong, what the impacts might be and what needs to happen to resolve the effects. Like a fish-eye lens, Y2K will be a distorted amplification of today’s events, large or small. Let’s just hope they are small.
(First published 8 September 1999)
Dell gets the Acquisition bug
1999-09-09
Dell made its first acquisition yesterday, which clearly marks the beginning of something big. Dell’s purchase of ConvergeNet for $340M is the first step on a long road for Dell, which will see the company take on the storage market and, very likely, come away with a sizable chunk.
At the moment, Dell is saying that the move does not affect its existing partnerships. For example, the company is quoted as saying that there is “no impact in existing relationships” with IBM and Data General, who supply disk drives and storage systems respectively. Similarly, Network Appliance, whose storage appliances are rebadged by Dell and sold on, does not foresee any direct impact at this stage. As pointed out by a source from within the company, ConvergeNet are a SAN supplier (or would be, if they had released any products yet) whereas NetApp are a Network Attached Storage (NAS) manufacturer. The two models can live side by side in harmony, said the source. Fair point.
However it is unlikely that Dell will stop here. Dell already works with dozens of partners, of whom ConvergeNet was one. Further acquisitions will no doubt be announced, each, knowing Dell, enabling the company to take new technologies and skills on board without causing a major impact to their operational capabilities. No doubt, they will have learned a number of lessons from watching Compaq’s purchase of Tandem and Digital – ironically, Compaq is probably Dell’s major target for competition in the storage space, just as with the PC market, where Compaq will still be smarting from losing the top spot in US PC sales to its rival.
This organic approach to growth will be coupled with Dell’s world-beating sales engine, not to mention their brand. Dell will win market share in the same way that they did in the PC arena, by putting service at the top of the list and by creating a supply model which, with its rapid turnaround and low overheads, is the envy of the market. Sooner rather than later, Dell will find itself in direct competition with its suppliers and partners, with the result that some relationships will have to change. But this is unlikely to distract the new Goliath as it drives towards its goals.
(First published 9 September 1999)
Transatlantic porn test case gets result – sort of
1999-09-09
Anti-porn crusaders will be breathing a sigh of relief today, following the landmark judgement in a British court about a UK resident running a number of US-based porn sites from his home in Surrey. The case was part victory, part disappointment, as the party involved in the US appears to have got off scot-free, and two of the sites are still up and running, based in Costa Rica.
This case serves to illustrate the state of the world legal system concerning Internet porn. In school terms, it’s a case of “not bad, could do better” – clearly huge strides have already been made but many loopholes still exist. For the moment, whilst the partial victory is better than nothing, it should only serve to increase the efforts required in all countries around the globe, to set global minimum standards and enable the interworking that is necessary to deal with the crimes. It is unlikely that the porn industry will ever be stamped out, but at the moment, the Internet offers an open invitation to anyone wishing to participate in this unfortunate trade.
(First published 9 September 1999)
Microsoft’s Cool shades preach Java doom
1999-09-09
We are unsure how to present this, so let’s start with the facts as we have them. The Register has noted that Microsoft are to release a programming language called Cool, whose aim (it is reputed) is to kill Java. Questions abound – what does it entail, will it succeed and, ultimately, do we mind?
Cool is not a language, rather it is a “series of extensions to C++”. This is the nub of the matter, as Java is ostensibly a simplification of C++. As an ex-C++ programmer I can say that its curtailment is probably a safer bet than its extension. It was always astonishing to see how convoluted C code could become (I wonder if the Obfuscated C Code Contest is still running?) That is, of course, until C++ came along. If C could be astonishing, C++ could be truly unfathomable. Java has been described as “C++ without the hard bits” – this is only part of the story, of course, but it paints the picture. Despite this, of course, there is plenty of extremely well written C++ around. Java has failed, so far, to push it off its perch, and it is likely (look at COBOL) that both languages are here to stay.
So – what are Microsoft doing? Provision of new facilities to the C++ community can be seen as a good thing. The language, by itself, is nowhere near as well supported as Java – just look at the vast range of class libraries now available, for free, for the latter. There is room for some catch-up, and Microsoft are probably right in trying to fill the gap. But Java-killer? Unlikely. There is far too much momentum behind the language. Just look at IBM’s policy of including a JVM with every platform they ship, mainframe to mini. Also, Java set out to achieve a particular end – to meet the needs of the Web. Okay, this is paraphrasing, but Java’s design enables it to be run as applets or applications on devices from phones to digital TVs. Things have moved on, of course, such as the Enterprise Java Bean specification which is as much an IT architecture as a language construct. Users now see Java as strategic – just look at the results of our recent Java survey to see this.
The other question, of course, is to ask whether we mind. Microsoft wants to be rid of Java for lots of reasons, none of them technical. We could be affronted by this, or we could remember that Sun’s initial launch of Java was set to remove Windows from its pedestal. In other words, this could be seen as a cynical attempt by a slick marketing machine to replace the cynical attempt by another slick marketing machine to replace a cynical attempt… you get the picture. At the end of the day, though, marketing will not decide this. Java is not going to die, and that’s final.
(First published 9 September 1999)
Argos misses the decimal point
1999-09-09
It looks like Argos, the UK retail chain, may be sailing into murky electronic waters over the next few months. A few days ago, a mistyped catalogue entry on the Argos website led to hundreds of customers ordering Sony televisions at three pounds a pop. The company are now refusing to honour the orders, as (in their own words) this was an obvious error. The question is, are they liable?
This question would be difficult enough to answer even without the complication of the Internet, due to the complexities of commercial and contract law. The underlying model of the retail sale is, however, relatively simple and it is this which will be used as the basis for both sides of the case. According to Taylor Walton, a legal firm which specialises in eCommerce law, there are three elements to a retail sale. The first is the “invitation to treat,” which corresponds to an entry in a catalogue or a price on a shelf. The second is the acceptance of a payment. If a payment is accepted, this leads to the third element, namely the completed contract. In other words, once a payment has been accepted, for example through a credit card transaction, then the company which made the initial offer is liable to deliver the goods.
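Taylor Walton’s three-element model can be reduced to a single predicate. The sketch below is illustrative only (the function and its names are ours, and this is certainly not legal advice): a listing alone is merely an “invitation to treat”, and liability to deliver arises only once payment is accepted and the contract completes.

```python
def seller_liable(listed: bool, payment_accepted: bool) -> bool:
    """Apply the three-element retail-sale model described above.

    A catalogue entry or shelf price is only an "invitation to treat";
    liability to deliver the goods arises once a payment is accepted,
    which completes the contract.
    """
    return listed and payment_accepted

# The Argos dispute in miniature: the three-pound TVs were certainly listed,
# so everything turns on whether the card payments were actually accepted.
print(seller_liable(listed=True, payment_accepted=False))  # False: no contract yet
print(seller_liable(listed=True, payment_accepted=True))   # True: liable to deliver
```

Seen this way, the legal argument is not about the mispriced catalogue entry at all, but about what the “purchase” button and the card transaction amounted to.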
Reports in the press suggest that Argos is to fight any suits (and indeed, threats to sue have already been made), by saying that orders were made but would not be accepted. Sure enough, a quick browse of the Argos web site shows that customers are invited to make “orders”. However, once a customer has compiled an order, they are then requested to make a credit card payment and hit the “purchase” button. Hmm, sounds like a sale to me, but who am I to say? It is probably best to leave this to the legal experts, who stand to make a pretty penny out of this debate whatever the outcome.
Unfortunately for Argos, precedent would suggest that they stand to lose. Many in the UK will remember the Hoover competition which offered free holidays on the purchase of a washing machine. The company was forced to honour its pledges, almost bringing itself to its knees in the process. Supermarkets, too, have had their fair share of problems due to mis-advertised prices. The Walkling case, for example, resulted in a family claiming (and receiving) £30,000 worth of goods. This was due to Sainsbury’s policy of letting customers keep the goods and get their money back if they found that an item was mis-priced. Another example is of an enterprising customer buying £5000 worth of beer at Tesco, then going directly to the customer service counter and asking for their money back – and getting it. The service representative is reported to have commented that the lucky customer “didn’t do as well as a chap in our Stratford store,” or words to that effect. It is understood that Sainsbury’s have now changed this policy, but Tesco, to date, have not.
Whatever the outcome, it is clear that the Argos case will be watched with interest from around the globe. One thing it illustrates is that on the Internet, nothing happens by halves – the hundreds of TVs were “sold” in a matter of hours, before Argos corrected the error. It could be said that all publicity is good publicity – even articles such as this will encourage prospective customers to visit the site. Indeed, I would wager that even now, bounty hunters are scouring the online catalogues for typos.
(First published 9 September 1999)
Sega’s Dreamcast – millstone or milestone?
1999-09-09
Despite Sega’s first-to-the-market position for its next generation games console, pipping both Sony and Nintendo to the post, the forecast is not so good for the games company. But, hang on – what’s this got to do with IT?
The answer is simple. These machines are an indicator of the shape of things to come for consumer use of technology. They have two key features: TV access, and Internet access. They also have a price tag which is much more within the bounds of the mass market.
There’s been a major, but little talked about, problem with the home PC. The question has been, where to put it? Chances are, it will be in the spare bedroom, dining room, cubby hole under the stairs or office. However, a large proportion of the predicted uses of PCs are social activities – home shopping, chat, direct video or audio, and, of course, games. The games console bridges several worlds. It is accepted by the young, it is allowed in the living room and it is understandable by all. No boot failures or install problems plague it, it is reliable, resilient and quiet. The games market may well choose to shut out Sega, however successful it is in the short term. However, it may also close the door on the PC mass market. PCs have a place – data storage, hobbies, word processing. But as for their mass market potential, the future of the PC is far from sure.
(First published 9 September 1999)
Apple at the gates of dawn
1999-09-09
I like Apple. Always have, but then that’s because the technology has always been more important to me than the marketing. Apple’s demise hit hard, particularly when they had such good products. But now they are coming back.
It took Windows a good ten years to catch up with the Macintosh in terms of usability, but finally with Windows 95, it did. (OK, there’s a debate we could have there, but you get the point.) Unfortunately, in the meantime some particularly foolish management decisions priced the unclonable Mac out of a market which was growing like billy-oh. The company has suffered for its foolishness, but it is fair to say that it can leave such errors firmly in the past. The company is re-emerging – not as a gorilla that vandalises everything in its path, nor as a niche supplier with little to offer, but as a player in the game, with sufficient following and reputation to give customers confidence. Its new products are sexy and powerful, the software is there, the innovation is there and what is more there is a new track record developing which augurs well for the future.
For these and other reasons, Apple has seen its share price increase by 40% over the past month alone. Advance orders of its new laptop, the iBook, are running at 140,000 already and there are plans to launch it in Japan within the next month or so. Can nothing go wrong for the reborn company?
Of course it can, and it would be folly to suggest otherwise. However, this market is as much about keeping one step ahead of the game as anything, and for the moment, Apple appear to be doing exactly that.
(First published 9 September 1999)
W2K gets first UK deployment
1999-09-10
Microsoft announced at the end of last week that the University of Leicester had begun a deployment of Windows 2000, to replace its Novell Netware installation. Although clearly a showcase piece, it should still be watched with interest.
The network is to be rolled out this week to an initial 500 PCs. By autumn next year, this number will be increased to 2,000 PCs and up to 7,000 remote sessions. Seven Compaq servers will be used: four main servers, an SMS server, an Exchange server and an SQL server. The version of Windows 2000 to be rolled out will be Release Candidate 1. Our advice to all comers has been to pilot the operating system before rolling out, and to watch what others are doing: the Leicester project, while still not enterprise-scale, will doubtless provide some useful insights (not least on the merits of rolling out pre-release software). Of particular note is the new security functionality – we would expect the student population to test this to the full.
For the monthly update, see The University of Leicester Windows 2000 web site.
(First published 10 September 1999)
AmEx builds Blue bridge over Web payment waters
1999-09-10
AmEx are the latest in line to attempt tackling a fundamental weakness in the world of eCommerce – that of making payments. Will they succeed?
Currently, Internet credit card transactions are seen by the consumer as being insecure. This perception may be changing (albeit slowly), and may be more than a little misplaced relative to, say, giving card details over the phone. However, it is still one of the major reasons cited for the slow uptake of eCommerce. The other, equally important reason is that credit card payments are expensive, effectively setting a minimum purchase value of about £5. A raft of initiatives has been attempted, such as microcharging through eCash. To date, however, none has won the hearts and minds of the Web population.
AmEx have partnered with a company called eCharge to launch a hybrid service known as Blue, providing both smartcard and credit card facilities and Internet transaction mechanisms. With regard to the latter, the service is said to be more secure than traditional credit card mechanisms; it is also notably cheaper to run, per transaction, than its credit card equivalent, and merchants will actually be given a discount for using it on the Web.
What may set this trial apart is that it is not a trial. This service is a launch, available to consumers across the US. It is far more likely to have mass market appeal as it is aimed at the mass market; also, it is instantly attractive to Web retailers through its reduced cost. Unfortunately, we could not find any sites actually using the new standard on the Internet. If you build it, they will come, maybe.
(First published 10 September 1999)
Waitrose ISP – now we are truly free
1999-09-10
It is unsurprising that another UK supermarket has joined the free ISP fray. It is interesting that Waitrose are to give phone revenues to charity. What is unique, however, is that Waitrose are not to charge for support.
The UK free ISP market is already burgeoning. Tesco, Virgin, W.H. Smith and a host of others (from newspapers to football clubs) quickly followed a trend started by FreeServe, the now-floated spin-off of electronics retailer Dixons. Whilst free ISPs have been slated for providing a lower quality of service than their paid-for equivalents, the truth is that ISP quality varies widely across the board, paid for or otherwise. It is probably true to say that, in these early days of the Web, people are more accepting of non-optimal service than they will be in ten years’ time. The fact remains that for a substantial proportion of the UK population, the quality of service that free ISPs can provide is good enough.
The major downside of all these services is that, to date, they have charged for support at a premium rate, for example 50 pence per minute from tesco.net. Now this is nothing personal, as I have worked on both sides of the helpdesk fence – the support role is thankless, problems are often difficult to solve and, to cap it all, customers can be downright rude. From a customer perspective these services can be inefficient and bureaucratic, which wouldn’t be such a problem if the clock wasn’t ticking at a penny a second. To take a recent example (which, honestly, I wouldn’t mention if this Waitrose article hadn’t come along), I called the support desk of a major supermarket free ISP to inform them that I was receiving the email of another subscriber. Following a good five minutes on the phone, I found the procedure for dealing with this situation. Being a community-minded eCitizen I wouldn’t have minded spending the time, but I was riled at being charged at least three quid to help them sort out somebody else’s problems. Get the picture? If support is free, fair enough, but if we are paying then we expect our absolute money’s worth. Don’t we?
Waitrose are offering an email bureau service, which provides for “unlimited email addresses, accessible anywhere in the world” according to the press release. This service, which is still being developed, will be unique in that it allows customers to access their email either from a POP-3 compatible client (such as Microsoft Outlook), or directly from the Internet à la Yahoo or Hotmail. This is a major differentiator which allows users to get the best of all worlds for Internet access. A major concern to JKD, the company responsible for developing the Web site, and ITG, the service provider, is the recent media storm about Microsoft’s Hotmail service: every effort is being made to ensure that such back doors are not left in Waitrose’s email facilities.
The target audience for the service is the typical Waitrose/John Lewis customer, namely the broadsheet rather than the tabloid reader. Several reasons are touted for this – firstly, it is in keeping with the middle-to-upper class Waitrose branding. Secondly, the keenest users of the Web tend to be from “the thinking classes”. These reasons are only valid for the site content – it is very likely that the thought of free, unlimited support will have a much wider appeal.
The gesture to charity is an interesting one. It will be good for the image of the site, as well as giving customers the feeling that they are giving to good causes, without them having to lift a finger (this will probably appeal to their target audience). It is also good for the charities, of course! But where do Waitrose expect to make their money? This is where things get really interesting.
Waitrose is a supermarket launching onto the Web. It is setting up as an ISP in order to attract prospective customers to its site. Ultimately, though, its revenues will come from its capability to sell its own products via the Web. This, like Iceland’s announcement a few days ago (which included free delivery of goods bought through its site), demonstrates the sea change that is occurring in the supermarket-ISP arena in the UK. The battle is joined: over the next months, expect to see Waitrose, Tesco, Sainsbury, ASDA, Safeway leap-frogging each other with a host of new services, not to get users signed up to their ISP but to get real customers buying real products.
(First published 10 September 1999)
9-9-99 – Reasons to be cheerful, or not
1999-09-10
Last Thursday came, with neither bang nor whimper. The press have jumped on the single reported event, namely a spreadsheet application being run in Tasmania. So – was that it?
I still remember the weeks leading up to the start of 1984. The sense of foreboding, that Big Brother really would set up shop and start rewriting history, was everywhere. Funnily enough, the feeling continued way into the years which followed, whilst at the same time there was a sense of loss, at disasters failing to happen. Last Thursday was like that – it came and it went, with very little to show, leaving the more paranoid amongst us wondering whether we were robbed.
Okay, nothing happened. We should still be vigilant, but then so should we always be – a crash is a crash, after all. The debate centred largely around whether there really was an issue – after all, argued some, September 9, 1999 can be written in such a variety of ways. Maybe they were right or maybe they got lucky, but who cares? No big bang, no domino effect, cause to celebrate or at the very least get a decent night’s sleep.
Now then, let’s get on with the real issue. 9-9-99 enabled plenty of companies and government agencies to test out their Y2K readiness plans (the results of which served to fill the press column inches reserved for any computer failures). Most plans succeeded, some failed (notably in the US Coast Guard – what is it about coast guards and dates?). The 9-9-99 problem may turn out to be fictional, but Y2K will not.
We know several things about Y2K for a fact: the first is that it is a real problem that some computer systems use two digits to store the date rather than four. We also know that substantial money has been spent trying to resolve the problems that it might cause. Most importantly, we know from our own sources that a number of so-called “compliant” systems, when retested, have reported an average of 30 non-compliance errors per 1,000,000 lines of code, most of which result in incorrect data being written to the disk. This isn’t the result of an over-active imagination or a news shortage, it’s there in black and white.
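To make the two-digit date problem concrete, here is a minimal sketch (my own illustration, not taken from any compliance report) of how a stored “YY” year goes wrong at the century boundary:

```python
# Illustrative sketch of the classic two-digit-year bug: intervals
# computed from "YY" dates go negative once the century rolls over.
def years_between(start_yy, end_yy):
    """Naive interval calculation using two-digit years, as many
    legacy systems did to save storage."""
    return end_yy - start_yy

# An account opened in 1995 ("95"), checked in 1999 ("99"): correct.
print(years_between(95, 99))  # 4

# The same account checked in 2000 ("00"): the interval is now -95,
# precisely the kind of error that writes incorrect data to disk.
print(years_between(95, 0))   # -95
```

The fix, of course, is to store four digits (or apply a windowing rule), which is what much of the remediation money has been spent on.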
In the words of Tom Hanks, “Houston, we have a problem.” The only thing we do not know about Y2K is how big the impact will be.
(First published 10 September 1999)
HP bring DVD+RW a step closer
1999-09-10
DVD may be taking off slowly in Europe, but it is still taking off. The reason may well be this simple – what do we do with all that space?
Hewlett Packard has announced its DVD+R/W drive, which will be available in the UK from the end of November. The price is not yet announced, but the SCSI device is expected to come on the market at about £500. According to HP, the device will be able to read existing DVD-ROM disks, but current DVD players will not be able to read disks written by the DVD+R/W. This is not due to technology incompatibilities, rather it is a software issue. HP are currently in discussions with other DVD-ROM manufacturers to see how the situation can be resolved.
Despite the continued improvements in these technologies (the spindle speed is on the up, for example), DVD has so far failed to take the European market by storm. A recent survey, from analyst firm Strategy Analytics, showed the European take-up of DVD to be lagging behind the US: 4% of European households are expected to own a DVD player, compared to 11% in the US. Why is this?
DVD disks can store 3Gb of data on each side – that’s a total of 6Gb, giving each disk the capacity of 60 zip disks. The potential for DVD is huge, both in the corporate markets and for consumers. All it lacks is the Killer App. CD-ROM is largely adequate for most uses, and while I would be delighted to replace all those MSDN disks with a single platter, it is not that much of a problem to me. Similarly, at home, most European computer and video users are still content with CD-ROM and VHS. It is only a matter of time, of course, before our information appetites overwhelm the capacity of the lowly CD, but in the meantime we are quite content to wait and let the prices drop a little.
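As a quick back-of-envelope check of those figures (taking the article’s 3Gb per side and a standard 100Mb Zip disk):

```python
# Sanity check of the capacity comparison, working in megabytes to
# keep the arithmetic exact: 3Gb per side, two sides, versus a
# standard 100Mb Zip disk.
dvd_side_mb = 3_000   # 3Gb per side, per the article
sides = 2
zip_disk_mb = 100     # a standard Zip disk

total_mb = dvd_side_mb * sides
zip_equivalent = total_mb // zip_disk_mb
print(total_mb)        # 6000, i.e. 6Gb per disk
print(zip_equivalent)  # 60 Zip disks
```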
There remains the ongoing debate between DVD-RAM and DVD+R/W. These are competing, incompatible technologies, but there is an inevitable market for DVD+R/W as the disks will be (software glitches aside) compatible with existing DVD-ROM drives.
So – no big bang for DVD in Europe, not just yet. Despite this, the future seems bright for DVD – the market is expected to grow exponentially over the next three years. HP is leading the pack with its products so, given its good reputation in the removable storage arena, the company seems well placed to deal with the rush when it comes.
(First published 10 September 1999)
IT Directors – hold your heads high
1999-09-21
According to a survey published in yesterday’s FT, IT Directors and CIOs have more reason today than ever to feel valued. The traditional stereotype is of a person thrust into the job of directing corporate IT, faced with huge pressure to deliver solutions whilst lacking sufficient knowledge of either the business or the technologies required to support it. This would be coupled with having only limited authority to achieve any objectives, let alone strategic ones. According to the survey, published jointly by the London Business School and Egon Zehnder International, the IT Director role has become “one of the most demanding in the corporation,” with at least half of the working week spent on non-IT activities such as defining business strategy. Hurrah.
We are delighted by this development and we will change our stereotypes accordingly. We do have one minor concern, though: the result is such a direct contradiction of those of the past that it seems too good to be true. There’s one other tiny, tiny point – we still know many IT Directors who are harangued by the business, viewed as without teeth and, privately, rather worried that the pace of technology is far faster than they can keep up with. So – what is the full story? We await the full results of the survey, which will be published by the time you read this. In the meantime, we would be delighted to receive any feedback you might have, anonymous or otherwise, on the condition of the IT Director. Is it all so rosy? If you are an IT Director, or you work with or for one, do, please, let us know.
(First published 21 September 1999)
SNIA Comes to Europe
1999-09-21
The spirit of co-opetition was well and truly alive last week at the inaugural International Membership Development Conference of the Storage Networking Industry Association. With a title like that, is it any wonder we talk in acronyms? So what’s it all about?
The goal of the SNIA is “to develop specifications which … form the basis of standards to enable compatibility among storage networking products”. Essentially, this is about software standards, not hardware: for a start, the SNIA wants to remain device-agnostic, not wanting to tread on the toes of existing hardware standardisation efforts such as the Fibre Channel Alliance. Secondly, hardware technologies are moving very fast: the SNIA sees its role as defining a device-independent layer which will continue to be valid well into the future. In practice, this means that the SNIA is running a number of initiatives, including:
- working groups are responsible for defining specifications in areas such as device management and (storage network based) data management
- conferences and marketing activities are planned to enable the storage networking message to be broadcast to the wider community
- an interoperability lab is to be developed, which will enable manufacturers to test their equipment and their software against certification criteria (SNIA-marking, perhaps)
Overall, the aim is that devices from different manufacturers will not only be plug-compatible, but also they will be able to communicate with each other and be managed centrally. For example, a goal scenario would be that two storage devices from different vendors could be used to provide storage for a single application such as SAP.
What is interesting about the work of the SNIA is that it has been set up to solve specific issues that have arisen through the development of networked storage, SAN and NAS included. Rather than being driven by the marketing departments or the standards bodies, both of whom have had a tendency to deal with possibility rather than reality, the SNIA has set out to solve problems – of interoperability, for example – which exist today for managers of heterogeneous storage networking devices. The downside is, of course, that the wider community will have to wait before any SNIA-marked products exist. The first results are starting to come out of the working groups – common interface modules and SNMP MIBs are appearing, for example – however, it will be a good six months before we see anything turned into lab-tested products. There is, of course, a sense of urgency coming from the SNIA camp, so we can hope for some concrete products to be ready for the post-Y2K thaw.
The SNIA already boasts a powerful membership. Board member companies include Veritas, EMC, StorageTek, Sun, HP, Quantum, Compaq, Dell, IBM and Microsoft. Also, the organisation, which has traditionally gained income from memberships, now has serious sponsorship money behind it and claims to have “the mindshare of the industry”, all of which stands it in good stead for the future. The SNIA appears to have its act together and its willingness to have an international (or global) presence is to be welcomed, as it has in the past been seen as a West Coast club. Prospective members should visit the all-new SNIA web site.
(First published 21 September 1999)
Oracle, HP and Siebel – picking up the CRuMbs?
1999-09-21
CRM is, frankly, a triumph of marketing over substance. The danger lies ahead for the main players, who may have forgotten to worry about the potential damage they could do to their reputations, let alone those of their customers.
Why so? Let’s clear some cobwebs which may be obscuring the true CRM message. CRM is the blanket term applied to “front office software”, i.e. software which may be used to develop, manage and deal with people with whom a company needs to interact. This definition is dubiously vague, a fact which has been exploited by vendors of anything from sales automation and help desk to marketing and call centre software packages. The war of words is ongoing, as the big players such as Oracle and Siebel slug it out to win market share. Bizarrely, it isn’t the products that are being pitched head to head, it is the marketing messages. As it happens, the word is (from the Yankee Group) that the punch-fest is cutting little ice with the end user, who is making decisions based on what the package offers. This makes sense – we are not talking about a product with reasonably standard interfaces and functionality, but an array of very different products which must be judged on their own merits.
On to the second point. The danger of selling on hype alone is that the substance will most likely disappoint. We would strongly advise anyone considering a CRM product to define the requirement very carefully before making any decision. This is obvious motherhood, but is worth re-emphasising as CRM product lines have come into existence rather rapidly following a bout of market making from analysts and vendors alike. ERP packages, by contrast, had a more solid foundation constructed before the marketeers got hold of them (and even then, were not painless to implement). A further worry concerns the quality of the products. In the rush to gain mind share before market share, it is possible that products will be cobbled together using half-suitable components and some window dressing on the user interface. Information has come our way to support this hypothesis: we can only advise decision-makers not to rush into any purchase before both functionality and product quality have been verified as adequate.
After all, we are not just talking about the reputations of the vendors. CRM automates the interface to the customer, so any failures will be starkly visible outside the user organisation. As we are advising against hype, neither do we want to over-hype the risks. All we would say is look before you leap: know what you want and ensure that you will get it, before you sign the CRM bottom line.
(First published 21 September 1999)
Software pricing to plummet – is Linus right?
1999-09-24
Linus Torvalds went on record at the end of last week saying that, within 3 years, software prices will have plummeted. So – is he right? The answer, it would seem, lies in the value which that software represents.
Value, so goes the adage, equates to benefits minus costs. Traditionally the main use of computing power has been seen as to reduce costs. More recently, the advent of the Web has enabled businesses to dramatically increase benefits. By itself, IT is pure cost, so it must be pitched against the resulting cost savings or the increased business that it enables.
Given this, the charges for software, hardware and services already vary significantly. In the City of London, for example, the consultancy fees for a newly-launched software package were anywhere between £2,000 and £5,000 per day. Why? Because the package, once installed, could save a hundred times that amount: every day that the package sat in its box was money down the drain. The premium on IT-related goods is often well worth paying, or at least it has been to date. The question begs to be answered – how long will these good times roll for the IT industry?
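The value adage can be made concrete with the article’s own numbers (a back-of-envelope sketch; the hundred-fold savings multiple is the author’s figure, not a measurement):

```python
# Back-of-envelope value calculation: value = benefits - costs,
# using the City of London example from the text.
daily_fee = 5_000               # top-end consultancy rate, GBP per day
daily_saving = 100 * daily_fee  # "could save a hundred times that amount"

daily_value = daily_saving - daily_fee
print(daily_value)  # 495000: roughly what each day the package sits in its box costs
```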
We are still near the beginning of the electronic age. Every year, brand new technologies come with a single guarantee: that they will change the way we work. Telephones, radio and television, computers, fax machines, graphical terminals, email, the World Wide Web, mobile communications, each has played its part. We recognise the advantages of each new generation of products, we purchase and participate, we move up to the next rung of the technological ladder. Despite all the advances, though, how far have we come? Linux, for example, is based on operating system principles and a language which were industry standard in 1969. As for software, Object Oriented languages have been around for at least that long. Package software still has some life in it: the ERP (back office) market is now largely sewn up, so suppliers are turning their attention to the front office with CRM. Couple this with supply chain management and business intelligence, throw in platform support, and the question begs to be asked – what business problems remain to be solved? Once this stage is reached, the advantages will be gleaned from the relative qualities of each implementation.
There are still some major advances to be had from IT. Over the horizon is pervasive wireless networking, followed by terabyte solid-state memory. Software to manage the vast amounts of information flowing through the ether will keep a premium for as long as the quantity of information remains unmanageable. There will always be a business benefit in doing things better than the competition – just look at eBay’s recent failure to cope with the volume of transactions, or Hotmail’s security problems. For the medium term, IT in general, and software in particular, will hold its price where it can directly guarantee business advantage.
At the same time, the infrastructure tide is rising and things below the water level tend to have a fraction of the worth of the applications they support. For example, it is not a coincidence that most open source packages, such as compilers, operating systems and web servers, deal with infrastructure issues. It is also unsurprising that Microsoft should give away Internet Explorer 5, or that CA should bundle its Unicenter network management framework. The tide will continue to rise: this category will come to include databases, document management facilities and workflow engines. Today we are seeing office suites and programming environments become freeware: already a user can equip an office with a full range of IT facilities without having to spend a penny on software. Products are given away for all kinds of reasons, both commercial and emotional: for StarOffice, for example, it is probably both. Once a product has been given away, it cannot be reclaimed; if the product is already adequate, it undercuts all similar products now and in the future.
Fair to say, then, that the cost of certain kinds of software will plummet. However, do not be taken in: this is a ploy by the vendors: even likeable Linus has a vested interest. Vendors don’t do anything without a reason, for example, they hope to damage their competition or attract you to other elements of their product line. Make the most of the opportunities as they present themselves, then, but remember TANSTAAFL: there ain’t no such thing as a free license.
(First published 24 September 1999)
Free domain names – a quick thrill
1999-09-24
Two significant changes are occurring in the world of domain names. The net result is that the cost of top level domain name registration will be next to nothing. Exciting news. Strange, though, that within a few years domain names will become a thing of the past.
What are these changes? The first is that a West Coast startup known as iDirections is to offer domain names for free. The basic cost for a registration company is currently down to $18: this cost, and the cost of iDirections’ own infrastructure, will be covered by advertising and promotions, presumably on the website of the domain name registrant. The second development is that Network Solutions, keeper of the keys to the central register of domain names, expect to have installed software which will enable direct access to its database: this could mean that the registration cost itself will drop to zero. One way or another, the chances are that domain name registration will be free by the end of the year.
This is very good news for all those that wanted a Web site with a distinctive domain name. It will not only benefit smaller businesses (for whom even $70, the going rate for a domain name for the punter, seems a bit steep), but also individuals who will be able to establish themselves with a clear Web presence. The net effect (no pun intended) will be that all the remaining nouns in the English dictionary, plus a significant number of non-English words, will be mopped up.
There are two reasons why this might matter less than it initially appears to. These are intelligent agents and directory facilities. First, let’s talk about intelligent agents. The fact is that dot com names have never been particularly user-friendly. It is a wonder to me why the IETF or the W3C did not bring out a more natural language version of Web site naming. For example, wouldn’t it be preferable to type “amazon books” in the browser, rather than “amazon.com”? You might not think so, but my grandmother would. Domain naming is a temporary aberration, which will go away as soon as there is something better. That “better thing” will probably be the use of intelligent agents.
Have you tried Google? I can recommend that you do. If you have forgotten a URL, then you can go to Google.com (excuse the anachronism), type the name of a company and hit “I’m feeling lucky”. The chances are that Google will take you to the right site. Now, let’s think about this. What, for example, is the domain name of Hewlett Packard? www.hp.com – easy one. What about CGU Insurance? That’s www.cgugroup.com. BBC? www.bbc.co.uk. Get the picture? Things aren’t always as obvious as they might seem. Google can help, but it is the “middle man” – wouldn’t it be preferable to avoid the extra step? That’s where the most basic of intelligent agents would come in, enhancing the browser by providing an implicit search facility. Propeller heads may prefer the dot com names, and (to be sure) a number of businesses are dot com named from the word go, but it is clear that the general population would prefer to stick with the names they know. More advanced intelligent agents are expected (once the XML revolution kicks in) to be able to search on products, amongst other things (for example, delivery costs), so just having the viagra.com domain will not be enough to guarantee business.
Clearly, the gist of this argument is speculative, however sites like Google and AskJeeves are already demonstrating that natural language questioning is becoming an option and that the right URL doesn’t have to be the key to the kingdom. Companies profiting from domain name registration, or the domain names themselves, should make their hay while the sun is still shining.
(First published 24 September 1999)
Ohh, James… is that a Jornada in your pocket?
1999-09-28
It seems appropriate that scenes for the latest James Bond film, “The World Is Not Enough,” should be filmed at Motorola’s brand new, state of the art Integrated Circuit fabrication plant at Swindon, UK. It is equally comforting that handheld PCs should take centre stage in the film. We are up to date – things have come a long way since “Sneakers” showed an Excel spreadsheet as the front end to a Cray supercomputer.
Of course, there is one over-riding reason why HP and Microsoft were so keen to get the Jornada 430se into the frame. Product placement is seen as a hugely powerful technique for reaching the consumer market and CE-based handhelds are, after all, a consumer product. Or at least, they are becoming one. 3Com’s Palm (now an entity in its own right) has focused on meeting executives’ needs (rather than their desires) – it is only recently, with the launch of the Palm V, that a sexy design has been taken on board. The target for CE devices, whilst starting life as competition for the Palm, has now moved to the mass market. The new Jornada, for example, is a Game Boy grown up – it does all that boring stuff, like scheduling and email, but one look at the literature shows that the features at the top of the list are the colour screen, the hot processor and the support for MP3.
The race is on between Palm and CE. Recently, with companies like Handspring, Palm have started to inject some competition into their own market. The bottom line for Palm is – “it does everything you need”. There are tales of people investing in colour devices, for example, then coming back to Palm when they were fed up with shelling out on batteries. The CE community, however, are aiming straight at the jugular of the gadget man. Where better to promote this image than with the king of them all, the Brits’ very own 007? Such battles have been fought before – sometimes it is the gadgetry that wins; sometimes, like with the black and white Game Boy, “what you need” turns out to be the more compelling argument. As for me, a self-proclaimed gadget junkie, I’m hanging on a bit longer. Integrate a camera, mobile phone and voice recognition capability and I’ll be sold. Until then, like (I suspect) the majority of the market that CE is attempting to tackle, the high price of the fully functional devices gives me ample cause to wait.
(First published 28 September 1999)
MCI-WorldCom – Sprint to a finish?
1999-09-28
The possible takeover of Sprint hit another obstacle this week, as the market showed its disgruntlement. Shares in MCI WorldCom dropped by 4% on Monday morning alone, to $75 1/16 – things have quietened down since then, but a dollar was wiped off the share value by end of play Tuesday. And now the two companies are saying that the merger could be off at any time. Things are not looking good.
It all boils down to wireless. MCI WorldCom has seen its market pulled out from under its feet over the past year, as the mobile market has well and truly established itself. MCI’s main competitor, AT&T, is a strong player in the mobile market whereas MCI doesn’t have any wireless capabilities. Several players without the long pedigrees of the traditional communications providers, such as Sprint, Nextel (which has also been an MCI acquisition target), Voicestream and PowerTel, are carving up the airwaves: at the same time, due to operational efficiency and better technology, they are able to offer services to corporates at prices that MCI cannot touch. The rhetoric of Bernard Ebbers, CEO of MCI WorldCom, that the company “does not need a wireless business right now” does not match what his feet are doing, namely grasping at an opportunity to buy itself back into the multi-channel future of communications.
Whatever happens, it appears there will be a storm. If the deal succeeds, it will have to ride out the likely objections of both the Justice Department and the European Commission, not to mention the fall-out from its sell-off of a chunk of Internet backbone to Cable and Wireless. It is quite possible that the merger, if successful, would involve a sell-off of Sprint's own Internet backbone unit. At the same time, the deal would blow a crater in the landscape of the whole global telecommunications industry – alliances would need to break and re-form to take the merged company into account.
Should the deal fail (and it is likely that it will), MCI risks losing considerable corporate custom to its competitors, who are already able to provide a more comprehensive portfolio of services. It is difficult to see where the company would go next: Nextel is an option, though talks failed last time around. Other companies lack the scale necessary to provide MCI with an integrated offering, although they could be the foundation stone it needs for the future. The communications giant may end up all dressed up with no place to go.
(First published 28 September 1999)
MSN is open for discussion
1999-09-28
Microsoft launched its MSN online communities in the UK only a week ago. The service is still in beta but already has over 3,000 subscribers. Not bad going.
So – what’s it all about? Essentially, the online community concept has changed little since the golden days of the bulletin board ten years ago. As Microsoft themselves point out, however, this is less about technology and more about reach, as the “3,000” figure above illustrates. The main component of the MSN online community is a discussion board which may be configured by its administrator. This person, like the participants in the community, needs to be signed up to the MSN Passport facility, which provides a user profile. The administrator can set up the community’s front page, and can define the language, whether membership is private or public, the content standard (e.g. adults only) and so on. Prospective members can be vetted, if required, following which they are free to participate by posting messages and sharing files. A handy feature is the photo album – for example, families can operate a private community where they share holiday snaps or pictures of the little ones.
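The administrator’s options described above amount to a small configuration record plus a vetting rule. A minimal sketch of the idea follows – the field names and `admit` logic are my own illustration, not MSN’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class CommunityConfig:
    """Settings an administrator can choose for a community."""
    name: str
    language: str = "en-GB"
    public_membership: bool = True     # False means members must be vetted
    adults_only: bool = False
    members: list = field(default_factory=list)

    def admit(self, passport_id, vetted=False):
        """Admit a member: public communities take anyone, private ones only the vetted."""
        if self.public_membership or vetted:
            self.members.append(passport_id)
            return True
        return False

# A private family album community, as in the example above.
family = CommunityConfig("Holiday snaps", public_membership=False)
print(family.admit("passport:1234", vetted=True))   # admitted
print(family.admit("passport:5678"))                # turned away: not vetted
```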
This would appear a reasonably standard implementation of an online community – even with the language support (the main competition, Yahoo, is currently in English only). As usual, it is what is planned that gets interesting. The main features on the horizon over the next few months are the provision of a calendaring facility (so community members can fix events) and, most interestingly, integration with both online and offline tools such as Outlook Express, Microsoft Messenger and even Web site building facilities. This integration trend is set to continue – for example, commerce facilities are in the longer-term plans and even voice integration is “on the scope”. Not bad for a free service, but where will MSN make its money? In the short term, primarily through advertising and sponsorship opportunities. MSN has not been particularly profitable in the past, but the company has apparently now set hard sales targets to, at the very least, make the service pay for itself. These targets are set to rise.
So, have MSN got it right? There are no guarantees. First off, what we are seeing at the moment, presented as something new (which it clearly is not), is likely to appear pretty primitive within a year or so. What Microsoft have worked out is the importance of an underlying framework – as the facilities evolve, they will benefit from having something solid to plug into. For MSN, this hinges on the Passport facility for individuals, but there is nothing similar for organisations unless we include Microsoft DNA. MSN has reinvented itself several times in the past – as an entertainment portal, a consumer portal and now as an infrastructure for online communities. Despite the bad press the company has received for not getting it right first time, Microsoft has little choice but to go with the flow, copy the competition and hope. The competitors – AOL/Netscape and Yahoo, to name two – are already proving that Microsoft’s dominance in the desktop market doesn’t mean diddly squat in this new, vaguely defined battle for the hearts and minds of the online masses.
(First published 28 September 1999)
October 1999
This section contains posts from October 1999.
HandSpring will fuel device revolution
1999-10-11
When the bunch who designed the PalmPilot quit 3Com to form their own rival organisation, we wrote about the likelihood (and benefit) of competition being injected into the non-CE handheld market. Little did we expect, however, the wow factor that HandSpring would manage to inject into their product range.
Essentially, the HandSpring Visor is a PalmPilot with style. The basic model comes with a USB connection to speed up transfers and the more advanced version comes with more memory and a choice of colours. Not to be knocked, this – look what it did for the Apple iMac. No great shakes so far – a Palm clone with a couple of additional features.
Where things start getting interesting is, as they say on Blue Peter, “when we turn this one over.” On the back of the Visor is an expansion slot which looks not dissimilar to that of a Game Boy. Indeed, games cartridges are one thing that the Visor claims to support (although I could find no references to backwards compatibility). Expansion modules are plug and play – the drivers are built into the cartridge, so they are ideal for the non-technical and technical alike. For the IT professional, executive or wired individual, the selection of cartridges mooted is already impressive – GSM and land-line modems, voice recorders, data acquisition devices and MP3 players are already on the cards. Extensibility is the best option in this fast-moving world – there is some protection against obsolescence given the ability to plug in new technologies (say, Bluetooth) when they come.
What is really amazing about the Visor is the price. With launch prices beginning at $149, 3Com’s Palm subsidiary will be forced to follow suit (never mind the CE device manufacturers), probably leading to a sub-$100 product from one or the other before long. It is to be hoped that the expansion modules follow a similar pricing strategy, i.e. cheap – the second-hand market in such cartridges is likely to boom, if for no other reason than that the devices will be easy to post.
Ultimately, what looks set to lead HandSpring to its holy grail is extensibility. No other palm-held device has an expansion capability – a fact which is likely to cause expansion device manufacturers to flock to HandSpring and give it sustainability. At the moment, the Visor has been launched in the US only – over here we shall be looking out for early adopters among the BA Executive Club members.
A final point – what of CE? The battle is still on, and the ultimate future of either device standard (PalmOS or CE based) cannot be guaranteed. It is true that Philips left the CE fray last week “due to poor sales”, but it is equally true that Compaq, Casio and HP still appear to have their full weight behind their CE devices. The fact remains that the PalmPilot was one of the unexpected success stories of the ’90s and, in the short term anyway, HandSpring look set to replicate that success.
(First published 11 October 1999)
MCI gets hands on Sprint
1999-10-11
In the largest corporate takeover ever in the US, MCI WorldCom agreed to shell out $129 billion in a merger which will bring wireless communications into the portfolio of the telecomms giant. Bernard Ebbers, WorldCom CEO, remarked on his surprise at the size of the deal, stating glibly that “I thought I agreed to much lower numbers.” As we noted in our previous analysis of the merger talks, however, Ebbers is renowned for statements which disagree with the reality of the situation. In this case he clearly felt the steep price was worth it. The question is, what now?
The biggest hurdle that the betrothed giants face is the regulators, both in the US and Europe (where MCI are still clearing up the mess caused by a previous deal with Cable and Wireless). The companies have stated their intention to keep the various elements of the businesses together, describing as “prudent”, for example, their plan to run more than one Internet backbone. Whether this is sustainable remains to be seen, but it is unlikely – sooner rather than later, the merged corporate will have to decide which of its backbones is strategic, as it will not be able to route the same traffic over both.
The road is both long and pitted; clearly, however, this is a deal which MCI WorldCom decided it could not afford not to make. Fair enough – the wireless world is advancing at a cracking pace, with mobile phones mooted to outnumber land phones in Europe within two years, and with deals being brokered all the time (like Bell Atlantic Corp and Vodafone AirTouch setting up a new US mobile company). WorldCom risked being left further behind with every week that passed. It will be interesting to see how much of the current merger strategy remains in one piece by the end of the year.
Following a fall to $68.50 after the announcement, MCI WorldCom’s share price rallied to $76.88 by the end of last week.
(First published 11 October 1999)
eCommerce in the UK RPI – a final call to arms
1999-10-11
Last week, the UK government announced that it would be including online shopping statistics among the sources of information it uses for its retail prices index, one of the principal measures used to determine inflation. This is a move which will be seen as overdue by some and maybe irrelevant by a few luddite diehards, but which cannot be ignored by the vast majority of organisations in the UK, which have so far failed to use the Internet for anything more than brochureware. According to a recent Economist survey, the UK lags two years behind the US in exploiting the potential of eCommerce. That’s one heck of a long time in dog years, the rate at which the Internet is understood to be moving. Plus, given the global nature of the Web, the issue becomes even more stark: by the time the UK gets there, the market may well already be sewn up.
This article does not set out to spew the usual FUD. Let’s face it, we’ve heard it all before: “The Internet is here, embrace it or die.” True or not, there’s good money to be made online – EasyJet, for example, has already sold over one million airline seats online. We’ve been scared enough, however. The question now is: what can we do to profit from the Web?
The answer to this question has three, mutually dependent parts. At the top level they may be considered as:
- the business strategy – for example, how can I align my organisation to best profit from the Web? What products and services should I offer, and what markets should I target? Who are my customers and suppliers?
- the technical strategy – what are the most appropriate technologies to meet my needs? What can I use to implement something quickly and gauge the reaction of the public so I can move on?
- the operational strategy – how can I resource the 24x7 operation that my online service requires? Have I the necessary pieces in place to provide the best possible customer service?
All of the above need to be addressed simultaneously to make a success of the Internet. Too often, still, it is the technical issues that are tackled first (“Give me a Web site!”) without giving due attention to the business and operational issues. Each of the above has a responsible party – the CEO, CIO and COO – who must work together to draw up a coherent eCommerce strategy.
There, that’s it, now it’s over to you. The intention here is not to scare or patronise. No FUD here, just a single question. Does your organisation have a strategy for exploiting the Web? If not, you may be resigning yourself to watching from the sidelines while others steal the rewards.
(First published 11 October 1999)
Moore’s Law not a barrier to progress
1999-10-12
According to Paul Packan, an Intel semiconductor scientist, we are approaching the limits of Moore’s Law, which states that the number of transistors on a chip will double every 18 months. This should not, at least in the short term, present much of a hurdle to progress. How can we say this? Well, for several reasons.
First of all, as an obvious point, Packan notes that the limitations will only start to bite in 2001 and beyond. However, this is not the crux of the debate. Hardware technologies have been far and away in advance of software technologies ever since Bill Gates determined that 640K of RAM was sufficient for the fledgling PC operating system DOS. Software, in general, is inefficient and burdensome, using up chunks of hardware resource for the simplest of manipulations. Should software vendors discover that the hardware resources upon which they rely are not infinite, they might be tempted to develop more efficient designs. Embedded software developers have long worked within tight bounds with excellent results, and it could be argued that the top end of chips required for next generation mobile devices will be sufficient for most needs.
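As an aside, the compounding implied by an 18-month doubling is worth making concrete. A back-of-the-envelope sketch – the starting transistor count is an illustrative round figure, not a quoted one:

```python
# Project transistor counts under Moore's Law: a doubling every 18 months.
def moores_law(count_now, years):
    """Transistor count after `years`, doubling every 1.5 years."""
    return count_now * 2 ** (years / 1.5)

# Illustrative starting point: roughly 10 million transistors on a 1999 CPU.
for years in (1.5, 3.0, 6.0, 9.0):
    projected = moores_law(10e6, years)
    print(f"+{years:>4} years: {projected:,.0f} transistors")
```

Even a decade of sustained doubling takes the illustrative chip past half a billion transistors – which is exactly why the physical limits Packan describes matter.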
This leads to the second point. Making the best use of limited resources requires a well-designed architecture. Thin client is becoming recognised as the best approach for both application partitioning and consequent manageability. The capabilities of the end user device may need to be limited not only because the hardware has reached its limits but also because the application architecture requires it.
Finally, we have so far neglected to mention the huge leaps forward in hardware design, not only concerning the physical construction of chips but also the designers’ abilities to create on-chip components which exploit the underlying hardware to the fullest extent possible. We only have to look at how processors such as those from Intel, AMD and ARM have been constructed, with features such as intelligent instruction look-ahead and multithreading, to see that the transistor size is only one (albeit important) facet of chip design.
As noted on the Register, “the sooner they can get quantum and/or optical processors to work, the better”. In the meantime, however, there is plenty to be keeping both hardware and software designers busy, and plenty of progress still to be made, with the technologies we have available today.
(First published 12 October 1999)
Vendors poised to tear down the Broadband wall
1999-10-12
The broadband revolution is only a hair’s breadth away. Barriers to its inception are about to be removed. A tidal wave of services will ensue, transforming how we live and work. It looks like the electronic future will be in time for the Millennium, after all.
The last pieces of technology are clicking into place, overcoming the two main obstacles which may be summarised as a lack of bandwidth to connected devices and a lack of suitable protocols to enable services to be delivered. These technologies include:
- 3rd generation wireless protocols, enabling high-bandwidth delivery to mobile phones and wireless PDAs.
- Roll-out of broadband technologies such as DSL and Cable, removing the low-bandwidth local loop from the equation.
- Creation of protocols such as WAP to enable wireless devices to access Internet-based services.
- Creation of devices sufficiently small, powerful and function-rich to take advantage of these technologies.
- Creation of service-level protocols such as Jini and eSpeak to enable applications to communicate.
Most of these components exist as functioning prototypes and are being demonstrated at Telecom 99 in Geneva this week. It is only a matter of time before they are delivered into the mass market, enabling such vapourware concepts as videophones, media streaming to the home, integrated messaging and phone-based eCommerce to become reality. Telecomms companies, hardware and software vendors are teaming up with retailers, banks and content providers to achieve the dream.
The real test begins when the products and services begin to be rolled out in anger. It is likely that the sales model will follow that of the mobile phone market, at least in Europe, where phones are given away subject to taking out a service agreement. This time, however, there is a raft of potential mechanisms given the variety of service providers. For mobile phones, the service for which customers were prepared to pay was the voice call. In the future, we will be able to cover the costs of hardware by, for example, committing to use E*Trade for our stock trades, or Blockbuster for our video stream.
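The arithmetic behind such a subsidy model is simple enough to sketch – does the provider’s margin over the contract term cover the hardware give-away? All figures below are invented for illustration:

```python
# Does the service margin over the contract term cover the hardware subsidy?
def subsidy_covered(device_cost, monthly_margin, contract_months):
    """True if the provider recoups the device cost over the contract."""
    return monthly_margin * contract_months >= device_cost

# Hypothetical figures: an 18- or 24-month service commitment at £8/month margin.
print(subsidy_covered(150.0, 8.0, 24))   # £192 of margin against a £150 device
print(subsidy_covered(300.0, 8.0, 24))   # £192 of margin against a £300 device
```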
In any case, it is highly likely that the availability of broadband bandwidth to the device will lead to an upsurge in the services available. Already, companies such as Thomas Dolby’s Beatnik.com are intending to use the Internet as a delivery channel, removing the need for anything other than cache storage on the wireless walkman. The currently reported glut in European bandwidth may well be quickly drained by the new generations of eCustomers who find the concept of pre-recorded music and video, frankly, old-fashioned.
(First published 12 October 1999)
GTE launch carrier-class VoIP service
1999-10-12
Telecom 99 in Geneva this week saw the launch of a Voice over IP service from GTE Internetworking, aimed at telcos, ISPs and Internet telephony companies. The question is, will they bite?
Voice over IP has long (in Internet time) been a possibility, but has always been hampered by theoretical (and practical) issues of reliability. Essentially, this is to do with moving from a circuit-switched network (where devices link together to provide a guaranteed-bandwidth path from end to end) to a packet-switched network, which divides information into chunks that are routed across the network and reconstituted at the other end. This latter technique, used by the Internet Protocol, offers few guarantees about latency (the time a packet takes to arrive) or reliability (whether the packet gets there at all) in the network itself, with mechanisms implemented in the devices at each end to, for example, acknowledge receipt of a packet. For telephone users, this lack of guarantees equates to the possibility of pauses in conversation and drop-outs where parts of the conversation are lost. Those are the downsides; on the upside we have the huge potential cost savings to telephone user and telco alike, as the cost of sending information over the Internet is greatly reduced compared to traditional leased lines.
So, will the telcos bite? The two reasons why they might not are fears about reliability and the overheads of moving services to a new supplier. In order to guarantee a high level of service, GTE are implementing an IP network dedicated specifically to the service, plus a technique coined “Intelligent Route Diversity”. This refers to a number of services including load balancing, dynamic routing and failover to the (circuit-switched) PSTN should reliability falter. In other words, GTE Internetworking have covered their bases. They are not routing packets over the public Internet but dedicating Internet technology to provide bandwidth channels over which they have more control. Brian Walsh, of GTE Internetworking, told us that the company was achieving sub-100-millisecond one-way latencies, well below the 200-millisecond target agreed to be “undetectable to the human ear”. This should appease the service providers; if not, they have the additional guarantee of re-routing traffic to the more conventional PSTN.
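The 200-millisecond yardstick lends itself to a simple monitoring check: measure one-way latencies and flag the proportion of packets that breach the threshold. A minimal sketch – the sample latencies are invented, not GTE’s measurements:

```python
# Flag VoIP packets whose one-way latency exceeds the 200 ms target
# said to be undetectable to the human ear.
THRESHOLD_MS = 200

def latency_report(latencies_ms):
    """Return (mean latency in ms, fraction of packets over the threshold)."""
    over = [l for l in latencies_ms if l > THRESHOLD_MS]
    mean = sum(latencies_ms) / len(latencies_ms)
    return mean, len(over) / len(latencies_ms)

# Hypothetical samples: mostly sub-100 ms, with the odd congestion spike.
samples = [80, 95, 90, 85, 250, 88, 92, 100, 310, 87]
mean, frac_over = latency_report(samples)
print(f"mean {mean:.0f} ms, {frac_over:.0%} of packets over {THRESHOLD_MS} ms")
```

A real service would act on the second figure – for example, triggering the failover to the PSTN that GTE describes.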
GTE Internetworking are likely to emphasise the cost reductions that telcos can achieve by employing IP-based bandwidth for voice calls. Walsh noted that companies could expect savings of up to 25% on charges within the US, with greater savings possible on international services. It may be that, with PSTN competition already cutting prices to the bone, many companies find they have little choice but to quell their fears and head for VoIP.
(First published 12 October 1999)
Travelocity ups gear and merges with Preview Travel
1999-10-13
The recently announced merger between Travelocity and Preview Travel has spawned the world’s third-largest eCommerce site by revenue, after Amazon and eBay.
Travel ranks as one of the most appropriate services to sell over the Internet – the deal is, essentially, about agreeing contracts for services which will be delivered at a later date. The Internet is a far better medium for selling travel than brochures and guidebooks could ever be – it provides an integrated service so that, for example, a prospective traveller can book a hotel, rent a car and book flights, then receive a consolidated, clear itinerary covering all aspects of the journey. What is more, it is much easier to provide additional information over the Web – for example, sites are gearing up to provide multiple views of hotel rooms, longer descriptions of localities and information about local hotels. Coupled with this, Web operators are able to significantly undercut high street travel agents, for whom the future would appear, well, bleak.
Having said this, we see the largest growth area for Internet travel services as the corporate space. Thus far, Travelocity’s services have been targeted at the consumer. We expect to see partnership arrangements between Web travel companies and ERP vendors, particularly companies such as PeopleSoft which are advancing the principle of end-user self-service within an organisation. The concept is simple: allow end-users to arrange their own travel within an automated tool which knows the limits of each user, then ensure the existence of an audit trail so that any usage (and abuse) is logged and reported. In this way the inefficiencies inherent in internal company procedures are significantly reduced.
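The self-service concept just described can be sketched in a few lines: a per-user limit checked up front, with every request – approved or not – written to an audit trail. The user names, limits and record fields below are hypothetical, purely to show the shape of such a tool:

```python
from datetime import datetime, timezone

# Hypothetical per-trip spending limits, keyed by user id.
LIMITS = {"jsmith": 1500.0, "asue": 500.0}
AUDIT_LOG = []   # in a real system this would be durable, tamper-evident storage

def book_travel(user, cost):
    """Approve a booking only if within the user's limit; log the request either way."""
    approved = cost <= LIMITS.get(user, 0.0)
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "cost": cost,
        "approved": approved,
    })
    return approved

print(book_travel("jsmith", 1200.0))   # within limit
print(book_travel("asue", 800.0))      # over limit, but still logged
```

The point of the design is that refusals are logged as faithfully as approvals – that is what makes abuse reportable.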
Despite Travelocity’s projected size, the battle for the online travel segment is far from over. The merging of Travelocity’s and Preview Travel’s user communities will not be trivial and is likely to expose weaknesses in the combined company’s system architecture. Weaknesses can lead to failures, and we know how seriously these are treated by the online press and user communities alike. Also, competitor sites such as Expedia and LastMinute are unlikely to roll over and die. LastMinute, with its reverse-auction business model, is one to watch in particular. Finally, we should not forget the Web presence of the “travel fulfilment” companies – BA, AA and the like. One of the prime capabilities of the Internet is to displace intermediaries unless they can demonstrate real added value. This will be the ultimate litmus test of the successful future (or otherwise) of today’s third-largest eCommerce company.
(First published 13 October 1999)
Y2K’s winter to freeze SMEs
1999-10-13
Two recent announcements gave an indication of the likely winners and losers at the turn of the Millennium. First off, the UK Financial Services Authority (FSA) stated that all high- and medium-impact companies in the financial sector were on schedule to complete, or had already completed, their Y2K readiness programmes. In the second, according to Silicon.com, a study by the Cutter Consortium found that 50% of US companies were planning a spending freeze on IT equipment between now and the end of the year.
In both cases, the companies which will be most affected are the small-to-medium sized enterprises, or SMEs. The FSA report included only 400 out of a total of 8,000 financial services companies in its medium-to-high impact categories. The remaining 7,600 are the smaller companies whose Y2K problems, should they arise, will affect a smaller number of “stakeholders” – depositors, policy-holders and investors. Unfortunately, these are also the companies which lack the financial resources or in-house IT skills to evaluate the risk and deal with any suspected Y2K weaknesses. UK Government brochures, while well written, only demonstrate the complexities involved in testing every PC, router and PABX for compliance, not to mention any outsourced services. For companies which have not yet started this process, the remaining 100 days are likely to be insufficient.
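Much of that testing burden traces back to a single coding shortcut: storing years as two digits. A minimal illustration of the failure, and of the “windowing” workaround many remediation tools applied – the pivot year of 50 is one common choice, not a standard:

```python
def expand_year(two_digit, pivot=50):
    """Interpret a two-digit year: values below the pivot fall in the 2000s."""
    return 2000 + two_digit if two_digit < pivot else 1900 + two_digit

# Naive code assumed every year was 19xx, so "00" meant 1900 and
# date arithmetic across the rollover went negative.
elapsed = 0 - 99          # year "00" minus a record stamped "99"
print(elapsed)            # -99 years: the classic Y2K failure

print(expand_year(0))     # 2000
print(expand_year(99))    # 1999
```

Windowing only defers the problem (here, until 2049), which is why full compliance testing of every device was the recommended route.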
Similarly, it is the SME vendors who are likely to be most affected by any spending freeze. Again it is a question of resources – the larger companies have both the financial know-how and the hedged funds to carry them over until the predicted thaw at the end of January next year. Smaller companies do not have the luxury of digging in and waiting: most will survive but there will be plenty that do not.
Finally, on a lighter note (as we are accused of preaching too much doom and gloom about Y2K), it would appear that even the least prepared will steal a march on PLN, Indonesia’s national electricity board. When asked by an Indonesian newspaper about its Y2K preparedness, a spokesman for the board replied: “We can observe what happens (at midnight 2000) in Western Samoa, New Zealand and Australia and still have six hours to make plans.”
(First published 13 October 1999)
Linux and NT – The myths exposed
1999-10-13
There has been a lot of press recently concerning Microsoft’s Linux Myths web site. By coincidence, and to right the wrong expressed by Microsoft that “Linux Needs Real World Proof Points”, Bloor Research today published the results of its operational comparison between Linux and Windows NT.
The report was based on a study of both operating systems from the standpoint of how they operated in practice to support real applications. It was possible to rank the products according to the following nine criteria: cost/value for money, user satisfaction, application support, OS interoperability, OS scalability, OS availability, OS support, operational features and OS functionality. The products were also ranked according to appropriate usage, in the following areas of application: file/print server, mixed workload server, Web server, mail server, database server, groupware server, data mart, application server and enterprise-level server.
The rankings showed Linux to be superior to Windows NT in seven of the nine categories – by a significant amount in some, and by a short head in others. The comparison by application type came down less strongly in favour of Linux, with each operating system seen as optimal for certain applications. Bloor Research found neither product suitable for use as an enterprise-level server.
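A category-by-category comparison of this kind boils down to a simple tally of wins. The scores below are invented placeholders purely to show the shape of the exercise – they are not Bloor’s actual figures, and only five of the nine criteria are shown:

```python
# Tally which OS 'wins' each ranking category (higher hypothetical score wins).
criteria = {
    "Cost/Value for Money": {"Linux": 9, "NT": 5},
    "User Satisfaction":    {"Linux": 7, "NT": 6},
    "Application Support":  {"Linux": 5, "NT": 8},
    "OS Interoperability":  {"Linux": 8, "NT": 6},
    "OS Scalability":       {"Linux": 7, "NT": 7},   # a tie scores no win
}

wins = {"Linux": 0, "NT": 0}
for name, scores in criteria.items():
    if scores["Linux"] != scores["NT"]:
        winner = max(scores, key=scores.get)
        wins[winner] += 1

print(wins)
```

The interesting editorial judgement, of course, is not the tally but how each category is weighted against the application in hand – which is exactly where the report came down less one-sidedly.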
Clearly the Windows NT versus Linux debate will run and run, particularly with the launch of Windows 2000, which will add a significant number of features to the operating system (not least the ability to be reconfigured without being rebooted – rebooting has long been the bane of NT systems administrators). Linux remains, above all else, free and capable of supporting a wide range of uses, although it is clearly not suitable for everything.
Microsoft are learning a hard lesson with Linux, namely that there is more to life than product marketing. Five years ago there was the NT versus Unix debate, which the Unix community appeared to be losing until Microsoft attempted to tackle the issue of enterprise scalability and convinced no-one. This gave end users a new perspective: that it was OK to use the right operating system for the job, and that the concept of a single OS which could run on everything from the smallest device up to the largest server was inappropriate, not to say unachievable. These perspectives hold true today: NT is suitable for a variety of tasks, but remains inadequate for a whole set of others. At the end of the day, any comparison will only be meaningful in the context of the job at hand.
(First published 13 October 1999)
ASPs – The Age of the Information Power Hub
1999-10-14
The momentum behind the ASP movement is such that it seems to be only a matter of time before we see all software being provided 'off the wire'. The Application Service Provision model refers to the predicted rental of applications, which are accessed via the Internet using a standard browser as the front end. At least, this is the initial model: in the future it is expected that customers will be able to integrate ASP services into their own applications. Both models make perfect sense. Consider, for example, the ERP market which has long been the exclusive domain of larger companies which can afford the inflated list price (and associated consultancy) that an ERP implementation can entail. ERP companies have been forging alliances with telcos and ISPs (for example, SAP with BT) to give smaller companies access to applications which are preconfigured for a given market, for example retail, warehousing, engineering and the like. Not only is the cost of entry reduced, but also support levels are touted to increase, compared with the likely levels of in-house resource.
This is a model that makes sense for vendor and customer alike. The economies of scale to be gained from these larger installations yield clear benefits to the vendor, which can be translated into a greatly reduced per-user cost. As the ERP market has largely become saturated, vendors have been looking for a way to tap smaller organisations: the ASP model is just the answer. ISPs, too, are looking for ways to differentiate their services from others’, and for ways to bring in revenue as basic connection costs reduce to zero. ASP services enable ISPs to offer far more capabilities to a much broader range of organisations, and also to make money – something which is becoming increasingly difficult given the burgeoning competition and the disappointing revenues gained through advertising.
The emergence of the ASP industry is to a great extent being enabled by the pace of technology. Previous technological limitations, which effectively prevented real-time access to applications over the Internet, have been eroded. This is primarily down to greater bandwidth coming online at cheaper prices, but is also down to the capabilities of both hardware and software to support a far greater number of potential users. Solutions now exist to meet the security fears of most, or at least to reduce them to acceptable levels.
No longer will the end user be buying, installing, configuring and supporting software. Rental is the name of the game and it will dramatically change the industries involved. The computer and telecom industries are on the brink of a step change. The models for distribution are mutating, leading us into an age where the computer industry will resemble that of the electricity utilities, providing farms of processors, storage and preconfigured applications all dedicated to serving the world with data and software.
As you might imagine, these changes will be far-reaching, and the customers will be the last to learn – but their position is changing too. In the future, software users will fall into one of three categories. Anyone whose business has an information element (and that goes for most) can become an ASP, from pharmaceutical and genetic research companies to organisations which decode and manipulate video streams. A company will either be an ASP itself, an ASP customer, or a combination of the two, not only using the software of others but also building and deploying its own solutions as rental services.
(First published 14 October 1999)
W2K no Y2K
1999-10-14
We were unsurprised (and a little relieved) to see Steve Ballmer’s expectation management concerning the end-of-year shipping date for Windows 2000. In our September article, Microsoft faces Hobson's choice for W2K, we expressed our concern about interim deliveries slipping with no apparent impact on the final release. Logic (and experience) suggested that either quality or functionality would have to be reduced, neither of which was an acceptable option for the user community. Oh good, honesty prevails. We no longer have a fixed date for the product, but at least we can be comforted that it will be reasonably stable.
Despite this, it seems that prospective users of W2K are not champing at the bit for the new OS release. Gartner figures suggest that 70 percent of existing servers will not be upgraded until the back end of next year, when the second release of Windows 2000 is slated. There are several obvious reasons for this, not least Year 2000 itself – few existing Windows NT users have a compelling reason to upgrade the operating system in the short term. The upgrade will involve significant reconfiguration of existing servers, and reconfiguration is the last thing that IT managers want to deal with early in the New Year. In our last article, we recommended that IT managers not jump early, but wait and see what the realities of Windows 2000 turn out to be in terms of stability and functionality.
The operating system market is likely to evolve significantly over the next six months. Gartner’s write-off of Linux, for example, seems to contradict the fact that every day sees new vendors allying with the Linux community in one way or another. What seems truly strange in all of this is the continuing idea that there really is a one-size-fits-all operating system. Despite the fact that most IT departments have been running heterogeneous environments for the past twenty years, companies still tout themselves as “the one”, be it for hardware, operating systems or application software. The reality is, and will probably always remain, that there are twenty or so vendors whose devices and packages need to work with each other, now and in the future. This reality is the basis of middleware, EAI and indeed most of the eCommerce market. Ballmer himself confirmed this yesterday at the Planet 99 conference when he referred to XML as the glue “to stitch together work from different vendors”.
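Ballmer’s “glue” point can be made concrete with a toy sketch (our own illustration, not anything Microsoft published): one vendor’s system emits an agreed XML document, and a different vendor’s system parses it with entirely different tools. The element names and values here are invented for the example.

```python
# Toy sketch of XML as vendor-neutral "glue": one system emits a
# document, another vendor's system parses it with its own tools.
# (Illustrative only; the schema and values are invented.)
import xml.etree.ElementTree as ET

# Document as emitted by "vendor A"
order_xml = """<order id="42">
  <customer>Acme Ltd</customer>
  <item sku="NAS-400" qty="2"/>
</order>"""

# "Vendor B" needs only the agreed schema, not vendor A's software
root = ET.fromstring(order_xml)
customer = root.findtext("customer")
items = [(i.get("sku"), int(i.get("qty"))) for i in root.findall("item")]

print(customer)  # Acme Ltd
print(items)     # [('NAS-400', 2)]
```

Neither side needs the other’s code, only agreement on the document format – which is precisely the interoperability argument being made.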
The world has moved on from “no-one ever got sacked for buying” IBM, Microsoft or whoever. This is a fact acknowledged by the captains of the IT industry as they promote interoperability and standards. However, they are still failing to walk their own talk. Windows 2000 has a place, as do a variety of other operating systems and thin server devices. To sit any one product on a pedestal is to deny the reality, and the opportunity, of using the best tool for the job.
(First published 14 October 1999)
HP eases itself into NAS waters
1999-10-14
Following its announcement in September, Hewlett Packard has demonstrated its SureStore HD Server 400 range of NAS devices, which is due for release on November 1.
Network Attached Storage (NAS) devices enable disk storage to be connected to a LAN, employing thin server technology to provide the bare minimum of an operating system required to serve files to LAN clients. The market for NAS is widely predicted to grow rapidly over the next five years. Figures from DataQuest, for example, estimate the market will reach $10 billion by 2003. Even if it only attains half that, this is an opportunity which storage vendors can ill afford to ignore. The current main player is Network Appliance, whose NAS devices sell for about $20,000 apiece. NetApp was ranked #4 in Fortune’s fastest-growing companies listing, testament to the potential of this technology. Put it this way – the company does not make anything else.
HP has signalled that it wants more than a slice of this pie. Its intention is “to dominate the NAS market”. Time to market has, however, limited the range of devices that the company will be selling, at least in the short term. Initial products will support between 27 and 108GB of data and will only be accessible from Windows NT clients and servers. Costs lie between $5,000 and $10,000.
So – does the NAS market merit all this excitement? The answer is, probably, yes, but not for the reasons that are being touted. NAS devices are seen as a solution to the requirement of adding storage to a network, quickly and simply. Yes – they can do this. However, the real advantage lies in the fact that storage processing and application processing no longer have to be run on the same box. Storage-specific devices, attached directly to the LAN, are inherently more manageable than multi-purpose servers as they do one thing and they are optimised to do it well. Anyone who has attempted to reconfigure a set of general purpose servers (for example, “we want to move the users onto this box, consolidate the databases onto this box and use this box as our messaging server”) knows the hoops that have to be jumped through to achieve those aims. Specialised devices do not solve this problem, but they do make it less likely to arise in the first place.
There is one issue that is not being addressed by any of the NAS vendors at present. Just as NAS is seen in terms of storage rather than in the context of an overall IT architecture, so we still lack the tools to enable us to manage the distribution, accessibility and availability of our information assets. It is an unfortunate reality that storage requirements will always outstrip our capability to increase capacity, so the longer term solution must lie with better management of information as well as increasing the basic resource. NAS can only ever be part of the answer.
(First published 14 October 1999)
Dream is over for UK ISPs
1999-10-15
Just as the world was looking to the UK as leading the way in the free ISP market, the ISPs appear to have dropped the baton.
Let’s take some examples. Currantbun.com, the News International ISP being offered to readers of the mass-market paper The Sun, has been canned, to be replaced by Bun.com. Freeserve shares are now worth less than the IPO price. AOL UK subscribers are threatening to walk out on the service provider. To cap it all, Screaming.net has been exposed by the consumer programme Watchdog, with paper evidence of customers being charged for so-called free services.
What is going on? Well, in reality, the dream could not sustain itself. The fact is that there is no verifiable revenue model for free ISPs, as Freeserve has discovered to its chagrin. Given this, all that is left is the promise of market share or, at least, subscriber share. “Get me the list of subscribers and then they will be my captive audience for whatever comes next.” Trouble is, as we all know, the audience is anything but captive, particularly when the ISP can offer very little that is not being given away elsewhere on the Internet. The site provides news? Well, so does the BBC. And so on. Also, “whatever comes next” hasn’t come, at least not yet. We expect ASPs to be a huge market, initially for businesses but ultimately for consumers as well – however, this is waiting for the bandwidth bottleneck to be removed. We also expect online shopping to take off, but so far it hasn’t given ISPs a differentiator over anyone else. The fact is that the model is the wrong way around: a free Web connection is a bonus to be provided to customers of a given service, rather than getting a motley band of subscribers together (I include myself in this) and trying to sell them anything under the sun.
ISPs in the UK started the free trend and now have little choice but to deliver on it. This they will only be able to do when they offer a truly differentiating range of services to the UK consumer. In the meantime, we can expect a continuation of the turmoil that we are witnessing right now.
(First published 15 October 1999)
What’s in SUN’s sights next year? Microsoft, of course.
1999-10-15
With earnings at 33 cents per share, SUN finished its first fiscal quarter of Year 2000 beating analysts’ expectations by 2 cents. With its ISP successes, its Java story and now StarOffice, the company would appear to be full steam ahead for a successful year.
SUN is, at the end of the day, a platform company. It wants to ship hardware, and not just any hardware. Despite SUN having more or less launched the open systems movement in the eighties, its systems look pretty proprietary. Sure, they run Unix, but it is still the case (despite efforts to the contrary) that SUNs are used to develop and run SUN applications. The miracle of a single operating system flavour never happened – indeed, the closest we might get to that is through Monterey (which is ostensibly IBM’s AIX ported to 64-bit Intel with SCO technology) or Linux, which (let’s face it) is a phenomenon happening despite the best efforts of the major vendors. Remember when SUN refused to allow the Linux sources on SunSite, its download portal? Enough said.
Despite all this, the SUN platform has become “the enterprise platform of the Internet”. This has far more to do with marketing and alliances than technology, though admittedly the high end of SUN hardware has grown up to encompass the kind of transactional capability that we expect of eCommerce sites. What of Java? The programming language has achieved widespread acceptance (according to our own figures, 28% of programming job ads are for Java, compared to 38% for C++, and the gap is closing); however, one of Java’s essential strengths is its portability, so it cannot be seen as one of the technical drivers towards using SUN. All in all, a pretty slick campaign, in this phase of technology where managing the mainstream is at least as important as having a good product.
So that’s SUN’s first goal met. Given the way the Web is going, SUN would have to make some pretty big mistakes to lose its market share. Microsoft won the desktop and, with Windows 2000, will make more inroads into the departmental market, but the enterprise will remain a Windows exclusion zone for the foreseeable future. On to SUN’s second goal, which is dubbed self-protection by the senior execs of the company. The truth is, though, that the company smells blood. The next big battle to be fought is for Application Service Provision – ASPs are ostensibly ISPs who cannot make any money through a basic connection and hence are looking round for differentiating sets of application services that they can offer. At the client end, the operating system is unimportant as it is likely that the majority of applications will have a Java-based front end. Server-side, however, a decision will need to be made over which compatible set of applications can be provided. As always, this boils down to “should we run a Microsoft configuration, or not?” ASPs will want to offer the set of applications which is most appealing to their customers. At the moment, despite most of their functionality being ignored, Microsoft Word, Excel and PowerPoint are the de facto desktop applications. This may be for no other reason than file compatibility, but ASP customers are unlikely to move to a set of services which cannot read their existing files.
Enter StarOffice, which reads MS Office files like a native. It is unlikely that SUN sees StarOffice as a strategic application. More realistically it is a non-application, intended to remove differentiators from the Microsoft-based offering. Microsoft apps need a Microsoft OS, which needs Intel hardware; it’s that simple. If SUN does not challenge the perception that MS Office is essential, then the corporation may end up losing a sizeable part of its ISP business. It is interesting to note Scott McNealy’s recent comments concerning his doubts about a Microsoft breakup – given the size of the prize, it is likely he would prefer one big target to lots of little ones.
(First published 15 October 1999)
Anyone for eTea?
1999-10-15
Few examples offer a clearer impression than this one of how the Web can change the way the world does business. According to Bloomberg, a site is to be launched early next year which will enable tea producers in Africa and Asia to trade tea with UK-based tea companies.
The tea market in the UK has both consolidated and dwindled since its heyday. Forty years ago, tea auctions dealt with up to 150,000 tons of tea per year, whereas last year a mere 24,000 tons were sold. The reasons for this are mainly to do with tea auctions moving to producer countries, such as Kenya. The result has been that only the bigger players (who can set up bases abroad) still remain in the game, with smaller companies having to work through the two remaining UK-based tea brokers.
The Internet looks set to change all of this. An online auction would enable tea producing companies, large and small, to work directly with smaller companies in the UK. Intermediaries would be required only to manage the auctions (hence the site) and for shipping, meaning that the costs of working through such players as brokers would be greatly reduced. It is no surprise that the company intending to set up the site is a brokerage company, Thompson Lloyd and Ewart Ltd. The company intends to run the business in parallel with its brokerage.
Using the Internet as a transport mechanism, online trading would enable costs to be reduced and savings to be passed on to producers and customers alike. In fact, what is happening is a return to the more traditional modes of trade, with small traders dealing directly rather than relying on the few companies that remain since “value” became the catchword and consolidation the practice. The Internet is the great leveller, reducing the costs of both entry and ongoing business, allowing large and small to compete on the same playing field.
What is more, the Internet is global. Tea trading has occurred in the UK because it always has, but this no longer needs to be so. Suppliers to tea-drinking markets around the world will be able to profit from the global accessibility of online trade in a way not possible before. Who knows – they might even start drinking it in Boston.
(First published 15 October 1999)
Sun sets up autonomous Forté, completes its Java portfolio
1999-10-25
In a similar fashion to IBM with Tivoli and Lotus, SUN is to give Forté semi-autonomous status. The Forté organisation will report to the head of software at SUN; however, there will be no moves to change the internal organisation of the division, beyond establishing the communications channels that will enable Forté to work with other parts of SUN.
This is clearly good news. Forté is in the right place at the right time with the right products – reasons why the company was so attractive to SUN in the first place. Forté recognised that its traditional market, bespoke enterprise application development tools, was insufficient to guarantee the company’s future success. With Fusion, it moved into the Enterprise Application Integration space; more recently, with SynerJ, it released tools to support the Java 2 Enterprise Edition, or J2EE. Forté’s original vision, of producing scalable, reliable platforms for enterprise development, has been retained and enhanced to meet today’s demands. Now, it seems, the organisation will be able to retain this strategy as it moves forward.
Sun Microsystems is a canny company. Still recognised primarily for its hardware, it has quietly been building up a software portfolio based around the unexpected success of Java as an enterprise language. This fortune is as much down to IBM as anyone; nonetheless, Sun has been putting in place the elements it needs to benefit directly from Java. From its acquisition of NetDynamics, through its developing relationship with AOL/Netscape, to the more recent purchases of Forté and NetBeans, Sun has assembled the portfolio of products it needs to start generating real revenue from the Java language. Services will also play a big part, but these would not be possible without the comprehensive set of products that Sun now has at its disposal.
(First published 25 October 1999)
ASPs – how IT can learn from the Comms guys
1999-10-25
If the rumours are true, Application Service Providers, or ASPs, are going to dominate the way in which IT services are procured and delivered. At Bloor Research, we have reason to believe the hype. As for the vendors, they have a stark choice to make. Some are embracing the ASP model whole-heartedly, while others are hedging their bets.
Networking and comms vendors are in an enviable position concerning ASPs, for two reasons. The first is that they stand to gain whatever happens. The provision of application functionality over the Web requires a networking and communications infrastructure capable of sustaining a multitude of reliable data pipes between an organisation’s internal networks and the outside world. As it becomes business critical, the Internet connection will no longer be under such tight constraint, whether by budget or by policy. Besides, ASPs are only one area which requires an enhanced networking infrastructure: there are plenty of other usage models, not least business-to-business eCommerce, which will ensure that the networking revenue stream is protected. The second reason is that ASP capabilities are defined primarily by which technologies networking vendors are able to deliver. Application and systems providers are dependent on networking providers, a fact which can be exploited by the latter.
So, what about the traditional IT companies – the manufacturers of systems hardware, systems software and application software? Some, such as IBM, SUN, Oracle and Microsoft, are adopting the ASP model wholesale. Of course, the industry giants can afford to develop themselves in the ASP market with much less risk than smaller players. This week, Microsoft announced a partnership deal with Cisco which will enable small-to-medium sized businesses (typically sub-100 staff) to use Microsoft software over the wire. It is likely that more and more vendors will jump onto the ASP bandwagon; however, in doing so they are likely to miss one fundamental point.
The ASP model is currently seen as a facility to deliver applications across a network, thus reducing costs. This view is valid in the short term, but it ignores the full potential of the ASP concept. Once applications can be delivered, they can be combined and assembled in previously inconceivable ways. Communications services, application services and business services will all be delivered in the same way, enabling even more combinations. This kind of shake-up is already being experienced in the telecommunications marketplace, as convergence finally starts to happen. Issues faced by these companies include how a properly managed service should be developed, provided and, most of all, billed for. These same issues are to be the bane of the ASP: vendors prospecting for a chunk of ASP real estate will do well to watch the successes and failures in the comms market, and learn from them.
(First published 25 October 1999)
Coppermine braces itself, without Rambus
1999-10-25
Yesterday’s announcement by Intel concerning the copper-based version of its Pentium III chip has one major objective: to knock AMD off its perch. AMD’s flagship product, the Athlon processor, currently claims the high ground as the fastest PC processor. The importance of this position is as much to do with marketing as anything; however, so is the whole of the IT business.
The popular press, at least in the UK, has warmly received the Athlon, as have the vendors IBM, Dell and Gateway. These factors led to AMD’s better-than-expected financial performance, reported a fortnight ago. Intel has already swept away a number of chip companies, such as National Semiconductor, who found it difficult to fund the levels of research and development necessary to keep up. Not so AMD, which has proved itself capable of beating Intel at its own game.
Both Intel’s and AMD’s processor marketing is geared around “being the best”, and neither is likely to let the other get away with being top dog for long. AMD announced the opening of a new fabrication plant in Dresden last week, which will be capable of manufacturing the next generation of Athlon chips – pre-production versions have been running at 900MHz. The two companies are likely to leapfrog each other to the 1GHz holy grail, and the first to achieve this at production scale is likely to gain huge kudos as a result.
Intel’s announcement should be seen in context – it is playing the game for which it invented the rules. The only downside for technologists is the absence of Rambus support from the launch. As we reported in August \link{http://www.it-director.com/99-08-06-2.html,(see article)}, Rambus has some advantages, particularly for the server. While this is a blow for some PC makers who were planning to ship the new processor and chipset together (and who will now have to wait), it remains to be seen whether this will affect the market as a whole.
(First published 25 October 1999)
VDSL beats back Broadband bounds
1999-10-26
The fat pipe is getting fatter. Very-high-bit-rate Digital Subscriber Line (VDSL) technology promised by Alcatel and Texas Instruments offers bit rates of up to 60Mb per second. And this is no pie-in-the-sky, sometime-in-the-next-five-years technology. Sample VDSL chips are already being shipped to equipment makers, with products expected some time next year. Compare this to VDSL’s little brother, Asymmetric DSL (ADSL), which can only (only!) deliver 1.5Mb/s.
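To put those bit rates in perspective, a back-of-envelope calculation (our own illustration; protocol overheads and contention are ignored) shows what the jump from ADSL to VDSL means for moving, say, a 650MB CD image:

```python
# Back-of-envelope transfer times for a 650MB file over each link
# (illustrative only; real-world overheads are ignored).
def transfer_seconds(size_mbytes, rate_mbits_per_sec):
    return size_mbytes * 8 / rate_mbits_per_sec  # 8 bits per byte

adsl_time = transfer_seconds(650, 1.5)  # roughly 3467s, nearly an hour
vdsl_time = transfer_seconds(650, 60)   # roughly 87s
print(f"ADSL: {adsl_time / 60:.0f} min, VDSL: {vdsl_time / 60:.1f} min")
```

Forty-fold more bandwidth turns an hour-long download into a minute and a half, which is what makes applications like video feeds plausible at all.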
Applications for VDSL already being discussed include running exclusive digital TV channels – for example, a corporation may obtain a training channel with a predefined programme, direct from the digital broadcast site. In fact, what we are seeing is the stretching of the boundaries of the managed service – a video feed is one application, which will be coupled with voice and data feeds to give a truly comprehensive managed service. It is surprising that no-one has mentioned virtual reality, which, even if it remains a niche hobby, will benefit hugely from the bandwidth.
Oh, but hang on. This really is pie in the sky, at least as far as the UK consumer is concerned. In some parts of the US – Boston, for example – cables are being laid like they were going out of fashion and ADSL is getting its own display area in Radio Shack. Meanwhile, back in Blighty, we have a pseudo-monopoly in the guise of BT, which has just finished an ADSL trial and is expected to offer a reduced service, well, maybe one day, at least in the London area. In the UK, we can dream about the advanced capabilities that new technologies such as VDSL will enable. However, the reality is likely to remain virtual for a good time yet.
(First published 26 October 1999)
Orange dreaming of a WAP Christmas
1999-10-26
UK mobile telephone operator Orange is to launch a Wireless Application Protocol (WAP) service in November, ready for the Christmas rush. The up-front services will be basic, but given the general excitement about WAP this is unlikely to be the case for long.
The WAP forum was launched by Ericsson, Nokia, Motorola and Phone.com, with the aim of making information-based services more accessible to the mobile user. The forum now has nearly 200 members and is on the brink of real products, a number of which were demonstrated at Telecom 99 two weeks ago. Orange’s new service will be based around a WAP-enabled phone from Nokia, with initial information services including sport, news, weather and a business directory.
What is really, really interesting about the Orange service is the planned charging mechanism for services. Initially, WAP data calls will be charged at less than the price of a voice call; however, the plan is to cease charging for the connection and to start charging for the services themselves. According to Rory Maguire, Strategic Relations Manager for Orange (quoted on Excite UK), basic content will remain free but “premium content” will be charged for.
This is fascinating stuff, as not only does it turn the existing mobile phone charging model on its head, but it also flies in the face of other, wire-based Internet services, which already give away a great deal of what we might term “premium content.” Consider share prices, for example. It used to be the norm to charge Web users a subscription before they could access share prices. The model then changed, as this information was given away by portal sites (such as Yahoo) as a way of luring custom towards other, advertised services. Money could still be made, however, from the share deal. Now, however, services are setting up which do not even charge a fee for the share deal, as it is reckoned that hedging the funds and bulk dealing are worth far more than the fifteen quid transaction fee. In other words, the tide is rising: whenever there is a higher-level way of making money, the lower levels are given away. There is a caveat: as you move up through the levels, you need more information (and expertise) – in the share dealing example, the free trading is only open to “experienced traders.”
So – what we have is two rules:
Rule 1 – a service will be given for free as long as the supplier of the service stands to gain
Rule 2 – if the customer needs help, then such help is also a service, which must be paid for unless covered by Rule 1.
These two rules should come as no surprise, as they are the twin pillars of retail pricing, understood and applied instinctively by customers and suppliers alike. When buying a car from a dealer, for example, we do not expect to pay separately for the advice, guarantees and so on. If we buy a car from a trade outlet, we will be able to get it more cheaply, but at the expense of having less advice and expertise, and fewer guarantees, to go on.
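For what it is worth, the two rules collapse into a tiny decision sketch (our own hypothetical encoding; the service names and flags below come from the share-dealing examples above, not from anything Orange has published):

```python
# Hypothetical encoding of Rule 1 and Rule 2 above.
def customer_pays(supplier_stands_to_gain):
    # Rule 1: a service is free whenever the supplier stands to gain
    # by giving it away.
    # Rule 2: help is itself a service, so the same test applies to it.
    return not supplier_stands_to_gain

services = [
    ("share prices on a portal", True),   # lures custom to advertised services
    ("free share dealing", True),         # hedging and bulk dealing pay for it
    ("premium WAP content", False),       # no higher-level revenue behind it
]
for name, supplier_gains in services:
    print(name, "->", "charged" if customer_pays(supplier_gains) else "free")
```

The whole model hinges on one question: does the supplier make money somewhere further up the chain?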
Orange may intend to charge for premium WAP services. Indeed, it is ideally placed to do so, as it controls both the data feeds and the billing mechanisms. However, when doing so it should take the rules into account.
(First published 26 October 1999)
Loi 495 – allez le source ouvert!
1999-10-26
Open source received a boost this week, as two French senators proposed a law concerning the use of such software by national and local administrative systems. The driver is one of freedom of access to the software, but the issue is more likely to be one of cost. As reported on \link{http://www.theregister.co.uk,The Register} yesterday, a \link{http://www.senat.fr/grp/rdse/page/forum/index.htm,discussion forum} has been set up on the issue. And the feedback is good.
Comments such as “Un pas vers la démocratie électronique” (a step towards electronic democracy) give an indication of some of the feedback, however this is coupled with fears about the impact on business. It is worth reflecting on what such a move might entail, should it be adopted.
First of all, the imposition of such a law implies that open source is here to stay. There are plenty who believe that it is, but others say that it is an interim stage, made possible by government subsidy of academia and the political manoeuvres of companies. (Personally, I believe that old software technology can be given away, while new technology developments need to be paid for, but that’s opinion, not analysis.) However, given the existence of open source, the question moves to one of delivering value.
Value is benefit minus cost. Benefit from software may be derived from its functionality and its ability to interoperate with other packages. Costs are financial, but also relate to the intangible costs of procurement, implementation and maintenance. Costs also relate to risk, driven either by security or by safety criteria. So – what are the impacts of the open source model? Clearly there is a bottom-line cost reduction. The intangible costs require a little more attention: an open source package can lead to reduced cost only if it offers similar functionality to existing packages whilst reducing the costs of implementation and maintenance. This is the area which is less clear, and which must be judged on a package-by-package basis.
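As a back-of-envelope illustration of the value sum described above (our own sketch; all figures are invented), the licence saving of an open source package can be reinforced, or eaten away, by the intangible costs:

```python
# "Value is benefit minus cost", with cost split into the categories
# named in the text. All figures below are invented for illustration.
def value(benefit, licence, procurement, implementation, maintenance, risk):
    return benefit - (licence + procurement + implementation + maintenance + risk)

# Proprietary package: licence fee, but familiar implementation path
proprietary = value(benefit=100, licence=30, procurement=5,
                    implementation=10, maintenance=10, risk=5)
# Open source: zero licence fee, but higher assumed implementation
# and maintenance costs in this scenario
open_source = value(benefit=100, licence=0, procurement=5,
                    implementation=15, maintenance=15, risk=5)
print(proprietary, open_source)  # 40 60
```

In this invented scenario open source still comes out ahead, but only because its extra implementation and maintenance costs stay well below the licence saving – exactly the package-by-package judgment the text calls for.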
The issue might stop there if it wasn’t for the appearance, over the horizon, of the ASP. Application Service Providers are to do for software what outsourcing did for IT departments, namely take the responsibility away and enable companies to concentrate on their core business. It may seem a logical step for public organisations to adopt open source where they can, but an even more logical move is to pay a provider to supply software services over the wire. In this way, an organisation can concentrate on its functionality requirements without being distracted by issues of package cost, interoperability or maintenance overhead. It is then for the ASP to decide what packages to buy, and whether or not to use open source products. The ASP market is still in the crib but, for any organisation wanting to keep control of costs whilst gaining access to the best services, it may well prove to be the answer.
(First published 26 October 1999)
Future of UK Enigma site continues to confound
1999-10-27
All is not well at Bletchley Park, the World War II code-breaking centre, former stamping ground of Alan Turing and location of the world’s first programmable electronic computer, the Colossus. Controversy centres on what should be done with the site. Once again, it seems that Britain is failing to deal with its IT heritage, let alone its wartime memory.
What’s going on is, unsurprisingly, an everyday story of folk. The battle lines were drawn two weeks ago between the old guard of Park trustees (seven out of the twelve) and Christine Large, who was brought in as chief executive to oversee the process of bringing the currently dilapidated park and outbuildings up to scratch. Mrs Large’s plans to transform the park into a modern museum, conference and education centre were seen as too ambitious by the trustees, who were wary of the site becoming “a high-tech theme park”. Following a series of wrangles and a vote of no confidence by the trustees, Mrs Large was sacked from the board, only to be reinstated last Thursday.
As if this wasn’t complicated enough, Mrs Large has rejected the reinstatement offer unless a new Board of Trustees is created. This is not an outlandish suggestion, as the Board itself recognised the need to disband and reform in May this year. Six months later, the impression is that the Board is reluctant to follow this through.
To an extent, the reasons for this and the current situation are less relevant than the consequence, which is that nothing is getting done. Time and again we have seen personalities get in the way of progress, in this case towards recognising our heritage. As with the failure to procure the funds to build a statue of Alan Turing himself, we are stymieing our own potential to get things done and, in this case, to broadcast to the world our own successes. Britons are experts at hiding lights under bushels – humility is a good thing, but so is recognition.
People need to have visionary focus and to act accordingly. From this perspective it would appear that Christine Large is right – there is no point in returning to the post of Chief Executive if the current composition of the Board is likely to put the brakes on every step of the way. For the sake of our IT and wartime heritage, we wish Bletchley Park the good will and commitment it needs to ensure it is developed with both sensitivity and vision.
(First published 27 October 1999)
Xerox’s 18-month European Plan
1999-10-27
Following the company’s seven-billion-dollar slump in value, caused by its less-than-successful Q3 announcements earlier in October, document company Xerox explained itself this week.
Despite the company’s huge customer base (with over a million customers in Europe alone), its main problems (so says Xerox) were due to its out-of-date and badly targeted sales strategy, which was focused on the direct sale (even when dealing with the largest corporate deals) to the detriment of other channels. Also, the company was organised on national lines, with “each country operating its own fiefdom”, according to William Goode, Deputy Managing Director of Xerox Limited.
Xerox has every intention of changing this; in fact it has spent the past eighteen months consolidating its European operations, for example by opening call and manufacturing centres in Ireland. The company is also setting up two new organisations: the Industry Solutions Organisation (ISO) will concentrate on major accounts, and will be organised by sector across Europe, whilst the General Marketing Organisation (GMO) will take over the direct channel. Initial sectors for the ISO will be Manufacturing, Financial, Graphic Arts and Public Sector; the organisation will also encompass Xerox’s professional services organisation, named ICSI. The showpiece of these efforts is a pan-European SAP implementation which will support all of the company’s back office operations. When it is complete, in eighteen months’ time, it is destined to be the largest single-instance ERP implementation in the world.
For the committed Xerox customer, these developments are likely to be good news as they will result in better account management and more co-ordinated services. For Xerox itself, the company is moving into the IT space from being an office equipment company, and stands to benefit from the opportunities that this offers – not least the marketability of the showpiece SAP implementation. Despite all this, the timescales are still very long and some pieces of the puzzle lack definition, for example the shape of the sales and marketing operation remains unclear. Also, the ongoing changes are likely to impact on the company’s ability to take advantage of any opportunities, at least in the short term. From the customer perspective (and as an ex-customer of Xerox), organisational change is nothing new for Xerox, so customers are likely to keep their counsel until they see real benefits coming through.
All in all, Xerox is still making money and the company is orienting itself to be better positioned for both its markets (which are, more and more, IT related) and its customer accounts. No change was not an option. It is just a shame that it is all taking so long.
(First published 27 October 1999)
Micromuse setting their sights high
1999-10-28
Enterprise Management software company Micromuse saw its shares soar by 30 percent as the company announced a doubling in its fourth quarter revenue at the end of last week.
From lowly beginnings in London, UK, the company launched on the Nasdaq in February 1998. In the early days, the company had a strong relationship with BT. The company has stuck with telecommunications as one of its core markets, a move which has stood it in good stead for the proliferation of ISPs and the eCommerce revolution.
Micromuse’s core product is NetCool, an event management system which is designed to enable events to be received from any device. This simple principle has allowed the NetCool product to be extended across a whole raft of protocols and device types, including SNMP devices, databases, WAN/Voice devices and TCP/IP protocol connections. Application interfaces (e.g. for SAP) are currently being considered.
Based on this core product, Micromuse have driven their product range in two directions and it is this which is getting the analysts excited. The first is NetCool/ISM, which simulates web site requests in a variety of protocols and feeds the results into the NetCool framework. This enables the monitoring of Web site performance, from the basic request-response level up to more complex series of transactions. Monitors can be situated anywhere in the world (to gauge international differences in performance) with the results fed back into a central NetCool installation. The second product is Impact, which uses historical records, configuration information and external databases to build an information set to aid the resolution of a given fault. Impact won a “best in class” award at NetWorld InterOp this year.
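The principle behind NetCool/ISM – issue a scripted request, time the response, and feed the result back into the management framework as an event – can be sketched in a few lines. To be clear, this is an illustrative sketch only, not Micromuse code: the `probe` function and the event fields are invented for the example, and the fetcher is injectable so the sketch can be exercised without a live site.

```python
import time
from urllib.request import urlopen

def probe(url, timeout=10, fetch=urlopen):
    """Issue one synthetic request and return a monitoring event.

    `fetch` defaults to a real HTTP request but can be swapped out,
    so the probe logic can be tested without network access.
    """
    start = time.monotonic()
    try:
        with fetch(url, timeout=timeout) as resp:
            status, ok = resp.status, True
    except Exception as exc:
        # A failed request is itself a valuable event for the console.
        status, ok = repr(exc), False
    return {
        "url": url,
        "ok": ok,
        "status": status,
        "latency_s": round(time.monotonic() - start, 3),
    }
```

A central installation would simply collect such events from probes sited around the world and compare latencies by region.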
The market for Web-oriented management tools is burgeoning, with a variety of old and new players jumping on the bandwagon. Micromuse have a head start, in that they have worked in the comms space since their inception. They understand the issues and they already have an enviable set of customer and partner relationships including AOL, AT&T and BT. The NetCool/ISM product is based on the company’s existing platform, which is a differentiator from Tivoli, for example, and which isn’t a possibility for newer vendors such as FreshWater. Investors are already sitting up and taking note, all of which paints a rosy future for Micromuse.
(First published 28 October 1999)
Thin server appliances drive OS shakeup
1999-10-28
Bringing ease of installation and administration, thin servers look set to steal a sizeable proportion of the server market. Market analysts are predicting that between 3 and 8 billion dollars will be spent on such devices by 2003. For speed, usability and cost reasons, thin server manufacturers are keen to keep operating systems as scaled down as possible, prompting an inevitable shakeup in the OS market.
What is a thin server? Also referred to as a server appliance, a thin server is a server which does a limited number of jobs and does them (at least, this is the intention) extremely well. Examples are:
• Network Attached Storage (NAS) devices for file management, from manufacturers including Network Appliance and Hewlett Packard;
• Email servers, from Compaq and Mirapoint;
• Web proxy servers, from Novell (using Compaq hardware);
• Database servers, from Oracle.
The thin server model is attractive to the IT Manager for two reasons. The first is that it enables additional resource or specific functionality to be added to a network simply and effectively. The second is that it moves away from the multi-purpose server model, where servers often have conflicting demands and incompatibilities posed by the different packages they run. The target market for thin servers is seen as the small business with fewer than a hundred people, but this need not be the case: Yahoo email, for example, boasts the logo “powered by Network Appliance.”
As for the operating system, there is no consensus on an ideal platform for a thin server. Neither should there be – the thin server operator is more interested in the functionality than the platform, which is largely hidden. Network Appliance devices use a proprietary kernel, flavours of Unix are visible in both Mirapoint and HP’s offerings, Compaq’s email servers run Microsoft Exchange on NT and Novell’s proxy, unsurprisingly, runs NetWare. Mentioning no names, this is nonetheless a blow to any software vendor whose marketing depends on promoting the relative advantages of its own operating system. In fact, the main issue for the OS vendors appears to be who can secure the best deals with the hardware companies. Not coincidentally, this trend bears a striking similarity to the Symbian/Palm/WinCE wheeling and dealing in the handheld and mobile device arena.
We have seen very little from Microsoft and Intel since their thin server alliance announcement in April. Intel were hugely successful with their “Intel Inside” campaign, but this was as much down to their incumbent position in the PC market as anything. Just as with set top boxes, games consoles, mobile phones and PDAs, in the device-driven, thin server market it is unlikely that “WinTel Inside” will have the same cachet.
(First published 28 October 1999)
UK Govt PC plans miss the mark
1999-10-29
It is interesting to compare two recent announcements: that of the UK government offering PCs at low cost to low income groups, and that of the new wave of Web-ready appliances to hit the markets next year. Essentially, this is about bridging the current and future business models for consumer IT. The old model states that IT facilities in the home are centred around the PC, wherever it may be situated. The new model, of ubiquitous IT, says that a spectrum of computing devices will exist throughout the home.
UK Chancellor of the Exchequer Gordon Brown expressed his desire to see up to 100,000 refurbished PCs rented out to low income families for a fiver a month. This move has caused controversy over who would pay the phone bills for Internet access. Also, it could be argued that a 486-based PC is still adequate for the Web, and these are virtually being given away today in second hand magazines. Still, the principle of making low-cost computer facilities available is sound.
Meanwhile, it was reported in the Wall Street Journal that Dell, Compaq and Gateway would be launching Internet access devices next year, none of which would run Microsoft software. These devices are to be priced below the cost of the PC and it is likely, if their popularity grows, that prices will tumble. Here we are seeing the kind of devices that could be given to subscribers of internet services, possibly even undercutting Gordon Brown’s offering of £5 per month. Web devices will not be able to run the same variety of software as the PC; however, for most people sufficient facilities will be available directly from Web portals such as Yahoo and MSN.
There is life in the PC yet, but in two years’ time it will not be the only device that people use to access the Web. Fears about an information underclass caused by a lack of computer facilities may be premature.
(First published 29 October 1999)
Amazon grace under pressure
1999-10-29
Despite a huge increase in revenues from $154m to $356m, Amazon.com reported a four-fold increase in operating losses, from $21m to $79m. Amazon has seen increases in just about everything else from new accounts to repeat business, but is also seeing increasing pressure on profit margins, both from rising internal expenses and the inevitable external competition. Amazon continues to invest in new product lines and in new parts of the world. The question remains – how much longer can it keep up this momentum?
It is commonly held wisdom that it is healthy for dot-com companies to post a loss, the argument being that the losses represent investment in the base of customers who come to use Amazon by default – hence the importance of both new accounts and repeat business. However, the Web community is a notoriously fickle bunch. Just as Lotus never expected to lose its spreadsheet monopoly to Microsoft, so Amazon could lose its customer base, however unlikely that may now seem. There are three ways in which this could happen.
The first way is system failure. Lack of service is very quickly jumped on by the Internet news, as eBay, eTrade and Charles Schwab have all discovered in recent times. eBay’s failures caused a reported mutiny of customers to Amazon; similarly, eTrade has lost custom to other online brokers, both through downtime and through processing errors (for example, those which caused potential Red Hat beneficiaries to lose out). The Internet does not take any prisoners: news of failure and retribution spreads like wildfire. Amazon’s systems are holding up well – they were designed for scalability and Internet access. However the same could be said for eTrade and eBay, of which the latter is famously hanging on by its fingernails to support its burgeoning community. Failure of a dot-com company results not only in customer flight but also shareholder concern, and this is the last thing any tech stock wants in the current, volatile market conditions.
The second route is competition. The Internet purchasing model is still very much built around lock-in, such that it is easier to stick with one supplier than move to another. Amazon’s one-click purchasing is an example of this. Moves are afoot, however, to make it simpler to deal with a variety of supplier sites: Snaz.com, for example, offers a multisite shopping basket for exactly that purpose. To water down lock-in is to do the same to Amazon’s key performance indicators of customer retention. Amazon may choose not to play, but this could have the effect of locking the company out, of siting Amazon outside the mall, as it were.
Thirdly, we have the unknown quantity. Amazon exists by virtue of riding the Internet wave long before other companies knew it was coming. Much as we like to speculate, none of us really knows what the next wave will be. Plenty of so-called giants have been washed away by the tidal waves of technology, and Amazon is no more immune than any other company.
Some are saying that the really big hitters are still to join the fray. Companies like WalMart are on the point of launching their Web strategy – this may happen with a whimper, but it could well be with a bang. Industry and financial analysts alike have so far been unable to predict how things are to turn out. This is certainly no time for dot-com companies, Amazon included, to get complacent.
(First published 29 October 1999)
Mobile gives Voice Recognition its killer app
1999-10-29
Slowly but surely, voice recognition technology is gaining maturity. It may still be in the domain of the comedy club, but will soon form an integral part of the services and devices that make up the IT infrastructure.
There are currently three sectors for voice recognition technology. The first, and best known, is PC-based voice recognition, aimed mainly at the consumer market. Whilst this is achieving some success in the Far East (largely due to the difficulties in providing keyboards for eastern character sets), it is still a niche market. As discussed in our previous article, dictation and navigation are not powerful enough reasons for the large-scale adoption of products such as Dragon Dictate and IBM ViaVoice. The second sector is the embedded application sector, in which specific products such as medical and manufacturing equipment are enhanced to include voice input. Thirdly, we have telephony-based products. It is this market which is evolving the most rapidly, and from which the popularisation of voice recognition will occur.
Many communications companies have had their own voice labs for several years, aimed mainly at “enhancing the telephony experience”. As comms and IT have converged, these technologies have been applied to computer applications. For example, last year Lucent announced a pact with Unisys, in which Lucent’s technology would be used to develop an integrated speech recognition software package. Start-up companies such as SpeechWorks and Nuance have been set up with the objective of providing voice-enabled application development tools. So far, the results have been fair to middling: in general the best results come from the systems with limited scope (such as eTrade’s and BT’s voice-driven stock systems). Herein lies the danger: prospective customers remain skeptical, while there is a lack of installed systems with sufficient wow factor to sway them.
In line with the rest of the application development industry, component-based development is the adopted direction of the major players. SpeechWorks have released a comprehensive set of Java and ActiveX applets to recognise common structures such as names, addresses, dates and so on, as well as applets for standard applications such as SAP. Similarly, a few days ago Nuance released their first set of voice components, as Java Beans.
Voice recognition is no longer about the quality of the recognition engine, as this technology is sufficiently advanced and improving all the time. Rather it is about the ability to automate dialogues which may occur between end user and machine. Just as with the Web, the voice caller can very easily get turned off by requests for irrelevant or badly ordered information so usability becomes a primary issue, a fact recognised by SpeechWorks’ own development methodology. We are likely to see a number of voice-controlled applications in the future, based on the component model, and the chances are they will become acceptable as their quality improves and their numbers increase. The death of the touch-tone interface is an imminent and welcome consequence of this (though it should be remembered that the bad press of DTMF is as much down to the interface design as the technology). Call centres are the primary market for this: call centre managers will find voice technology indispensable in achieving their “improved service, reduced costs” mantra.
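The dialogue-automation point can be illustrated with a toy slot-filling loop of the kind a voice application might build from recognition components: prompt for each piece of information in a sensible order, accept the first valid answer, and hand off gracefully when the caller cannot be understood. The function, slot names and validators below are invented for illustration and are not taken from any vendor’s toolkit.

```python
def run_dialogue(slots, answers):
    """Fill each slot in order from the caller's successive attempts.

    `slots` is an ordered list of (name, validator) pairs; `answers`
    maps each slot name to the attempts the caller made. If a slot is
    never filled, return None (i.e. hand off to a human operator).
    """
    filled = {}
    for name, validate in slots:
        for attempt in answers.get(name, []):
            if validate(attempt):
                filled[name] = attempt
                break
        else:
            return None  # caller could not be understood for this slot
    return filled

# Example: a stock-dealing dialogue with two invented slots.
slots = [("account", str.isdigit), ("ticker", str.isalpha)]
result = run_dialogue(slots, {"account": ["abc", "12345"], "ticker": ["IBM"]})
# -> {"account": "12345", "ticker": "IBM"}: the misrecognised "abc"
#    is rejected and the caller's second attempt is accepted.
```

The usability concern in the text maps directly onto the ordering of `slots` and the tolerance of the re-prompting loop.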
Even so, this use of voice technology is still concentrated more on the enterprise than the individual. The real, mass market, killer application for voice comes with the merger of the mobile phone, a device which we are used to talking to, and the PDA, a device we are used to looking at. The recent announcement by IBM and Nokia serves to illustrate the potential of the technology, which will one day come as standard in handheld devices, particularly as it includes the definition of an XML-based standard for voice. Agreed, a truly portable voice solution which relies on the power of the device alone is a long way off; however, advances in wireless technology mean that the recognition can occur on the server side rather than on the device itself. In the keyboard-unfriendly mobile world, the use of voice for navigation and search becomes the better option. Voice entry will sit alongside text and pointer entry, with each being used in the most appropriate manner.
For voice recognition vendors, the future is bright for those that ignore the skeptics and embrace the broadest possible vision of the future. Recognition engines and application development tools are both OEM markets, suitable for the systems integrators. There does exist a third market, which bears a striking similarity to the Web site development market currently occupied by products such as NetObjects Fusion and Microsoft FrontPage. As voice-enabled mobile devices gain presence, site developers will require tools to enable the receipt of speech commands. There are very few players in this space at the moment – SpeechWorks is one of them. However there is still plenty of time for the other voice recognition companies to catch up.
(First published 29 October 1999)
November 1999
This section contains posts from November 1999.
Silver Surfers fuel UK Net growth
1999-11-22
It is generally agreed that surveys should be taken with a pinch of salt, particularly when commissioned by organisations standing to gain from the results. As results from three surveys appear on the same day, it is interesting to compare the findings.
First off, High Street banker Barclays commissioned a survey from NOP, which concluded that 41% of people surveyed were willing to switch to Internet banking within a year. It also noted that 36% of over-65s said they would like to bank online.
Second, a survey by Continental Research found that although 11.1 million people use the Internet regularly (once a month or more), only 1% of all retail purchases were being made over the Web. It also found that average and lower-income families were being left out.
Third, a Which?Online report concluded that more than a million people over the age of 55 are regularly using the internet, primarily as an information source but also to chat.
Of course these are selected highlights of summaries of the surveys; however, it has to be said that the different results bear each other out. Particular interest has been garnered by the “Silver Surfer” category. To date, it seems that insufficient attention has been paid to the group of people with more time, and maybe more wisdom, on their hands. Here is proof, if any were necessary, that the Internet is more than a technology for the information-hungry whizz kids of our time.
For commercial organisations such as banks and retailers, the raison d’être of the Internet is to make money. While the current reality is that 99% of consumer cash is still being bagged at traditional outlets, the potential exists for a substantial percentage to be transferred to Internet-based transactions. Buoyed by the proof that the Web exists for more than just the propeller-heads, companies are trying every trick in the book (and plenty of new ones) in their attempts to garner a better portion of this market. This is currently geared around Web sites: fresh from another round of high street branch closures, for example, Barclays has announced new features for its online banking service. Sainsbury’s, the beleaguered supermarket chain, has also stated its wish to “launch, develop and own the best portals” in the food, drink, home and garden categories of Internet shopping.
The fact remains that nobody really knows what will trigger the consumer majority to start shopping online en masse. An exponential curve is assumed, but this will not happen without significant effort on the part of the commercial organisations to make it so. It is unlikely, for example, that the PC-based Web site is the epitome of what can be done with the Internet. Bandwidth is still a problem which is likely to limit mass market adoption for another couple of years at least. Finally, the vision of most organisations remains sadly limited – Barclays’ list of online banking features falls well short of the blueprint for online banking described by Patricia Seybold in her book “Customers.com”.
Overall we have a way to go yet. Initial excitement and take-up by the hard core will undoubtedly give way to a wider adoption, but both technology and functionality need to improve substantially before this will happen.
One last word about “average and lower-income families.” Various initiatives have been announced, such as Gordon Brown’s PC rental scheme, but they all ignore one significant fact. The Internet is as much about reducing costs as it is about increasing profits. Central government will, sooner or later, work out how much it could save by providing Internet access to all. If this is coupled with the ongoing cost reductions in equipment and connections, then those on lower incomes should be able to take their place in the early majority, when it comes.
(First published 22 November 1999)
Real Time Linux to become a Reality
1999-11-22
For Linux to take over the world, it must prove itself viable in several as-yet uncharted territories, including the desktop, enterprise server and embedded platform. Whilst it still has a way to go in all these areas, an announcement from Lynx suggests that a solution for the embedded platform may be nearing readiness. Lynx expects to release BlueCat Linux by June of next year. This version of Linux will be able to support embedded and hard real time applications (as opposed to pseudo-real time applications) and will be released as open source.
What is significant about this announcement is that Lynx is already a vendor of a POSIX-compliant, real-time operating system and so clearly has some experience to bring to the party. Secondly, the whole issue of embedded systems raises the question of “What’s in the box?”. Embedded systems are not visible to the user of the device concerned, and it is this user that is the most susceptible to marketing. Selection of an embedded OS is based on a number of factors, including functionality, manufacturer credibility and access to trained staff. If Lynx can successfully prove itself relative to these three factors then it may have enough to go up against the more marketing-oriented approaches favoured by non-Linux vendors.
It is both unlikely and unreasonable that Linux would take over the world. Microsoft have already proved beyond doubt that the concept of a one-size-fits-all operating system is fatally flawed. However Linux has already proved itself as a capable, general-purpose operating system. If Lynx can demonstrate its credentials in the embedded market, this will further reinforce acceptance of open source Linux in other areas.
(First published 22 November 1999)
DNA points to chasm in the law
1999-11-22
Oh dear, oh dear. Few areas show more clearly how woefully inadequate the law is to meet the needs of this information age than the subject of DNA data. As recently discussed on Wired, for example, only three states in the US have laws prohibiting the unauthorised release of DNA material which could be used for testing. Similarly, in the UK DNA testing looks likely to become the norm quickly. These weaknesses, coupled with the unseemly rush towards DNA testing, suggest something needs to be done fast to ensure that civil liberties are protected.
The human genome project is progressing stunningly well, so it seems likely that the whole of the DNA structure will be identified and mapped within ten years. Such advances are impressive, as are the benefits of such research. As usual, however, the protective measures against the misuse of such information lag behind the research. There will be an inevitable period where, in certain areas, Joe Public is at a disadvantage relative to the early adopters of the new technology. One such area looks set to be law making itself.
Several questions arise. As with the unauthorised taking of fingerprints, can officers legally take tissue samples without consent? As with the debate over ID cards, would those refusing to participate be treated differently from those who consent? As the capabilities of DNA testing develop, exactly what information should be made available to the police – genetic defects, for example, or genes which indicate violent or other tendencies? This is not meant to be a Luddite diatribe, but the fact remains that many, many questions remain unanswered, and all the while the technologies are improving and policing organisations are looking at how they might benefit.
It is high time to open these questions up to broader debate. Police authorities and law enforcement organisations worldwide would seem to be ahead of the wider public on these issues; the public would do well to catch up.
(First published 22 November 1999)
IP becomes carrier protocol of choice
1999-11-23
Some interesting figures have come out of BT, which is investing half a billion pounds in its IP networking backbone. BT will increase its IP points of presence from 14 to over 100 nationally, and intends to separate Internet from voice traffic at local exchanges.
The trend away from voice traffic towards Internet traffic is visible across the carrier marketplace. Nortel, for example, recently announced that it would be integrating IP routing capability into its transmission devices, with two effects: the first is that separate IP routers will no longer be necessary, and the second is that all traffic, voice and data alike, can be transmitted using IP. Another example is GTE, which recently announced a dedicated Internet for voice traffic, to give customers the cost reductions of IP coupled with the performance guarantees of a dedicated network.
By bolstering its Internet access capability, BT is taking strain off its voice backbone in the short term. However, it is also preparing itself for the general migration from voice traffic to IP traffic. It is generally agreed that IP will be the protocol for all traffic types which use the global carrier infrastructures, voice, data or otherwise. The remaining questions are more a matter of “when” and “how” than “what if”.
Given BT’s monopoly position in the UK, the announcement has some interesting impacts on dial-up Internet users. First, consider the modem link to the local exchange. Modems are required to convert digital data into the ranges carried by the voice network. If BT is demodulating such data and sending it directly onto the Internet from the local exchange, it becomes questionable whether modems are necessary at all compared with DSL-based mechanisms. Second, the question arises as to whether BT is exploiting its privileged position as “owner” of the local exchange, by extending Internet access to the local loop and thus offering potentially better performance to its own subscribers than to users of other ISPs. This remains to be seen, as it is quite likely that other carriers’ equipment will also be installed in local exchanges.
Overall, announcements such as this are steps along the way to the Internet becoming the global communications network for all media, including broadcast media. Local and global carriers need to position themselves for this inevitability, whilst being careful not to overtly exploit the advantages of their incumbent position.
(First published 23 November 1999)
Chip development goes up a level or two
1999-11-23
Despite fears that Moore’s law is running out of steam, recent semiconductor announcements suggest that there’s life in the old technology yet. First off, scientists at the University of California, Berkeley have developed a transistor four hundred times smaller than those currently available to chip designers. Second, the Semiconductor Industry Association (SIA) released its technology roadmap, which indicated that future gains in scale would be based on integrating multiple silicon and packaging technologies on a single chip. Finally, last week IBM announced how, using developments in chip layering, it had developed transistors which stand vertically on the chip’s surface, rather than lying flat.
All these developments add up to the suggestion of a rosy future for integrated circuits. Given that the Berkeley development will not be patented, so as to enable “the widest possible usage”, and that the SIA exists as a cross-industry body of competing vendors, the future looks even brighter. Most significant is the coupling of the IBM announcement with the plans of the SIA, as together these suggest the integration of many device types, including displays, processors and audio equipment, onto a multi-layered, multi-component, single-chip foundation.
It is unclear how much potential for innovation still exists in silicon, but these developments should keep the ball rolling for a good five years, by which time research in other areas (for example, nanotechnology or Hitachi’s memory chip research) should be starting to bear fruit. Again, integration is the key: by the time the new generation of devices starts to roll off the design station, they will be able to slot right into the connectivity and packaging developed for the older generations of integrated circuits.
(First published 23 November 1999)
Bluetooth is not going to change the local networking landscape… yet
1999-11-23
At least that was the conclusion of major vendors of the technology, blaming software and interoperability issues for delays in bringing products to market. Despite this initial disappointment, companies are still moving full steam ahead with Bluetooth. So what is it, and how will it affect end users?
Bluetooth is a short-range wireless standard designed for communications between electronic equipment. The standard has been agreed by the majority of hardware manufacturers (including Ericsson, Intel, Lucent, Nokia, Toshiba, Philips and Sony) and software vendors (including Microsoft). It therefore looks set to exist, which is already a major hurdle cleared for any technology. A number of applications have already been indicated for Bluetooth, including its use as a replacement for wires or InfraRed connections between devices such as mobile phones and PDAs. It has quickly become clear, however, that the potential for this technology goes way beyond this limited view. Home networking has now been recognised as a valid target for Bluetooth – not for fridges and toasters (which, let’s face it, have met with significant disbelief) but for HiFi units, televisions and other electronic devices used around the home. The second area likely to receive attention is the Small Office/Home Office environment, as Bluetooth demonstrates its capability as a replacement for local networking and the general proliferation of wires in these environments.
There is one significant element of this development which makes Bluetooth an inevitability: it is targeted at equipment manufacturers and hence is likely to be included on any electronic platform that is suited for the purpose. Like the InfraRed ports that the technology is designed to replace, Bluetooth will be an integral part of the device. The difference is that it will actually be used.
Software may be holding up the development of Bluetooth devices, and it is software which will slow down its acceptance. Higher-level mechanisms such as Jini or Universal Plug and Play are still necessary to enable devices to identify each other and to open secure communications channels. Without these, Bluetooth will still be appropriate for a limited range of applications, but will be prevented from achieving its full potential.
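The role of a higher-level mechanism such as Jini or Universal Plug and Play can be illustrated, stripped of transport and security details, as a registry in which devices announce what they can do and clients discover matching devices. The class below is an invented toy for this article, not any real Jini or UPnP API.

```python
class Registry:
    """Toy stand-in for the discovery layer that Jini or UPnP provide."""

    def __init__(self):
        self._services = []

    def announce(self, name, capabilities):
        # A device broadcasts its capabilities when it joins the network.
        self._services.append((name, set(capabilities)))

    def discover(self, wanted):
        # A client asks for every device offering the capability it needs,
        # without knowing device names or addresses in advance.
        return [name for name, caps in self._services if wanted in caps]

# Hypothetical home network: devices join, then a client looks for audio.
net = Registry()
net.announce("hifi", ["audio"])
net.announce("tv", ["audio", "video"])
net.announce("printer", ["print"])
# net.discover("audio") -> ["hifi", "tv"]
```

Bluetooth supplies only the radio link; it is this announce/discover layer (plus secure channel set-up, omitted here) that turns a pile of radios into a usable network.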
(First published 23 November 1999)
Microsoft: the lighting of the long fuse
1999-11-24
The brewing legal storm for Microsoft has remarkable similarities to that already experienced by the tobacco industry. Class action suits have already been filed in Alabama, Louisiana, California and Ohio and there is every chance that more will follow. The probability is that the different cases will be consolidated, suggesting another legal battle which will cause more than a few headaches to the software giant.
There are two main differences from the tobacco industry, neither of which is good news for Microsoft. The first is that the defendant is a single company rather than an industry – yes, individual tobacco companies were targeted, but this still led to a dilution of the overall thrust. The second is that there are no vested interests comparable to those in the tobacco debate, which pitched smokers against non-smokers as much as individuals against companies. In the Microsoft case, the situation is much simpler – individuals and organisations want some of their money back because they feel they have been overcharged. This isn't about proving risks to health or substantiating research; this question has a yes-no answer which many feel has already been addressed by Judge Jackson.
Clearly a huge amount hinges on the final conclusions of the antitrust case, which may not be known until next year. Should the parties decide to settle then a final answer may not be reached, meaning that the class action suits will have to start virtually from scratch. In the meantime, investors are hanging on as long as possible to what is clearly a good earner, monopoly or no. It could be argued that settlement acts only in the interests of Microsoft as it does not resolve the fundamental issues of the case. Should the Judge reach a final ruling then Microsoft will appeal, taking the case to the Supreme Court if necessary. In the meantime, both the company and its investors can continue to milk the cash cow.
It is unlikely that other legal disputes will be able to reach any conclusion prior to the closure of the antitrust case. US lawyers are stunningly persistent when they smell money, however, and Microsoft cannot shake off the powerful scent. Legal organisations will try every possible approach to getting their clients’ hands on the cash: it may take several years but sooner or later, like with the tobacco suits, a case will succeed. Once the breach is opened, investors will hang on as long as they dare before they take the money and run.
(First published 24 November 1999)
Directory services lead Novell out of the mire
1999-11-24
The turnaround in Novell's fortunes can now be said to be complete. In a year the share price has doubled, the growth rates for directory-based products have doubled and the company's profits have virtually doubled. The company has a clear strategy, popular products and a fast-growing consultancy arm. All change for Novell, which many were consigning to the junk heap of IT history only a few years ago.
It seems only recently that Novell was bullishly acquiring companies and products such as Tuxedo, WordPerfect and UnixWare (Novell's name) in its drive to compete directly with Microsoft. Within a year it became clear that the strategy was failing, and measures were taken, including the sell-off of all the products that the company had been so feverishly collecting. Drastic action was necessary as the share price fell like a stone and the company was derided for a lack of focus or coherence. Novell decided to fix its sights on NetWare and its directory service product, NDS.
With NDS, Novell has set its sights on being the directory of the Web. Recent announcements concerning the open source release of parts of NDS will most likely be targeted at supporting this ambition. The company has been remarkably successful so far, thanks to its own marketing and, it has to be said, Microsoft's inability to bring its much-touted Active Directory to market. Originally presented in 1997, AD will now form part of Windows 2000, which will not be released until February of next year.
Companies rise, and so can they fall. Over the next few years Novell still faces an uphill struggle before it can truly say it is at the top of the directory pile. Microsoft will undoubtedly attempt to steal the company’s flag once Windows 2000 has been released, and this is a threat which Novell will be taking most seriously. Even so, there is still a huge market for directory services which, based on current performance, the company stands every chance of capturing.
(First published 24 November 1999)
Can Linux workstations change SGI’s fortunes?
1999-11-24
Desktop Linux may not be a reality just yet, but as far as Silicon Graphics is concerned it is going to have to be as the company is staking its future on it. SGI recently admitted that it had failed to find a buyer for its Intel-based workstation business. Fresh on the heels of its announcement of an attempted sell-off of its Cray supercomputer product line, the company could really have done without more bad news like this.
Silicon Graphics lost the plot a long time ago, admit sources from inside the company. Faced with the ever-decreasing sales of its proprietary, Unix-based workstations and servers, the company launched itself into high-end NT-based workstations based around a customised hardware architecture. In doing so it fundamentally misjudged the workings of the PC market, by giving itself insufficient differentiation from other vendors to justify the price premium. It also continued to focus on its traditional markets where it should have been aiming squarely at the corporate mass market.
The newest desktop strategy from SGI is to concentrate on the provision of Linux-based PCs. The company recognises that application support is still weak, but is prepared to take the short-term risk. SGI also intends to open up parts of its OpenGL graphics code to the Linux community.
Clearly, though, Silicon Graphics are adopting Linux because they have very few other directions in which they can go. What is interesting is that while it remains unclear whether the strategy will succeed, it is clear that it gives a general boost to the concept of desktop Linux. The company can exploit its desktop Unix heritage to ensure its Linux workstations have the usability and ease of administration required for the desktop. It is highly likely that SGI will focus on the performance of desktop Linux, again an area which will benefit the movement as a whole. The biggest unresolved issue remains application availability, and it is possible that SGI will be able to leverage its relationships with application providers to help resolve this.
There will be a future for desktop Linux. It remains to be seen whether the same is true for Silicon Graphics.
(First published 24 November 1999)
Compaq provides litmus test for OS debate
1999-11-25
If ever there was a single company which could illustrate the complexities of the current Operating System debate, then it would be Compaq. The company is pulling back from NT on Alpha and is offering existing customers an array of options for migration or replacement. Customers can stay on Alpha and choose a free upgrade to OpenVMS, Tru64 Unix or (already free) Linux, or they can get 90% off the cost of a new Intel/NT system.
There are two things we can get out of this. The first is that Compaq, having done considerable homework, is unable to reach any conclusion about relative benefits or customer preferences for the four platforms. Of course, some are perceived as “mainstream,” “niche” or “just arrived,” but overall the company has been forced (though it is unlikely that Compaq would use that word) to offer a choice. This paints a considerably different picture from that suggested earlier this year, which saw NT/Intel and Tandem being the two platforms that the company would promote above all others.
The second point is that the results of the migration, which should come to light over the next six months, will provide a deep insight into the state of the operating system market. It is tempting to believe that customers will choose a migration to another NT platform as they already have skills in this area and given the popularity of the OS. The migration comes with a not insignificant hardware cost, however, and for various other reasons customers may be tempted to keep their existing hardware and change the operating system.
It is to be assumed that there will be additional costs associated with application upgrades, plus the labour and downtime costs of the migration itself. It seems unlikely that many will choose this route, unless they are dissatisfied with NT. Of those that do, maybe the most interesting data to come out will be the proportion of customers that choose Tru64 Unix over Linux. Clearly Compaq will be pushing Tru64 Unix, in which it has a major stake, hard. Success for Linux could well force new changes on the whole Unix OS landscape.
(First published 25 November 1999)
Chip wars come to a head, just in time
1999-11-25
The leapfrogging of AMD and Intel towards the elusive goal of a 1GHz 32-bit processor looks close to reaching a conclusion. According to The Register, Intel will use a semiconductor conference in early February to demonstrate such a chip. However if rumours from AMD are true, Intel may find themselves pipped to the post.
The ongoing thrusts and parries of the two organisations are having several effects. First, AMD has managed to re-establish itself as a processor provider worth considering. A number of the major PC suppliers, including Dell, are starting to take chips from AMD. Gateway withdrew from using the company but there is information to suggest that the two organisations may be re-opening the channel. Success breeds success and AMD now look like they have achieved the kind of critical mass that is necessary to keep up, or even overtake, the erstwhile leader. The second knock-on of the knockabout has been that customers have benefited greatly. Prices have been forced down while new products have been released at an increasing rate.
The question arises as to whether this pace of innovation can be sustained. Or does it? Given the intensity of the x86 wars, it is possible to become oblivious to the other changes that are occurring in the IT industry. The fact is that we are on the brink of an explosion of diverse and innovative devices, with interface compatibility but a broad-ranging base of software and hardware. All of the main players are recognising this, with Dell releasing its WebPC next week, Microsoft changing its mantra to "great software… on any device" and Intel making a broad range of investments to give itself a lead in the device explosion.
The next year will prove most stimulating: for one thing, it should spell the end of the processor wars which have seen off a number of vendors, including NatSemi, Cyrix/IBM and Motorola, to name but a few. The closing of this chapter will enable the other players to rejoin the fray, and innovation and partnerships will prove more important than processor speed and MIPS. An important milestone it is; however, the success story of the first past the 1GHz post will quickly leave the marketing collateral and enter the annals of IT history.
(First published 25 November 1999)
Zero cost IT – only...
1999-11-25
The current debate about costs of IT equipment and services illustrates the stage we have reached in the technology revolution. Microsoft, for example, have been accused of price fixing; Oracle of abusing their position when setting license fees. Now, on Silicon News, it is claimed that IT services companies are overcharging their customers. So – what is going on?
What we have here is the simplest value equation. In the past there has been a huge perceived value attached to IT. Relative to the manual systems and processes that mainframe processing represented, it was possible to justify spending vast amounts on such facilities. Similarly, new markets developed (and continue to develop) in parallel with technology advances – in the early stages, the price premium is set based on what the end customer is prepared to pay, and in order to recoup the costs of research and development by the vendor. Alongside all of this has developed a need for services, experts in the new technologies or people with past experience of making the necessary changes to organisational and technical infrastructures. Companies have been spurred on by the carrot of reaching new markets or the stick of keeping up with the competition. The internet has resulted in a new wave of this phenomenon, new products at premium costs and newly trained “experts” in the field.
Each new leap of technology will cause organisations to reappraise how they operate and what products and services they offer. Where product and service suppliers have come unstuck is down to one of two reasons. Either they failed to deliver, or they maintained the price premium long after the perception of delivered value had changed.
Failure to deliver is perhaps the most obvious and hence the easiest to describe and comment upon. Products may fail for a variety of reasons, from low quality to over-hyped functionality. Services may also fail, for largely the same reasons. At the earlier stages of a new technology adoption, partial failure matters less as anything is better than missing out on the race. Later, though, the picture is different: late adopters rarely stand for shoddy products or performance.
Price maintenance is harder to judge because it is based on perception. The competition in the hardware market has forced manufacturers to push the bangs-per-buck ratio ever higher. The copybook of many software vendors is not quite so clean: the prices of Microsoft operating systems, for example, have gone up rather than down over the past five years. Competition in the hardware space is not matched among software vendors, who are often rightly accused of being less innovative and more grasping than their hardware counterparts. Initial demand for point services (for example, how to tune financial packages for best results) has commanded staggering rates, but this is to be expected. Services vendors often justify the maintenance of high prices based on the principle that "it was expensive, therefore it must have been right". However, there is little or no proof of any correlation between the two.
There will be several technology leaps over the next few years, in particular the current eBusiness movement, the revolution in broadband communications, device-based computing and Application Service Provision (ASPs). The costs and benefits of each of these are currently still being understood but it is to be expected that the costs of products and services will be pitched relative to the perceived value of the prospective user base. Older technologies, already perceived as commodities and whose R&D costs have already been recouped, should be priced at a substantially lower level.
(First published 25 November 1999)
December 1999
This section contains posts from December 1999.
Nokia forges Europe’s mobile future, threatens the silicon boy wonders
1999-12-07
Nokia became Europe's most valuable company yesterday, as its net worth pipped BP Amoco at the post. Not bad for a Nordic TV manufacturer. Almost as staggering is its prediction that cellular subscribers will triple to one billion over the next three years, based on the adoption of the cellular Internet, where mobile phone users access Internet services through the Wireless Application Protocol (WAP).
In case it wasn't recognised already, Nokia looks set to define a number of standards, not only for mobile phones but also for PDAs and ultimately for computing devices. Why? Because they are all part of the same infrastructure. This is why Nokia is so fascinating. Consider:
- With Ericsson and Motorola, Nokia has brought WAP to market without a glitch, to the demise of 3Com’s WebClipping and Microsoft’s MicroBrowser technology
- With Palm and Symbian, Nokia has settled on the next generation of OS for its devices, sidelining Windows CE and giving both Palm and Symbian every reason to be cheerful
- Again with Ericsson and others, Nokia has been instrumental to the success of Bluetooth which (despite teething problems – no pun intended) looks set to bypass other wireless local networking protocols, for example from the likes of Apple.
Just as information is power, so it is that in the information industry, he who sets the standards rules the world. While the gorillas of the IT industry have been fighting trench warfare in the standards game, the mobile manufacturers have been co-ordinating their efforts in far more gentlemanly co-opetition. These companies are fast becoming some of the most powerful in the world and their size, and their ability to co-ordinate efforts, gives them the potential to trounce the bickering upstarts of the silicon age. As the worlds of IT and communications continue to converge, the next standards battles will give the IT incumbents such as Microsoft, SUN and Oracle a true run for their money.
(First published 7 December 1999)
Where lies the future of Web advertising?
1999-12-07
Hmmm… advertising, advertising, advertising. As illustrated by the launch of the latest Real Networks server software, this subject continues to garner attention, particularly in relation to the Web, both as a channel for and a focus of advertising. It is worth highlighting certain issues that are coming to light based on the nature of the Web as an interactive, ubiquitous medium.
The interactive nature of the Web is only starting to be exploited as far as advertising is concerned. Banner advertising is the most visible form of display, but (and this is surely common knowledge now) organisations can pay to have their names appear at the top of Yahoo’s search list. Adverts can be couched using the traditional methods of sponsorship, but it does not stop there – competitions, surveys and partnerships between organisations are all becoming valid forms of getting an organisation’s name and message across.
It is already common (but not common enough, maybe) for adverts to be targeted depending on a user’s country of origin, stated preferences or logged behaviour. Advertising is part of a pre-sales process which may be managed using Web-based applications: advertising systems will be linked with marketing systems and relationship management facilities to enable a smooth pull-through of the prospective customer. Of course, with interaction, there is risk: it comes as no surprise that advertisers such as Conducent are (probably illegally) collecting details of users’ computers, or that Amazon intercepted customer emails. Such is the Web.
The second point is that the Internet is everywhere, or at least it will be. Despite continuing efforts to preserve existing channels of communication, eventually the Internet will replace the radio, the television and the telephone. Telecommunications providers have accepted this; cable companies have accepted this. This does not mean that traffic will move from one network to another, rather that all networks are becoming internet-based. In the old world, there was the channel, such as the radio station, the billboard or the magazine, and there was the advert. In the new world, the distinction between channel and ad is no longer valid, as it is equally possible for an advertiser to host a radio service as for a radio site to host ads. On television, a click on the remote control will take the interested party through to the sponsor's Web site.
For all of these reasons, it is logical to assume that advertising, as a pure form of communication, is on the way out. Traditionally the ad has had a life of its own but it will increasingly be tied into other forms of marketing communication that eventually lead to a sale. The advert is an entry point into a process: if it is seen as such, it can be better targeted and more closely followed. As punters we should be glad, as companies reduce the splattering of irrelevant banner ads and turn their attention to focused direct marketing. The downside is, of course, that such organisations will be more in tune with our needs and weaknesses, and as such are far more likely to succeed in selling to us.
(First published 7 December 1999)
Microsoft: Java comatose, W2K holds key to bypass operation
1999-12-07
Java, Java everywhere, such was the vision of Sun when it first launched the language. Java has gone through several incarnations, initially targeting embedded systems and then aimed squarely at knocking Windows off its monopoly perch. These days the battle cry is for the enterprise – a raft of powerful players such as BEA and IBM have been lining up behind the concept of Enterprise Java Beans (EJBs) running on application servers. But they are not there yet, according to Microsoft, who is seizing the opportunity to get to the higher ground.
To bolster its armoury against Java, Microsoft commissioned a survey of 3,000 developers across Europe from independent market research company Romtec. The findings make interesting reading, particularly as they compare usage of Microsoft technologies and Java/Corba technologies. Across the board of different types of developers, Microsoft COM/DCOM technologies were shown to be holding their own. A growing number, currently running at between 25 and 40% of companies (depending on sector), were shown to be using these components. Users of Java Beans, however, lay at around 5-10%, a figure that has remained relatively constant over the past three years. As for EJBs, these barely registered on the scale.
Microsoft advocates will be encouraged by the findings. COM is growing, Java is stuck in the single figure percentage points. Of course, it is likely that Java flag-wavers will dispute the findings – either rebutting the figures with findings of their own, or bringing up the point that these are, after all, statistics which by nature cannot be trusted. Indeed, Bloor Research published a Java survey of its own earlier this year, which conflicts with the Romtec survey. But it is not the intention here to dispute who is right or wrong. The question is – what if Microsoft are perceived to be right?
There was one very interesting fact that came out of the Romtec survey. This was that, year on year, a substantially greater number of development houses planned to adopt Java-related technologies in the following year. Year on year, it would appear, the adoption of Java has been put off. Anyone who has worked in a development shop will know the difficulties of delivering existing applications and will recognise this conflict between hope and reality.
By focusing their efforts on enterprise applications, Microsoft may well be cutting Java off at the pass. There is a general feeling that Windows NT and its associated technologies have proved insufficient for enterprise applications, and many organisations have preferred to adopt platforms from the likes of Sun, IBM and BEA. Java has a reputation for being slow, but its suppliers build on a tradition of delivering true enterprise environments, hence expectations of EJB application servers are already positive. Soon, however, Windows 2000 will be trotted out of the Microsoft stable. All signs are that it is performant, available and, most of all, scalable enough for the enterprise.
If Microsoft can hang in there for another three months, they will be in a position to influence the “real soon now” school of Java adoption. That is, those development shops that would love to adopt Java if only they could find the time, and a suitable project, to do so. With surveys such as the one quoted here, the software giant will explain how it has the skills base and the incumbent position already. “You’re right,” development managers will say. “Why change – after all, the MS environment is now mature.” Java shops and developers alike may throw their arms up in horror, bemoaning the closed environment, the upward-spiralling license costs and, most of all, the weakness of fellow developers who fail to adopt the J-vision and succeed only in lining the pockets of the world’s richest man.
So – is this truth or fiction? Ultimately it doesn't matter. In this topsy-turvy technological world, Microsoft has shown in the past that marketing skill is at least as important as good product. Even as the jaws of the US justice system attempt to close on the software giant, Microsoft may demonstrate yet again that its message making proves too much for the evangelism of the other camps.
(First published 7 December 1999)
Novell bets its future on ASPs
1999-12-22
Directories are dull, but not to Novell as the company is betting its future on them. The future of the company hangs in the balance – if it gets it wrong, it could be consigned to IT history. If right, it could sweep the landscape of IT like a forest fire. The question is, which is it? And the answer may be found in Application Service Providers, or ASPs.
In our opinion, the arrival of broadband communications will signal new models for using IT. ASPs are an indication of the way things will go, with the rental of applications which are accessed over the Web. However this model of ASPs is just the beginning – many different types of service, including communications services, information feeds, application services and business services, will be integrated and provided to businesses and consumers alike. This is the vision, but it is currently hampered not only by bandwidth constraints, but also by a lack of a set of facilities without which the ASP model cannot function. These are billing, management, security and, of course, directory services.
Directories will play a central role in the service provision model, as they will hold the who, what and where of the service infrastructure. With products like Novell's NDS, the mechanisms for directories already exist; however, they have not yet been adopted on a sufficiently widespread basis. Novell is betting that they will be, and when companies start to pick a product, it wants NDS to be the de facto choice. This is why Novell has started giving away sections of NDS to the open source community, hoping that this will speed its adoption.
What could go wrong for Novell? In a word, Microsoft. Active Directory is currently waiting in the wings for its moment of glory, which will come in the next few months. In the meantime Novell is attempting to get as big a head start as possible. With NDS release 8, the company has already established a reputation for the product as stable, performant and well supported by the growing ranks of qualified administrators. This week's announcement by Information Week that NDS 8 would be its product of the year can only have been icing on the cake for Novell, not to mention the recently announced deal with CNN Interactive. Novell's biggest threat may also prove to have too many other battles to fight, particularly as portability across many platforms is a clear differentiator for NDS over Active Directory.
Given the way IT is going, it looks like Novell have read the runes correctly. The company is positioning its flagship product at the heart of the Internet, and it will take a substantial knock to remove it once it gets established. ASPs may still have a way to come, but when they do, Novell will already be there.
(First published 22 December 1999)
3Com – first the strategy, the rest will follow
1999-12-22
When 3Com bought US Robotics in 1997, it was unsure what to do with the PalmPilot product. Rumour has it that the decision was made for the company as 3Com executives found they could not do without the device, which became an essential element of meeting room apparel within a short space of time. Realising it was on to a winner, the company hung on to the Palm product line, a decision which has paid huge dividends for the company. Last week 3Com announced it had filed to sell shares in Palm Computing, in an IPO which is generating great interest across the board. The decision has been seen as good for Palm, but where does it leave 3Com?
Let’s look at 3Com’s current position. The company announced its second quarter results two days ago, seeing a fall in both earnings and revenues of just over 4%. According to the financial analysts, this was largely down to a 20% drop in sales of its networking equipment products and modems. Also, despite beating analysts’ expectations on earnings per share, 3Com warned that third quarter earnings would fall to 24 cents per share, 8 cents below predictions. Observers are using terms like “stumbling,” “struggling” and “challenged.” The sell-off of Palm is seen as a positive move as it will enable 3Com to focus on its core business, but some say that it is a lack of focus in this area which has dogged 3Com from the start.
Despite all this, 3Com do seem to be getting it together. The company has launched its e-networks strategy, in which it positions itself as a provider of core building blocks for the converged voice, video and data markets. This move is not just about networking equipment – applications software and services will also play a big part in the overall picture. The recent $100M stake that the company took in wireless applications company USWeb/CKS, as well as the positioning and feature sets of products like its NBX family, are illustrative of how 3Com is moving forward with this strategy.
3Com is moving away from battles it knows it cannot fight, such as sales of "pure" networking equipment. Instead it is setting its sights on the new frontiers of convergent technologies, applications and services. While this new landscape shows all the signs of offering lucrative opportunities, which the company is perfectly capable of exploiting, it is still very much in the future. It will remain so until bandwidth issues have been resolved, and standards and tools for application communications, security, directory services and management are not only set but also adopted across the technology industries. This presents quite a barrier to be overcome: for 3Com's sake it had better happen sooner, rather than later.
(First published 22 December 1999)
Y2K – believe us, we’d rather be wrong
1999-12-22
So that we can keep our last missive before Christmas full of seasonal cheer and good tidings, we thought we'd better round up the doom and gloom stuff on the penultimate news day for IT-Director.com before the Millennium. First off, is Y2K fact or fiction? Let's look at some recent news stories, which may help you make up your minds.
- The US State Department announced on Monday that it would suspend visa processing at embassies worldwide for the first two weeks of January. The reason cited was "Y2K issues."
- On December 13, Wells Fargo & Co. sent 13,000 renewal notices to its customers with the year 1900 printed on them. Apparently the error was down to a supplier forgetting to change the date on its printing machines.
- In June, a Y2K test caused four million gallons of raw sewage to be spewed onto the streets of Van Nuys, Southern California.
- A food distributor has discovered that its computer system was tossing out items that expired in 2000, thinking they had sat on the shelf for the past 99 years.
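The expired-stock incident above is the classic two-digit-year bug in miniature. As a hypothetical sketch (the actual distributor's system is not described in any detail here), comparing years stored as two digits makes "00" look 99 years older than "99"; the common remediation was a pivot window that maps two-digit years onto the right century before comparing:

```python
def is_expired(expiry_yy: int, current_yy: int) -> bool:
    # Naive comparison on raw two-digit years: the Y2K bug.
    # A tin stamped "00" (meaning 2000) looks older than "99" (1999).
    return expiry_yy < current_yy

def is_expired_windowed(expiry_yy: int, current_yy: int, pivot: int = 50) -> bool:
    # A typical fix: a pivot window maps 00-49 to 2000-2049 and
    # 50-99 to 1950-1999 before the comparison is made.
    def to_year(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return to_year(expiry_yy) < to_year(current_yy)

# Stock expiring in "00", checked in "99":
print(is_expired(0, 99))           # True  - wrongly treated as long expired
print(is_expired_windowed(0, 99))  # False - correctly still in date
```

The pivot value is an assumption for illustration; real remediation projects chose windows to suit the range of dates each system actually handled.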
These are a selection of real events. A more speculative report from BSC Consulting in the UK has said that the world is categorically not ready for the challenges of Y2K. The reason it gives is that, despite the huge efforts that have been made, “many governments and business programs have been a mixture of incompetence and complacency.” The report also points out that significant problems will be caused due to the interconnectedness of compliant countries with others which have not resolved the problems. All talk? Well, maybe not. The Y2K readiness survey recently published by the United Nations’ International Y2K Cooperation Center stated that “because some businesses, schools, and governments will not be sufficiently ready, they, and some of the people who depend on them, will suffer economic harm from Y2K-caused errors. These local impacts will range from minor inconveniences to the loss of jobs.” The IYCC’s overall assessment may be summarised as “many Y2K errors, moderate impact.” Unfortunately we are getting mixed messages from the IYCC, who reported more recently that “health care services worldwide are at the greatest risk for Y2K disruptions … hospitals and the health care sector in general have been the slowest worldwide to ready themselves for the new year, and … some nations may be overwhelmed by the sheer size of the problem.” This perspective sounds a little more than the “moderate” point of view espoused by its Y2K readiness survey.
If these reports, produced by bodies with hands-on experience, are to be believed then all will not be well in the Millennium. Even if the problem does not manifest itself in “complacent” Western countries, the IY2KCC’s survey warns of humanitarian crises which could have at least an indirect impact on just about everybody. Don’t get us wrong: we would love the harbingers of Y2K doom to be proved foolishly false, but we also think that the current levels of complacency are as misplaced as the visions of Armageddon.
What will happen? Nobody knows… yet. But New Zealand, the first industrialised nation to see the dawn of the new Millennium, has volunteered to keep us informed: updates on Y2K issues will appear on http://www.y2k.govt.nz/ as they occur. The IY2KCC’s own Web site will also host a country-by-country breakdown of Y2K status, and the European Union web site may be found at http://www.ispo.cec.be/y2keuro/year2000.htm. Finally, for the sceptics out there who believe either that (a) there is not a problem, or that (b) we can trust our institutions to resolve it, it might be worth looking at Peter de Jager’s paper on the origins of the Y2K problem, at http://www.sciam.com/1999/0199issue/0199dejager.html. The paper makes clear that Y2K is a complex technical issue which is difficult to resolve even with the best will in the world.
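Complex to resolve, but mundane at root: the underlying arithmetic is easy to demonstrate. A minimal sketch (hypothetical code, not any real system’s, with an invented pivot-year fix) of how a two-digit year field turns stock expiring in 2000 into stock 99 years past its date, as in the food distributor’s case above:

```python
def years_past_expiry(current_yy: int, expiry_yy: int) -> int:
    # Buggy legacy logic: both two-digit years are implicitly offsets
    # from 1900, so "00" is read as 1900, not 2000.
    return current_yy - expiry_yy

def years_past_expiry_fixed(current_yy: int, expiry_yy: int, pivot: int = 50) -> int:
    # One common remediation: a "windowing" pivot, where two-digit
    # years below the pivot are interpreted as 20xx.
    def expand(yy: int) -> int:
        return 1900 + yy if yy >= pivot else 2000 + yy
    return expand(current_yy) - expand(expiry_yy)

# In December 1999 (year "99"), stock stamped to expire in "00":
print(years_past_expiry(99, 0))        # 99 — looks 99 years out of date
print(years_past_expiry_fixed(99, 0))  # -1 — actually expires next year
```

The same off-by-a-century error underlies everything from the Wells Fargo notices to the sewage-plant test, which is why the problem is so pervasive.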
Many years as a software tester are reason enough for me to err on the side of caution when it comes to computer-related issues. Y2K is probably the biggest “computer-related issue” that the world will see for some time, and it is certainly the most important to date. Doom or dawn, it would not do to move into the new Millennium with anything other than a note of caution. I won’t be bothering with bunkers, but I shall be watching things very carefully as the New Year starts.
(First published 22 December 1999)
2000
This section contains posts from 2000.
January 2000
This section contains posts from January 2000.
Tokyo Joe suit starts the legal ball rolling
2000-01-07
“Moments of glory, desires, wealth, in the end all an illusion.” So proclaims the home page of Tokyo Joe, the darling of the Internet stock market. With four separate charges of fraud currently being filed against him, it would appear that the online stock guru is being hauled back to reality.
Amongst other things, the Securities and Exchange Commission (SEC) has accused Tokyo Joe of “scalping”. According to the SEC, TJ took advantage of his influential position to talk up the price of stocks, encouraging others to buy whilst selling his own shareholdings. In addition he is said to have kept his trading activities secret from his clients and lied about his track record. Damaging charges indeed.
Whilst it is unclear whether Tokyo Joe is in fact guilty as charged, or indeed whether the charges can be made to stick, this case serves as a landmark for the currently under-regulated area of Internet stock trading. The legal issues surrounding dealing in eStocks are still very much on the drawing board; this is acknowledged by TJ’s lawyer, who claims that his client had “legitimately taken advantage of legislative loopholes,” according to The Independent. Hence it is a good thing that the SEC has launched this, its first suit concerning Internet stock trading. Much legal activity relies on precedent: while it is unclear what the outcome of this particular case will be, its results will provide at least a small stepping stone for future cases.
The Internet has left the law at a standing start. Its positively anarchistic nature can be said to have driven innovation in Web time, not to mention a potential new world of global free speech. Despite this, an appropriate legal framework is necessary to counter the inevitable abuses of the system and its users. All in all, by raising the suit against Tokyo Joe, the SEC has taken an important step.
(First published 7 January 2000)
AOL-TV brings forth embedded Linux – by stealth
2000-01-07
Web TV has been long-predicted but, so far, has failed to stoke up the interest of the mass market. With the launch of AOL into the market, this looks set to change. And it looks like, where AOL goes, Linux is destined to follow.
At the Consumer Electronics Show in Las Vegas, AOL showed off the interactive TV services which it is to launch with satellite company DirecTV. The AOL solution uses rebadged set-top boxes from Philips Electronics and from Hughes Network Systems. AOL intends to leverage the 20 million subscribers it already has, which (if it plays its cards right) should secure it a substantial proportion of the 30 million US households predicted to be using interactive TV by 2004.
All very well and good so far, but – where’s the Linux link? There is currently one element missing from AOL’s portfolio. The current devices do not yet have the capability of caching live TV broadcasts such that they can be paused, rewound or just watched at a later date. Both Philips and Hughes intend to include such facilities in the future, and are licensing technology from TiVo to this end. TiVo products are based on a PowerPC architecture running (you guessed it) an embedded Linux kernel.
Should AOL’s strategy go to plan, the positive knock-on effects for Linux could be staggering, with millions of consumers using devices with “Linux inside”. Consumers who turn to AOL are unlikely to care what operating system is running in the set-top box. Manufacturers and software vendors will care – indeed, they will be watching with bated breath. Despite Linux’s strengths as an embedded operating system, its success in the consumer market will depend more on manufacturers’ preparedness to use the platform. Given a successful launch by AOL, other large-scale manufacturers will likely jump on the bandwagon; once established, there will be little that other companies can do to prevent the success of embedded Linux as a platform. Coupled with Intel’s recent announcements, this would make the future dominance of Linux on embedded devices appear unassailable.
(First published 7 January 2000)
At Linus’ Transmeta, Crusoe finds its Friday
2000-01-25
If the recent surge of interest in handheld computing devices is anything to go by, it looks likely that Transmeta have found the goose that will lay their golden eggs. According to a survey by analysts NPD Intelect, December retail sales of palmtops grew by 169% compared to the same period in 1998. What is more, this success was not confined to the cheaper end of the scale. Sima Vasa, vice president of technology products for NPD Intelect, was quoted on News.com as saying that “Consumers are willing to pay a high price for the total mobility of personal digital assistants,” with the result that high-end devices were also seeing expanding sales. Overall, Palm was the winner, gaining an increasing share of the market.
All of this is pretty good news for Linus Torvalds’ new venture, Transmeta, which has barely been out of the news since it announced its Crusoe chip last week. Crusoe uses a technology known as Very Long Instruction Word (VLIW), which dramatically reduces the size of programs but requires software to prepare the instructions for the chip. This software-hardware combination is what is garnering the most interest: because it works in software, the code preparation or “code-morphing” also enables the Intel instruction set to be used without fear of infringing Intel’s hardware-based patents. Despite Transmeta’s clear targeting of Intel’s market share, analysts concur that the giant has little to fear just yet.
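The idea behind code-morphing can be caricatured in a few lines. This is a deliberately toy sketch – the instruction names, translation table and bundle width are all invented, and bear no relation to Transmeta’s actual implementation – but it shows the shape of the trick: a software layer translates guest instructions into the native set, then packs them into fixed-width VLIW bundles for the chip to issue in parallel.

```python
# Invented guest-to-native translation table and bundle width.
NATIVE = {"ADD": "alu.add", "SUB": "alu.sub", "LD": "mem.load", "ST": "mem.store"}
BUNDLE_WIDTH = 4  # a VLIW bundle issues several operations at once

def morph(guest_code):
    """Translate guest instructions into native operations, then pack
    them into VLIW bundles, padding incomplete bundles with no-ops."""
    native = [NATIVE[op] for op in guest_code]
    bundles = []
    for i in range(0, len(native), BUNDLE_WIDTH):
        bundle = native[i:i + BUNDLE_WIDTH]
        bundle += ["nop"] * (BUNDLE_WIDTH - len(bundle))
        bundles.append(bundle)
    return bundles

print(morph(["LD", "ADD", "ADD", "SUB", "ST"]))
# two bundles: the second padded out with three no-ops
```

Doing this in software rather than silicon is what keeps the chip simple – and what keeps Transmeta clear of Intel’s hardware patents.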
There have been many projects researching how instructions could be better structured and fed to processors, but so far none has achieved mass-market appeal. Sometimes things work because the time is right – consider the take-off of the Palm Pilot following many failed attempts by the competition. In Torvalds’ case, it is his personality that is creating the wave that he rides – when he first announced his intentions, few could resist seeing what the demi-penguin could come up with next.
On the subject of Linux, it would appear that the little man has not forgotten his offspring. Several extensions to the Linux kernel have been made to support the Crusoe chip, and Transmeta are fully behind efforts to bring embedded Linux-based devices to the market. This could raise interesting issues of conflict of interest – should Linus remain responsible for what goes into the kernel? We shall leave this debate for another day.
Crusoe chips, to be manufactured by IBM, are currently running at around the 500MHz mark but there is a 700MHz version on the horizon. The first customer for the chip is Diamond Multimedia, already noted as trendsetters through their launch of the Rio MP3 player a year ago. Diamond will be using the first chips off the production lines in their forthcoming WebPad device. These chips are reputed to support Linux only, so it looks like this could be the first example of a Linux handheld.
Overall, Crusoe will give technology a boost by showing that other architectures are viable and, indeed, preferable when used inside devices. So – it’s good news for handhelds, good news for Palm, good news for Transmeta, and good news for Linux. Who is losing out? Well, all of these developments put the squeeze on any platform which is trying to get into the space. If press coverage is anything to go by, it looks like “Powered by Windows” may well become one of the forgotten taglines of early 2000.
(First published 25 January 2000)
Linux consortium brings new weight to embedded debate
2000-01-25
Fears about the fragmentation of Linux may well prove unfounded, if the goals of the newly formed Embedded Linux Consortium are realised. The alliance, which includes such big names as IBM, Motorola and Red Hat, has set its sights on the rapidly growing market for embedded devices such as Internet appliances, set top boxes and PDAs. The advantage of Linux – price – may well prove to be the clinching factor to ensure the success of the platform.
Why price? The fact is that devices are a box-shifting market. Device vendors are used to the extremely tight margins associated with such products, largely because they are aimed at the mass market, where competition is vicious. Anything that can be done to shave a few cents off the price of a device costing £100 or less is seen as a good thing – so what could be better than a platform which is entirely free of licensing costs?
Also, Linux is already proving that it is capable in this area. It can offer pseudo-real-time capabilities in a small footprint. There may be fears about its memory management capabilities, but if these are true then they are likely to be addressed. Such is the nature of open source.
We are already seeing products which run Linux under the bonnet (hood, to our transatlantic cousins). In our \link{http://www.it-director.com/ts/linux/index.html,Linux technology spotlight} we mentioned TiVo’s digital video recorders, which are to be licensed by some of the major players in the satellite TV space. More recently, at CeBit, Samsung announced a Palm-a-like PDA which runs Linux.
Fears about the future of Linux have centred on its fragmentation – its open nature means that developers are able to take it in a variety of directions, which may or may not be compatible. This is a real fear, particularly as the variety of platforms for Linux continues to diversify. However, announcements such as this one go some way towards dispelling those fears, as the evolution of the operating system can be dealt with by the organisations as a group, acting in co-opetition.
Gartner Group recently announced that Windows CE would overtake PalmOS as the PDA operating system of choice by 2002. In the light of this announcement, it may wish to revise that prediction.
(First published 25 January 2000)
British Telecom – looped around a rock and a hard place
2000-01-25
At last – the local loop looks like it is to be opened up to competition. A set of draft guidelines, published \link{http://www.oftel.gov.uk/competition/llu30300.htm,here}, was released by Oftel on Friday, setting out the requirement for BT to open its “last mile” to competition by June next year. 2001 could be a good year for telecommunications users in the UK, but maybe not such a good year for BT.
Essentially, this boils down to one of the few remaining areas in which BT can be said to hold a real monopoly. This is the wire between the local exchange and the end-user socket on the wall: the socket, the wire and the exchange are all currently owned, managed and charged for by BT. Plans to change this have been in train for some time, but a timetable has not been forthcoming. Not, that is, until now.
The effects of this change will be far-reaching. According to the Register, which very kindly summarised the main points of the guidelines (which, let’s face it, wouldn’t win top prize in a clear English competition):
“Among the conditions announced … is BT's requirement to provide unbundled loops to other network operators, to permit the co-location of equipment at its local exchanges, and to provide any necessary services to open up the network to competition. It will also give Oftel the power to set the price for these services.”
Given the current challenges that BT is facing, from quarters such as free ISPs and government announcements about reducing telephone charges still further, this is one extra problem that BT could really do without. Inevitable it may be, but pleasant it is not. What is worse, the companies lining up to threaten BT’s monopoly position (telcos like AT&T, cable providers such as NTL and ISPs like AltaVista) are, in general, global players with few restrictions on what they can and can’t do. BT still faces a number of restrictions on its own practices: it is still unable to deploy cable services and, according to Oftel, is facing increasing pressure in the international calls market.
BT expressed fears about being bought following its recent share price falls – this may be the best bet for a company that is seeing its monopoly replaced by a regulatory framework which may leave it unable to compete. BT has been slow to move in the past, and its privatisation needed such a framework, but the world has moved on. The June 2001 date marks the end of an era for BT; it should also mark the beginning, with the company free to compete against some of the world’s largest communications companies. Even if its bonds are broken, these are battles which the company is not guaranteed to win.
(First published 25 January 2000)
Microsoft delays aspirations to be gatekeeper of the virtual enterprise
2000-01-25
The IT industry is rife with stories of companies trying to emulate previous world-shaking innovations, with varying levels of success: Lotus tried to follow 1-2-3 with Notes; mainframe-maker IBM brought the personal computer to the masses. The continued need to innovate is driven by a desire to stay in the game and to appease shareholders, whose skill at abandoning faltering companies would put ship-rats to shame. Microsoft has made repeated forays outside its traditional PC market space, with mixed results. In media streaming, it is beginning to look like the company may dominate, through a recently brokered arrangement with Liquid Audio. Elsewhere the future is not so certain – examples like MSN and CE serve to illustrate how Microsoft may possess the goose that lays the golden eggs, but not the underlying technology.
A test case is coming round with the now-delayed launch of BizTalk Server, Microsoft’s long-promised engine for XML-based eBusiness communications. The product is over six months late in delivering to beta testers, with the probable launch date being autumn 2000. The reason, say observers, is that the competition is already ahead of the game. According to a recent report on Cnet news, the missing link is the connection with business processes. This is the ability to link the communications required between organisations with the higher-level business logic, for example ordering a product or handling a customer request. Products from HP and Vitria, for example, already support such a linkage. It could be argued that Microsoft’s competition is looking through the right end of the telescope by building business logic before trying to automate it. The bottom-up approach used by BizTalk Server is not a good starting point with which to handle the myriad complexities of modern business. Microsoft insiders claim that the BizTalk architecture is changing to take this into account – the question now is whether the competition can capitalise on the delays which ensue. One point in Microsoft’s favour is its ownership of Visio, the de facto tool for business process modelling among many business analysts – it remains to be seen how the company will exploit this advantage, for example by providing a Visio interface to BizTalk.
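To see why XML is the battleground here, consider what an inter-company message actually looks like. The sketch below builds a minimal purchase-order document; the element names and structure are invented for illustration and are not BizTalk’s (or anyone else’s) actual schema – the point is only that the format is plain, self-describing text that any party can generate and parse.

```python
import xml.etree.ElementTree as ET

def make_order(po_number: str, sku: str, qty: int) -> str:
    """Build a minimal XML purchase-order message.
    Element and attribute names here are purely illustrative."""
    order = ET.Element("PurchaseOrder", {"number": po_number})
    item = ET.SubElement(order, "Item", {"sku": sku})
    ET.SubElement(item, "Quantity").text = str(qty)
    return ET.tostring(order, encoding="unicode")

msg = make_order("PO-1001", "WIDGET-7", 12)
print(msg)
# <PurchaseOrder number="PO-1001"><Item sku="WIDGET-7"><Quantity>12</Quantity></Item></PurchaseOrder>
```

The hard part, as the delayed BizTalk launch illustrates, is not producing such documents but wiring them into the business processes – order fulfilment, customer handling – on either side of the exchange.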
For Microsoft, its aims for BizTalk are far more than just sales figures for a business communications tool. This represents the battle for the gates [sic] of CyberSpace – in the virtual enterprise, business success will be built upon communications, and it is likely that a single company will end up the de facto standard. The playing field is currently empty, but soon corporate IT shopping lists will include the item “XML-based business communications facility”. Microsoft would very much like to replace this phrase with “BizTalk Server” in much the same way that Hoover and Coke have done in the past.
Microsoft has ridden the PC revolution with enormous skill and dexterity, becoming one of the world's most powerful companies in the process. Even now, it seems unlikely that the dream will fade - the PC is moving into the machine room and Windows 2000 looks set to buoy up the share price for some time yet. However, outside the kingdom where the PC holds sway Microsoft’s less successful ventures should serve to keep the company firmly on the ground. Ballmer and “innovator” Gates are not gods, but mortal men who are no better at predicting the future than anyone else. There are no guarantees, particularly for a company which may see itself divided into three or more parts before the end of the year.
(First published 25 January 2000)
February 2000
This section contains posts from February 2000.
Free-PC is dead! Long live the free PC
2000-02-03
Not every good idea stands a chance of working in these topsy-turvy technology days, as US startup Free-PC discovered. The company was bought last November by Emachines, which put the final nails in the Free-PC coffin by relinquishing all rights and responsibilities to the computers that had been handed out as part of the “free” deal.
Why the Free-PC model failed is a matter of economics and timing. Economics, because the company found it could not generate the advertising revenue it required to cover the costs of giving out hardware. Timing because, well… have you seen what is just around the corner?
Samsung have just announced a disposable PC, which is likely to be based around Intel’s “system-on-a-chip” codenamed Timna. The device will be completely sealed and will retail at around $200. That’s pretty cheap for a PC.
Historians may well determine that Apple was to blame. When the iMac was released onto an unsuspecting market, the punters rushed to purchase a computer which was self-contained and, above all, simple to use. The trend was followed quickly by PC suppliers such as Dell, Compaq and AST, all releasing non-user-serviceable computers in rapid succession. Once the trend was started, products such as the one announced by Samsung became inevitable.
In the here and now, it is worth taking the wider view into account. Samsung’s move signals the removal of the final barrier in public perceptions of the PC. It is a device, just like all the other devices which are coming to litter our lives – the mobile phones, PDAs, set-top boxes, games machines and the like. As such it will take on the same characteristics, and this change of status signals the death of the PC as we know it. Out with the expensive computer which requires frequent, complex administration; in with the affordable, low-maintenance device. The PC looks set to be treated like any other device: sold at low cost, bundled with other deals such as subscription services and software packages. In the UK the hardware is often given away with such bundles: for example, the latest mobile phones may command a premium of £100 or so, but last year’s models are given away as a sweetener.
Free-PC may have had the right idea, but as so often with technology, they came to the market too early. Come Christmas next year, when Samsung’s disposable PCs hit the shelves, the time will be right for companies to take advantage of the free PC model. By then, the chances are we will have stopped thinking about the PC as “the mainframe in the dining room” and it will take its rightful place alongside the rest of the gadgetry.
(First published 3 February 2000)
Look at me, Mum! I killed eBay!
2000-02-09
Denial of service attacks have always had an uneasy relationship with other types of security requirement. The main reason for this is that they do not directly impact on that all-important corporate data: nothing is lost, corrupted or revealed to prying eyes. Hence, in security policy definition and implementation, such attacks have often been given a lower priority than they deserve.
This point has been starkly illustrated over the past few days, as a number of major commercial Web sites have succumbed to denial of service attacks. On 7th February, Yahoo was the first to be bitten. The next day Amazon, Buy.com, eBay and CNN were all brought to their knees for anything between one and three hours. While it is not clear whether the attacks were all caused by the same group (nobody has yet claimed responsibility), it is clear that copycat attacks are inevitable over the coming days and weeks.
Why would somebody want to bring a Web site to its knees? There are as many reasons as there are Web sites. Everything from cyberterrorism to (ironically) disgruntlement with the service, from anticompetitive behaviour to sheer high jinks can bring a person or group to assault a site. The simplicity of such attacks has now been revealed: “innocent” computers are made to host a Trojan Horse program which, at a predetermined time or on command, sends a stream of requests to the targeted site. Attacks of this kind look set to move into the mainstream of security problems. Given the arrival of broadband communications technologies which enable home computers to keep “always-on” connections to the Web, the pool of relatively insecure devices which can be used as proxies looks set to increase. Denial of service attacks are harder to prevent than they are to cause: the best measures tend to involve tools which can spot this kind of behaviour and either alert the appropriate person or instigate a suitable response.
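The crudest form of such a detection tool is simply a per-source request counter with a threshold. The sketch below (hypothetical code – the threshold and log format are invented, and real flood traffic often spoofs its source addresses, so production tools must also look at aggregate rates and traffic shape) illustrates the idea:

```python
from collections import Counter

def flag_sources(request_log, threshold=100):
    """Flag source addresses whose request count within a log window
    exceeds a threshold. request_log is a list of (source, path) pairs."""
    counts = Counter(src for src, _ in request_log)
    return {src for src, n in counts.items() if n > threshold}

# A window of logged requests; one source is flooding the site.
log = [("10.0.0.5", "/")] * 500 + [("192.168.1.9", "/news")] * 3
print(flag_sources(log))  # {'10.0.0.5'}
```

Once a flooding source is flagged, the response – alerting an operator, or dropping the traffic upstream – is a policy decision rather than a technical one.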
Above all, denial of service attacks serve to illustrate the fragility of the electronic infrastructures that we are building, if they are not properly constructed to take into account all possible security and privacy measures. The attacks are not only damaging to the bottom line of the businesses they hit, but are bad press for eCommerce as a whole. So far it looks like investors are prepared to ride the storm and stick with the dot-coms, but given the looming issues of the long term financial future of such companies, the tar-brush of denial of service is one which organisations such as Amazon could do without.
(First published 9 February 2000)
EU W2K case puts cart before horse
2000-02-09
There is always something to be said for planning ahead, but if the early reports from ZDNet UK are anything to go by, it could be that the European Union is taking this a little too far. According to the reports, EU competition chief Mario Monti said that Windows 2000, the latest release of Microsoft’s flagship operating system, may be breaking EU antitrust law. A probe into the issue is planned, following “allegations that Microsoft could extend its dominance to server operating systems and electronic commerce”. An interesting allegation, but for two points. The first is that W2K is not a new product. The second is that it is extremely difficult to work out how such a hypothesis could be tested in advance.
Windows 2000 is due to be launched in just over a week, undoubtedly with as much aplomb as the marketing muscle of the mighty Microsoft can muster. The OS, touted as “the business operating system for the next generation of computing,” is being presented as something new. However this perception is out of step with the facts. W2K is an upgrade of Windows NT. It may be much enhanced for scalability and availability, transaction management and interoperability, but it remains an upgrade. Windows 2000 it may be to you and me, but to the developers and internal staff at Redmond the product is referred to as NT 5.0. It is not, therefore, true that Microsoft is moving into areas with which the company previously had nothing to do. Indeed, had the operating system been able to support “enterprise-scale” applications five years ago it might well have won the Unix wars, and the rest would be history.
The antitrust case that is underway in the US does not question the fact that Windows grew to be the dominant operating system. What it deals with is how Microsoft used its dominant position to stifle competition in other areas, such as Web browsers, by including them with the operating system itself. It is here that the EU may have a point: by including Microsoft Transaction Server as part of the operating system, it could be said that the company is exhibiting anti-competitive behaviour relative to other application server vendors (such as BEA, IBM and Sun). However, there are many application server vendors that support a number of platforms other than NT, and they have already created a sizeable market. This is not the gorilla company stifling the activities of a bright young start-up; rather it is one company presenting its offering to an already-crowded market, of which both the evolution and the outcome are far from certain.
It is always worth keeping a watching brief on major companies to protect the markets against anti-competitive behaviour. However, in the IT sphere in which fashion plays as big a part as technology, it is impossible to determine in advance who the winners and losers will be. Windows 2000 is unlikely to sweep the world in the same way as its desktop sibling, for many reasons of which the most important is that Microsoft is no longer perceived as the only platform in the eyes of the board. Mainframes have survived the onslaught of client-server, Unix (with a little help from a softly spoken Scandinavian) is as popular as ever and new types of device are coming onstream every day. The context which helped Microsoft to world dominance no longer exists, and as illustrated by Palm, Linux and AOL, the company does not have a guarantee of success in every market it targets. The EU’s crystal ball is as opaque as everyone else’s: in IT, no company has a monopoly on predicting the future.
(First published 9 February 2000)
Java overtakes C++, takes number one development slot
2000-02-09
It's official – according to a survey by Bloor Research of over 40,000 job adverts in January, Java has taken the top slot in development language skills, with 36.8% of all programming job adverts in our sample now specifying Java. This figure is just ahead of demand for C++ skills. Meanwhile demand for VB and C is holding up, with perhaps a tendency to decline, both hovering at around the 25% mark. These skills are still in widespread use, but the slight decline suggests that we will start to see a drop in their use in 2001.
To use a well-worn phrase, of course there are lies, damn lies and statistics. A previous survey (reported on IT-Director) showed that the requirement for Java-based projects was holding static relative to Microsoft technologies. It has to be said, however, that a survey of the job market does present a pretty clear picture of exactly what is wanted right now, as opposed to projections and anticipated requirements. According to the ads, Java is what the world is using today.
It looks like Java’s position is becoming unassailable. Object orientation and component-based development are flavour of the month, both in the Microsoft camp (with COM) and the Java community (with JavaBeans). Non-OO languages such as C and Visual Basic are losing out to the brave new world of objects. C++ is still very popular, but it seems inevitable that it will go the same way as the other languages and Java will steal the field. Why? Because Java has already become the most popular teaching language at universities. Because it is available on the largest number of platforms. Because it is of strategic importance to some of the largest IT companies, including Sun, IBM and Oracle. And because, once it moves ahead of other languages, it will build on its own success.
Fifteen years ago, few would have predicted that developers would still program in third generation languages such as the ones discussed here. The brave new world was one of self-generating code and dynamically customisable applications. Such visions remain pushed ever-further into the future; in the meantime, it looks like Java is set to rule the roost.
(First published 9 February 2000)
Microsoft gets back to its UNIX roots with Interix
2000-02-14
Given the current polarisation of the UNIX/Linux and Microsoft/Windows camps, you could be forgiven for disbelieving Microsoft’s UNIX past. IT history has chosen to quietly sweep under the carpet the fact that Microsoft played an instrumental part in bringing UNIX to the PC, a heritage which subsequently bequeathed Minix, then Linux. Quietly forgotten, but true. Today, with Microsoft’s announcement that it will integrate Interix with Windows 2000, it looks like the software giant has decided to take a leaf out of its own history book.
It was back in 1979 that Microsoft and The Santa Cruz Operation (SCO) collaborated to develop Xenix2, which was the first UNIX implementation for the Intel 8086 chip. Development was later handed over to SCO so that Microsoft could concentrate on its single-user operating systems in the form of DOS and Windows 2. Xenix2 gave rise to SCO UNIX, which became OpenServer, merged after a fashion with UnixWare and is now being integrated with AIX to spawn Monterey.
More recently, in September 1999 Microsoft acquired Softway Inc., developers of Interix. This product started life as OpenNT, a replacement for the inadequate POSIX subsystem that was bundled with Windows NT. From the outset the intention was to give companies the ability to port applications more easily from UNIX to NT, by providing a UNIX layer that runs on top of the NT kernel. The trouble is, Softway did such a good job of developing Interix that the OS layer has passed the compatibility tests of the Open Group, keepers of the keys to UNIX.
As it is, Microsoft are keen to distance themselves from their heritage. On its “Linux Myths” web page, for example, the company states disparagingly that “Linux fundamentally relies on 30-year-old operating system technology and architecture.” The line is (to quote Garfield) that Windows 2000 is new and improved, UNIX is old and inferior. So – why, oh why, in the shape of Interix, is Microsoft investing so heavily in UNIX?
The answer, according to Microsoft, is that by providing mechanisms which bridge the gap between the UNIX and Windows NT platforms, companies will migrate from the former to the latter. This is a tactic which has worked in the past, for example with the provision of facilities in Microsoft Office to read and write competitors’ files. Companies such as Lotus and WordPerfect provided read-only mechanisms and found to their cost that users were turning to Microsoft as a result. Interix, too, is a two-way bridge, and Microsoft’s tactic, though proven, is not without risk. For a start, by supporting a UNIX layer the company is, in some way, validating the existence of UNIX. Thirty-year-old technology or no, says Microsoft, it runs perfectly well on top of our kernel. So it should, for that matter, and so will it run perfectly well in other guises such as Linux, Solaris, HP-UX or AIX. Secondly, Windows 2000 is already under pressure on a number of fronts, Linux being one example, and commentators are recommending that the new OS go through a purgatory period prior to more widespread adoption. Finally there remains the looming shadow of the antitrust case, which is having knock-on effects in Europe and elsewhere.
This is not the first time that Microsoft has played the UNIX-for-Windows card. Last year, for example, the company teamed with Mortice Kern Systems to provide a set of Unix tools for NT. Like it or no, and despite Microsoft’s best efforts to shake it off, the reality is that UNIX will remain part of the landscape for the foreseeable future. A product like Interix may prove to be an asset, not to lead the company to world domination but to keep it in the game.
(First published 14 February 2000)
Lastminute.com provides an IPO acid test
2000-02-14
Almost a year after the company initially revealed its intentions to float on the stock market, Lastminute.com announced at the end of last week that it would be floated on the Stock Exchange and the NASDAQ some time next month.
The company is valued at around £400 million. The question is, is it worth it? As with all Internet IPOs, nobody really knows the answer. However, the mutterings that IPOs are overvalued have become a chatter. A couple of weeks ago, the Financial Times reported that speculators were unsure about whether the company would achieve this valuation when the time came. Unfortunately for Lastminute.com, the only real test of IPO success is to just do it and see. The company has one chance: it may succeed, and then again it may succeed less well.
There are three kinds of Internet IPO.
• The first is for companies that provide the fabric of eCommerce – such as CyberTrust and CommerceOne.
• Second, we have Web businesses – such as Amazon, BlackStar, TicketMaster and Lastminute, who use the Internet as a direct revenue stream.
• Third we have the accumulators of potential revenue – Portal sites such as Freeserve and Yahoo!, who grow their eyeball business prior to exploiting their subscription base for profit.
Of the three, the fabric providers are most likely to succeed as they are the most tangible, making a profit whether their customers – the other providers – succeed or fail. Least promising is the third category, who may be unclear about how to turn potential into revenue (hence the difficulties suffered by Freeserve last year). Lastminute.com sits in the middle category: the chances are that the company will turn a profit; the questions are how much, when, and who the competition will be.
Unfortunately for Lastminute.com, competition is the one thing of which there is no shortage. Travel is a land of huge opportunity for me-too companies, which differentiate themselves on the scale and the level of the service offered. A recent contender, for example, is LateRooms.com, which claims to have 1500 hotels signed up to its site. Travel sites in general are subject to the vagaries of the online consumer – the competition really is only a click away. Customers will vote with their feet – my own experience a few months ago was that, whilst the offers on Lastminute.com were good, the breadth of choice was not. This may have changed, but it remains an indication of the rocky road to be travelled by an online bookings company attempting IPO.
(First published 14 February 2000)
IBM strengthens ties with Rational but keeps the agnostic faith
2000-02-14
There’s something about IBM at the moment. According to repeated announcements, the company is embracing new technologies, forging alliances and striding headlong into the future with all the gay abandon it can muster. At the same time, reading between the lines, IBM is doing exactly the opposite – holding on to the old, keeping the competition sweet and, above all, ensuring that it does not close off any options it may have. In these turbulent times such a risk-averse position may be wise, despite falling short of the visionary attitude that we could hope for from the largest computer company in the world. A recent example is IBM’s stance on operating systems: Linux is the future, but so is NT, AIX, OS/390 and any other you care to mention. More recently the focus has been on development tools or, more specifically, Rational’s suite of application development products.
Rational first got into bed with IBM in July last year, forging a strategic alliance that would “help customers accelerate the development and deployment of e-business applications,” according to the press release. In practice this meant the provision of an XMI (XML Metadata Interchange) bridge between Rational Rose and IBM’s VisualAge, with both tools continuing to be developed and promoted. It has taken the two companies a full six months to deepen the interoperability between their toolsets – last week’s announcement, “strengthening the alliance,” covered the further integration, via another open standard, WebDAV, of VisualAge with Rational’s ClearCase and ClearQuest configuration and change management tools.
IBM’s slow agnosticism can only be frustrating for Rational, despite the clear potential revenue stream that the alliance can generate for the company. The bridges between the products are all open standards that are being adopted by the majority of tools providers, including Microsoft, Sterling and Princeton Softech. Hence Rational may be being given a head start on the other vendors, but ultimately the aim is that all platforms will interoperate and exchange information. Rational’s frustration, founded in its desire to leverage the relationship before the other vendors catch up, was hinted at by Eric Schurr, senior vice president of Marketing and Suite Products at Rational, when he said that “this [recent announcement] is evidence that the IBM-Rational relationship is more than a paper relationship.” This raises the question – was it just a paper relationship in the past?
In the relationship between IBM and Rational, it is the latter that stands to gain the most. Unlike IBM, Rational cannot afford the luxury of covering all its bases. Its product set is limited to the relatively closed world of application development, whereas IBM can offer it safe passage to the Shangri La of eBusiness. Tools providers have always faced the squeeze – this is a situation which will not change (if anything, it will become more difficult, as Microsoft exploits the most valuable resource that is Visio – you heard it here first). Frustrating it may be, but without such relationships it is difficult to see how Rational will succeed.
One final note about WebDAV, or “Web-based Distributed Authoring and Versioning”. This standard has been ratified by the IETF and has the support of Software Configuration Management vendors and Document Management vendors alike. I have long been of the opinion that the two disciplines represent two routes up the same mountain. It is very good news that an interchange format has been agreed between the two camps: hopefully we shall see some merging of their related disciplines before too long.
(First published 14 February 2000)
Transport and environmental groups stress real impacts of eCommerce
2000-02-16
In a report on BBC radio this morning, a very unusual event happened. A transport organisation representing goods delivery services in the UK found itself in agreement with an environmental organisation, namely Friends of the Earth. Now traditionally, these organisations have been fighting tooth and nail, pitching the economic argument for goods transportation against its environmental impact. So, what caused this seemingly unprecedented event? The answer is eCommerce, or the effects of it.
We techno-ostriches tend to keep our heads firmly stuck in the electronic sand, forgetting that with a large proportion of eCommerce transactions there is an associated fulfilment process. Even something as neat as a CD or book still needs to be transported into the sticky hands of its purchaser. As we turn to the Web more and more for our purchases, the amount of goods to be transported is inevitably going to increase. For example, companies like Tesco in the UK and Peapod in the US are starting to see success in online supermarket shopping. The models are still to be thrashed out, but the trend looks inevitable.
For the consumer, the electronic shopping Nirvana may be very attractive. Just click on a few buttons and, within a couple of hours, bags full of heavy groceries arrive at the door, accompanied no doubt with a smile and a wave from the delivery person. The realities being stressed this morning were delivery trucks clogging up the back lanes, duplicating journeys as competing companies rush to win each other’s business. In addition, the question was asked: what would the now-liberated customers be doing with all that extra time? Probably spending it in the car, off to some leisure activity or other. The hyper-efficiencies of the virtual hypermarkets might well result in a hyper-gridlock.
Now environmental groups have a reputation for doom-mongering, but transport groups have not traditionally held this role. The fact that both groups agree is sobering indeed. As Robin Bloor said in eRoad, “eCommerce is coming, ready or not.” The tidal waves of change are already having many impacts, not all of them pleasant. However, it should be noted that both organisations are a little guilty of mapping tomorrow’s plans onto today’s realities. IT will not only change the way we buy, but also the way we distribute goods and spend our leisure time. Supermarkets may well become warehouses, but might the corner stores become depots? Has anyone considered how letterboxes may need to change, or the possibility of overnight deliveries such that groceries are on the doormat, fresh for the day ahead? What is probably most important here is that all sides enter the debate – transport and environment, IT, business and consumer – such that we can make the best of what is coming whilst protecting ourselves from the pitfalls it may present.
(First published 16 February 2000)
Web travel companies on the back foot
2000-02-18
The US travel industry looks set for the heart of the sun, in a looming battle that will generate as much heat as light. Three opposing forces are joining the fray – in one corner, we have the Web travel companies; in the second, the traditional travel agents that the Websters are so successfully undercutting. Last but by no means least are the airline companies themselves, which have risen to the challenge of the Internet and have set out to be a case study in disintermediation. When these three warring parties come together, what a battle there will be!
According to the bricks-and-mortar travel agents, the groups which are threatened the most by the Web, the last straw came when a number of airlines, not content with running ticket sales from their own Web sites, agreed to join forces and set up a travel portal. Cries of “permanent and irrevocable damage [being] done to the competitive process," were enough to see ASTA, the American Society of Travel Agents, rushing off to the Justice Department to file an antitrust complaint. Ironically enough it is not the independent travel agents that the airlines have in their sights, but the established Web travel portals such as newly floated Expedia and its main competitor, Travelocity. The question is, can the airlines compete with online companies without being accused of anti-competitive behaviour by independent travel agents? The answer is no, which leads to the inevitable collision course ahead.
The airlines may be facing a barrage of criticism from the independents. Maybe it is true that a consortium of airline companies will form, resulting in price fixing and “less value to the consumer,” as the mantra goes. However, airline agreements are nothing new – OneWorld is one example, and there are undoubtedly plenty of others. Let us remember one thing above all: the airlines did not cause this. They are faced with the same set of opportunities and threats as every other business, and they are reacting accordingly. Consider this: five years ago, nobody spent money on travel over the Web. Last year about 1% of travel – $2.2B – was purchased online. This is expected to grow to $17B by 2003. The Web is anti-competitive, it breaks all the rules and, worst of all in the US, there is nobody to sue about it.
Slowly but surely, individuals and organisations are realising that buying travel online is cheaper than buying it through travel agents. Traditions are difficult things to drop: for example, when an analyst colleague was flying over to a recent briefing in Colorado, he was discussing ticketing systems with an analyst from another firm. When our man compared his agent-provided ticket price with the online cost that the other analyst had obtained, he realised he could have saved £200. Even analysts can be slow to react to the changes, but what this proves is that travel agents are living on borrowed time. Speaking both as observers of the IT industry and as consumers, we watch with interest the developments in the travel industry. The Web is creating winners and losers and, so far, nobody has been able to come up with a hard and fast set of rules as to what is going on. Not changing is not an option, which is a fact that traditional travel agents would do well to remember.
(First published 18 February 2000)
Microsoft Windows – the Gates left open
2000-02-18
Microsoft has been quick to backtrack from claims by a certain W. Gates that the company is prepared to open up the Windows source, if it would prevent the break-up of the company. Is this the kind of innovative idea that Uncle Bill wanted to focus on, when he resigned as head of the software giant?
Oh, the underhand nature of journalists. When is off the record off the record? Clearly not just after the videotapes have stopped rolling, as illustrated in last week’s televised interview between Bill Gates and Bloomberg. According to the transcript on Bloomberg’s web site, “After the on-camera portion of the interview was completed, Gates was asked whether the company would be willing to open the Windows source code in order to settle the case, and Gates said “yes.” He then added, smiling, “if that's all it took.” ” Unfair tactic or no, “yes” is a pretty fair indication of what the great innovator was thinking.
We have discussed the potential perils of Microsoft opening up the Windows source, if nothing else for the scrutiny it would invite. Taking a pop at M$ is a common pastime in the IT industry, so just imagine the howls of delight from hackers and hopeful hecklers, as they find potential security flaws, weaknesses or just plain bad code. Like Greta Garbo without makeup, maybe there are some things that just shouldn’t be made public.
The bigger question is whether there is really room for two open source operating systems. I’m not including Solaris in this debate, because whether it is truly open is questionable. The issue lies between Windows and Linux: people want Linux because it is free, stable and perfectly adequate for a large number of uses; they want Windows because it runs all the right applications and because it is what everyone else is using. If Windows is opened, there is nothing to stop (indeed, just TRY to stop them) the open source community from linking the two OSes. WINE, the Windows emulator, would be dropped like a stone – after all, why emulate Windows calls when you can have the real thing?
Open source Windows is a logical development as it equates to the rising tide of commoditisation in software. Mobile phone users would not expect to pay extra for the software that runs on the phone; rather, there are a certain set of expected facilities that are delivered with any device, and the OS is one of them. Traditionally, we have paid for PC operating systems and (grudgingly) upgrades, but as products like WebPads from Samsung and Diamond illustrate, the line between PCs and other devices is blurring, and the pricing models must follow. It is a fact that Linux is being chosen as an embedded operating system for a wide variety of devices, from video recorders to the aforementioned WebPads, because it does not incur software licensing costs. Microsoft knows that it can only really establish itself in the device-driven market if it cuts its licensing fee structure to the bone, or if it drops it altogether.
Microsoft is in a quandary as it grows out from its desktop PC home ground. As it moves upward into the server space, where premiums are currently higher, it risks incurring the wrath of its user base due to the higher licensing costs it is demanding. Downward, the company’s success in the device space is dependent on it making the product much cheaper, or even free. This “Open-Windows” thing may solve these problems, by enabling the company to concentrate more on applications and services, but may also cause some problems of its own. Either way, it is an issue that is unlikely to go away.
(First published 18 February 2000)
PointCast – Did it fall, or was it…
2000-02-18
This is the way a once-darling of the wired world ends – not with a bang, but a whimper. PointCast is finally giving up the ghost, calling it quits, coughing its last. Nobody notices, nobody cares. The most tragic thing is, the technology it espoused is alive and well. Where, oh where, did things go wrong for PointCast?
Founded in 1992, PointCast was credited with beginning a revolution in Internet usage and was quickly copied by the likes of Microsoft with its Active Desktop channels. However, the dream quickly tarnished as the bandwidth requirements for pushing Internet content to the desktop overwhelmed a number of companies’ networks, preventing email and “normal” Web browsing activities from taking place. Back in the mid-nineties many organisations were discouraging Internet access altogether, as it was seen as distracting individuals from their work. Network managers banned access to PointCast, the share price plummeted and the company was eventually sold for a song. Consider this: at the height of its popularity the founder of PointCast, Chris Hassett, is reputed to have turned down an offer of $450 million from James Murdoch of News Corporation. In December 1999 the beleaguered company, reputedly losing $2 million a month, was bought by Idealab, which wanted to use the technology to deliver advertising to customers of its eWallet software. Today the company, now merged with Launchpad to form EntryPoint, is on the brink of launching a new product, but there is no guarantee of success now that the hype has gone.
So, where did things go wrong? At the highest level it is possible to say that the technology did not deliver on the hype. The IT industry is one of the best at delivering solutions without feeling that it needs to worry too much about the problems it has to solve. Push technology, like other technological also-rans (such as Document Image Processing and Satellite phones), was a solution without a cause, sold as a new way of working without due thought being given to how it could fit in with established practices. In itself, Push is an enabler, which comes at a cost of bandwidth and administrative overhead. Where the cost-benefit balance has been struck, Push is now establishing itself as a perfectly acceptable solution to a whole variety of problems, such as:
• Automatic distribution of software updates to end-user desktops, as illustrated by Marimba
• Online video-on-demand, for example online training which can be delivered on demand or following a predefined schedule specified by the customer – Multimedia Solutions, Inc. is having some success here
• Download of information to PDAs in idle time, which can then be accessed offline, for example through the service offered by AvantGo.com.
PointCast had the technology right, but aimed its sights at the wrong market. Corporate users, ultimately, had better things to do all day than configure and trawl through the wealth of information they were receiving. They could also do without watching their machines slow to a crawl once every ten minutes, as updates were received. Sure, all these things were configurable, but even that took time. If anything, this is an example of the fickleness of the Web community – in the end PointCast failed because its users couldn’t be bothered with it.
Whilst all this is a shame for the now-defunct company (and particularly for its founder), it is not the end of the technology. Push will continue to find niches where it can render itself indispensable and hence lucrative, for example in corporate information portals (keep an eye on companies like SageMaker). Once Internet bandwidth constraints are removed over the next year or so, Push technology will gain ground in providing real-time video news and sports feeds, consumer video rentals, audio channels and so on. It is no coincidence that this list overlaps with services currently provided by broadcast media. This *is* broadcast, with a difference - it is user selectable in real time. After all, what is Push and Pull, if not supply and demand?
(First published 18 February 2000)
Alcatel joins the new generation networking crowd
2000-02-23
Was anybody wondering about a slow-down in the consolidation of networking equipment vendors? If so, such thoughts may be premature, as demonstrated by Alcatel’s $7.1 billion acquisition of Newbridge Networks yesterday. The move adds a much-needed component to the Alcatel portfolio, namely presence in the Internetworking market. Not that Alcatel lacked products in this space; however, it has lacked – how should I say – IP mindshare.
All this looks set to change with the acquisition, which will enable Alcatel to go up against the giants of “new generation networks”, namely Lucent, Nortel Networks and Cisco. Alcatel is a company that has grown dramatically through acquisition. When it was first privatised, it concentrated on growing its multinational presence by the purchase of, and partnership with, European and US vendors such as SEL in Germany, Telettra in Italy and Sprint in the US. More recently it has set its sights more closely on mobile and data networking companies, such as Xylan and, of course, Newbridge. Newbridge has been hoping to be bought for some time. Despite successes a few years ago, its more recent past has not been so healthy. Speculation started in November last year, with suitors including Ericsson and Nortel. Apparently these companies walked away from the table over the past few weeks, leaving only Alcatel, which has accepted the mantle with confidence. Unfortunately the shareholders are not so sure – Alcatel’s share price has dropped four percent to 227.9 euros following the announcement. This is unlikely to perturb the company, which is no stranger to market volatility – following a profit warning in October 1998, the share price dropped 40% to a low of 68 euros.
Alcatel should not be underestimated as a communications equipment provider. Its greatest strength lies in the breadth of the products and services it offers: for example, in 1999 it carried 35% of the worldwide ADSL market – a share that increases to 52% in North America. Alcatel has major stakes in wireless and satellite communications, not to mention terrestrial and submarine cabling. All of this puts the company in an ideal position to appreciate the nature of convergence – data availability any time, anywhere and across any medium. The company has been maligned as an old-fashioned, “traditional European” company which is slow to change and slower to innovate. If this was ever true then it most certainly is not now – over the past few years the company has refocused, restructured and risen to the challenges that are faced across the networking industry. Europe has reinvented itself over the past three years as a hub of communications innovation and development, with companies like Nokia and Vodafone leading the way. Other technology companies based in Europe, such as Alcatel, are now able to bask in the glow that has been created.
Alcatel is well placed to provide end-to-end solutions in ways that few other companies can match. The company provides mobile phones, ADSL cards and set-top box chipsets at the client side, plus access systems for all of these devices and more, plus the underlying global connectivity. Perhaps Lucent can come closest to this soup-to-nuts vision, but even if this is the case, Alcatel remains one to watch.
(First published 23 February 2000)
Ford, GM, Daimler put business at the helm of technology
2000-02-28
A week is a long time in this topsy-turvy technology world. Just last Monday (in “Oh, no, not another marketplace”) we were bemoaning the fact that Ford and General Motors were going it alone by setting up independent eProcurement marketplaces, a model that we do not see as sustainable. By the end of the week the two companies announced their intentions to collaborate with a third, Daimler-Chrysler, to develop a common exchange for automotive transactions. The collaboration will take the form of a new company with its own identity, which will offer services to all automotive manufacturers and their associated companies.
Now it would be wonderful to believe that the world’s largest automakers had changed their minds based on the advice of IT-Director.com, but sadly we cannot make any such claim. Last week’s announcement will have been based on a great deal of discussion and negotiation between the companies and their representatives. It will have been going on in parallel with the deals struck up, for example between Ford and Oracle, or GM and CommerceOne. It is currently unclear which technology companies will be the winners (or losers) in the still-infant venture; however, winners there will be, big time. A single market will require single sources for primary technologies, such as hardware, transaction management and application software.
What is perhaps most fascinating here is that the businesses involved are grasping the nettle of defining an electronic environment in which they are all prepared to work. Traditionally this role has fallen to technology companies or standards bodies, with businesses being involved to agree the form of any dialogues but not the environment itself. This move by automotive companies is unprecedented, not only because of the level of collaboration it requires but also because it puts them in front of the technology companies. “We’ll sort out the bigger picture,” the car makers are saying, “and we’ll be in touch.”
This change should not be underestimated: it is a major sea change in how technology is defined and used. It suggests a growing understanding, on the part of businesses, of the capabilities of technology. It also suggests a maturing of the technologies themselves. “What” – i.e. the exchange – is less the issue than “how” and “when” it should be implemented. Finally it indicates a transferral of power, from the companies providing the enabling facilities, to the companies wanting to use them.
The decision by Ford, GM and Daimler-Chrysler may mark a watershed, but it is not over yet. As we indicated in the last article, there will be a consolidation and standardisation of marketplaces not just within industries but across them: specific exchanges, such as aircraft spares, car parts or electronics raw materials, may be kept separate for practical purposes but will be based on the same underlying standards and technologies. The power of business to drive technology should not be underestimated, nor should the power of the consumer: the Internet has already given ample demonstration of how customers vote with their feet. These changes in the power base between technology and business, business and consumer are happening now. Traditionally, the IT sector has kept its position through hype and gadgetry, exploiting the FUD factor to the full, but the monopoly that such companies have held on technological understanding is coming to an end. It may be difficult at this stage to predict what the impacts will be: all we can say is that they will be deeply felt.
(First published 28 February 2000)
Do you Yahoo!? Go on Murdoch, you know you want to
2000-02-28
It is now six weeks since AOL and Time Warner announced their intentions to wed, throwing the industry into turmoil. But which industry? This is the question that is making things so hard for other companies to follow suit. AOL is an ISP, a Portal, a software company and a service provider. Time Warner is a news organisation, a film studio and a publisher. Not to mention the wide variety of other interests that the two companies now share. Orrin Hatch, the US Judiciary Committee chairman, was quoted on News.com as saying that the deal could have “profound public policy implications”. Too right – it creates a company the like of which the world has never seen.
Driven by fear and loathing as much as by the smell of opportunity, many others are keen to follow the lead that AOL Time Warner have set. One such example is Rupert Murdoch’s News Corporation, which has revealed that it is holding preliminary talks with Yahoo!, the portal company. News Corp has – or had – many similarities with Time Warner, owning film studios, television channels and publishing companies. But what of Yahoo!? Is the company right for News Corp? We would argue no, not in its current form, but it shows potential.
The problem with Yahoo! is easy to spot when it is positioned next to AOL. The two companies were never seen as rivals because, simply, they were not: Yahoo! was a portal, and AOL was a global ISP. Despite having teamed with AT&T and Mannesmann to provide Yahoo!Online, the company is not renowned as an ISP – indeed, as these partnerships demonstrate, it is not. AOL, however, most certainly is an ISP with global reach. Yahoo!’s lack of its own ISP presence is a weakness for the company, as it means that it has less control of its potential customer base. An individual may have several ISPs, but is likely to retain one as its primary access to the Internet. However, to most, Yahoo! is one online resource amongst many.
The AOL Time Warner merger demonstrates the coming together of three components, all of which are essential. The first is the communications channel, akin to the Web portal or the TV station. The second is the information to be transmitted or published. The third is the underlying technology – the communications network. It is this network which both Yahoo! and News Corp lack – the latter has an enviable satellite service in the shape of Sky, but this is unlikely to be the dominant communications channel.
An alliance between Yahoo! and News Corp can be successful, but only if it is followed by investment in a communications infrastructure company. This could either be a wireless provider or a land-based provider but is most likely to be a combination of both. It is unclear at this stage whether the combined News Corp/Yahoo! would be the suitor or the target of such affections. What is clear is that only the resulting corporation would be sufficiently powerful to take on the giant that is AOL Time Warner. Suggested tie-ups between News Corporation and Yahoo! can only be steps on the way.
(First published 28 February 2000)
March 2000
This section contains posts from March 2000.
For Palm, it is just the beginning. For 3Com…
2000-03-03
Yes, yes, the Palm IPO has finally happened, and it was all we hoped it would be. At $53 billion Palm, Inc. is now valued at nearly double its proud parent, 3Com. The road ahead may be rocky for the fledgling company: nobody, not even Palm, can claim a monopoly on the future of mobile devices. As for 3Com, its success at nurturing the company which now holds a 70% share of the handheld market may be overshadowed by the difficulties it faces following the Palm IPO.
First, to Palm. Against all odds, some would say, the company has succeeded where the likes of Netscape and Lotus failed. It has taken on the company with a reputation for standing on the shoulders of giants, the “innovator” that is Microsoft, and it has won – despite a less functional product, a less glamorous interface and a far less effective marketing machine. IT observers have used their best hindsight and determined the reasons why Palm has been so successful, but the truth is that nobody really knows. Whether it is down to battery life, weight or Zen, the fact is that the punters prefer Palm.
The future is likely to be less predictable than the past – we can say this with some certainty. Just because Palm owns the lion’s share of the PDA market, this does not guarantee its future success. It is not so much that the competition in the PDA space is fierce. Rather, the PDA space is a transient thing, involving collision rather than convergence with other technologies such as wireless technology and portable computing. Already the company is feeling the heat of its mobile rivals and does not always appear to be jumping in the right direction. As we covered \link{http://www.it-analysis.com/00-01-13-2.html?its,here}, Jean Baptiste Piacengino, Product Line Manager for GSM products at Palm Computing, saw a differentiation between PDAs and mobile phones, a line which we do not feel it is valid to draw. Symbian’s Quartz platform, which combines standard palmtop functionality with Web access, mobile telephony and Bluetooth facilities, was launched at CeBIT as an indication of the kind of pressure that Palm would soon be facing. Other factors are at play, too: the mobile phone market has a far greater reputation for product replacement than the IT-based PDA market. The two products may end up the same, but the purchasers may opt for devices from mobile phone manufacturers rather than from PDA makers, through force of habit.
Palm is unlikely to be resting on its laurels, and this fight is unlikely to ever be over. We only have to look at the turnaround of fortunes at UK-based Psion to see that, even if Palm loses a round, there is still plenty to play for. A huge strength for Palm is the massive range of applications it supports – existing users may be loath to give up the apps they know, not to mention to transfer the data that the device may store. There may be difficulties ahead for Palm, Inc., but the company is in as good a position as any to overcome them.
Meanwhile, what of 3Com? In some ways, the sell-off of Palm is a good thing as it will enable 3Com to focus on its “core business” – whatever that means, given the current, chaotic IT landscape. However, Palm has played a large part in the recent success of 3Com – the PDA division made up 13% of 3Com’s 1999 revenues. This is not to mention the lift it is reputed to have given to the overall share price, even prior to the announcement of the Palm sell-off.
3Com has a reputation for solid strategies, but has not always been so successful at making them happen. The company’s bread-and-butter market of network cards and modems is now crowded with low-cost manufacturers, leading 3Com to refocus its efforts on the converging worlds of telephony and networking. Here, however, we are back to crystal ball gazing: the company is entering new territories that do not guarantee success to all comers. It is unlikely that 3Com will fail: the company has a reputation for innovative products and is striking partnerships with companies such as Microsoft, Hewlett Packard and Samsung to help it along. The separation of Palm may well be what 3Com needs, as it can now get down to a business it understands. However, it has to be said that 3Com’s base is far shakier than that of Palm, as the latter already has substantial momentum and presence in its own markets.
3Com’s launch of Palm sees the birth of two organisations, both of which will be subject to the whims of technology’s Lady Luck. Palm has come out of the gate with new products and an enviable market presence, leaving behind a 3Com that must knuckle down and make something of its renewed focus. Neither company is guaranteed success, but both stand to gain from their new status.
(First published 3 March 2000)
Thomson Travel: there’s life in the old dog yet
2000-03-03
At the end of last week, UK travel operator Thomson immediately followed its report of sliding profits (down 38 per cent on last year) with the announcement of a £100 million Internet strategy. In Friday’s Independent newspaper it was reported that analysts saw this as “too little, too late,” when positioned against the rising web stars which are Lastminute.com and Travelocity. In this game, though, it ain’t over until the well-proportioned lady sings, and we would like to make a few points in Thomson’s favour.
The whole disintermediation concept is proving very difficult to follow. It seemed that intermediaries would be the first casualties of the Web revolution. Then it became apparent that new types of intermediary were starting to turn a sizeable profit – these are the transaction infrastructure providers such as Tibco and CyberSource, not to mention security, directory, billing and other service providers. Now with the clear success of the travel sites, it is obvious that intermediaries are here to stay – at least on the Internet. Companies that still lack a Web presence are quite right to be losing sleep.
Thomson’s main strength lies in the fact that it is not really a travel agent at all. High street outlets such as Lunn Poly are little more than a front for the wide variety of Thomson holidays, sold under different company brands. In other words, it is its own intermediary and it has real products to sell. To see why this is a strength, we only have to look at the proposed alliance between US and international airlines, to set up a portal for air fares and effectively cut out the middle man. A similar alliance between package holiday operators would certainly put the squeeze on the Lastminutes and other pretenders.
Second, we must emphasise the importance of the over-used phrase “the competition is only a click away.” In my own experience, when hunting for a last-minute weekend stay in London, I found Lastminute.com lacked the choice of other, similar sites. It was also – heaven forbid – slow, the navigation was non-intuitive and the end result cluttered. Even the best players in the game, new breed or old hands, can fail to offer the necessary services, or offer them in a way that turns off prospective customers. Thomson have as much chance of success as anyone else.
Finally, nobody has the monopoly on the future. Financial analysts are making the best guess on how dot-coms should be valued and while it is generally agreed that many are priced way above what is reasonable, nobody is prepared to say which ones. The dot-com crash may come, and the pure clicks-and-mortars are not guaranteed to beat the companies they are trying to oust. The Web is a channel for products and services: it is true that it is indispensable, but ultimately it is the products and services which will determine the success of a company, and not the channel. Thomson cannot afford to miss out on the Web: to succeed, it must establish itself as an eCommerce player, and the quality of its web presence must be second to none. Given that it achieves this, its future will once again be down to the attractiveness of its holidays, and its ability to deliver them.
(First published 3 March 2000)
Cable, wireless and now the servers: C&W goes ASP
2000-03-03
There seem to be two options on the cards for telcos today: consolidate or specialise. These options are not mutually exclusive, particularly for companies with an interest in keeping abreast of the latest developments.
One such company is Cable and Wireless. Despite a fair reputation as a telecommunications carrier, the company has thus far failed to make much of an impression in the Internet space. This all looks set to change with the announcement that C&W are to set up “the largest Web hosting centre in Europe.” The centre, which is to be located in Swindon, “will have the capacity to host every Web site currently running in the UK.” Whether this will still be true when the site is complete is unlikely, but so is the chance that C&W will host all of the sites in the UK. It is clearly the company’s intention, however, to grab a sizeable share of the hosting market, both nationally and internationally. Similar sites are being set up elsewhere in Europe, in the US and in Japan.
So – what’s it all about? Hosting is such a passive word, hardly doing justice to the potential of such sites. The current model is that companies can run their Web sites on C&W servers, paying for guarantees of service such as availability and performance. In the future, this model will become much more complex, with infrastructure service providers partnering with providers of commerce services, application services and business services in order to provide a one-stop service shop for the eBusinesses of the future. C&W know their strengths – with a telecommunications background, they are ideally placed to provide infrastructure services with the necessary service guarantees. Once established as a hosting company, partnerships will prove the key to C&W’s continued success. To quote one film, “build it and they will come.”
Cable and Wireless are now the seventh largest carrier of international traffic in the world. This is not to be sniffed at, but neither does it give the company a place at the top table. By moving into Web hosting, C&W are lining themselves up to catch the next wave – that of service provision. This specialisation is a logical step for a company which does not want to become an also-ran of the communications revolution. By doing so, C&W are making themselves an attractive proposition for the global content-and-delivery giants such as MCI WorldCom, Vodafone Mannesmann and AOL Time Warner. To specialise can also be to consolidate.
(First published 3 March 2000)
Big Cis hits the top
2000-03-28
Well, Cisco Systems have done it and suddenly hardware is ruling the roost again. The company’s market valuation rose yesterday to $555 billion, overtaking Microsoft which fell to $541 billion. For a company that has not been known for grabbing the spotlight, this really is quite a coup.
Cisco has, in a way, been the poor nephew in what is a very rich family. Certain companies caught the technology wave of the eighties and nineties and left their compatriots behind. Here we are not talking about the Oracles and Suns, although such companies have done exceedingly well in their own areas. There are a handful of firms which have become the de facto standard, and they have done it globally. These are Microsoft, Intel and, well, Cisco.
While it is easy to spot why Microsoft and Intel did so well, riding on the back of an IBM PC clone which undercut the young and arrogant Apple, the picture for Cisco is more hazy. Once there were many vendors of networking equipment, but despite the best efforts of the competition one rose up above the rest. It is difficult to say why this was – probabilities point towards the company’s early realisation that it was not selling equipment, but infrastructures. However it happened, Cisco has grown and grown its market share by continually developing its products and moving with the flow. Unlike most other networking companies, it has not grown through mergers; rather it has favoured a string of acquisitions of smaller companies, buying key technologies and expertise rather than direct market share. This strategy has clearly worked.
Cisco has seen its biggest rivals merge and merge again, to no avail. Back in 1994, Synoptics and Wellfleet merged to form Bay Networks and “give Cisco a rival,” according to a San Jose Mercury News article of the time. In 1998 Bay and Nortel then got together, ostensibly to challenge Cisco. They’re still at it – recently Nortel Networks made an \link{http://www.it-analysis.com/99-11-10-2.html?its,announcement} which, the company threatened, would bring Cisco to its knees. Of course, it didn’t. Cisco overtook the second most valuable company in the world, General Electric, a few weeks ago and now it has overtaken Bill Gates’ empire.
What is perhaps most remarkable is that Cisco has grown into a leader not only in networking but in telecommunications, a market which is not known for sharing business. The company’s technology-agnostic stance is a key driver of this: the problem to be solved may be the unrelenting growth of the internet, but the solutions are legion – be it LAN or WAN, the best technology wins and no option is ruled out, be it ATM, frame relay, xDSL or Ethernet, to name a few. This approach has kept the company flexible, enabling it to grow into new markets through its reputation in networking. As the world goes IP, the world is choosing Cisco.
Where now for the world’s most valuable company? Onward and upward is the answer. The desire for infrastructure is speeding up rather than slowing and, at least for the next couple of years, new developments are bringing new challenges which Cisco will be priming itself to exploit. The rise of mobile may open some chinks in the company’s architecture, which will be interesting to watch, but there is plenty of growth in big Cis yet.
(First published 28 March 2000)
Online photo clubs – the killer app for SSPs
2000-03-28
Yahoo! has joined the growing band of portal companies to offer online photo album services. The Yahoo! Photos site offers 15 megabytes of free Web space in which users can upload, share and even have photos printed and mailed back. The growing interest in this sort of service is doing more than threatening the high street photo services. It is also bringing Storage Service Provision, by stealth. And if these companies want to be storage service providers, they had better start pulling their socks up.
There are now a good handful of companies offering online photo management, with big names including AOL, Kodak and Hewlett Packard. Services do not depend on the ownership of a digital camera – with Kodak’s service, for example one can send a 35mm film and the company will notify by email when the photos have been posted to the site. Digital camera owners can benefit from online albums to which photos can be uploaded and shared with friends and family. It all makes some sort of sense.
So far, we are informed, the service has yet to make too much of a stir. This may not matter, as factors such as bandwidth and the availability of digital cameras will affect it in the short term. The potential is there – companies are currently setting out their stalls and grabbing mindshare, so as to take advantage when the time comes. It seems that, once people start building online albums, it is relatively easy to get hooked and once the expectation is there the market will grow considerably.
Where things get interesting is that expectations are starting to be placed on the portal companies as service providers. For example, it is assumed and expected that there is some form of disaster recovery policy. Yahoo’s terms of service disagree – quote: “YOU EXPRESSLY UNDERSTAND AND AGREE THAT: a. YOUR USE OF THE SERVICE IS AT YOUR SOLE RISK.” Not too good if you lose the family photo collection, really. This is an indication of the shape of things to come: if companies want to be leaned on for services, then they had better start understanding what the nature of a service is.
(First published 28 March 2000)
Directory sky hook too late for Novell?
2000-03-28
Novell was due for a shake-up and now it has had one. It was only a matter of time before the company moved its focus once and for all, away from its operating system heritage and into a directory-based future.
In Salt Lake City yesterday, Novell unveiled a new strategy that can be summed up with the d-word. The Internet has been the reason for Novell’s current instability – the market for a commercial network operating system kind of goes away when the whole world has standardised on a NOS which is available for free. Faced with a shaky set of results, the company has had little choice but to put its business firmly where it has been seeing the most success. No surprises that this should be back on the Net.
We have written about this before. We have had high hopes for Novell. The NDS technology is good, the vision is right and, up until recently, the market was wide open for a directory product. The only competitor – Microsoft – was so late with its Active Directory product, first announced in 1997 and finally launched with Windows 2000 a few weeks back, that all Novell had to do was occupy the space. How very, very easy life can be sometimes.
Novell seems to suffer from a most unfortunate sense of timing. In the end Microsoft proved stunningly accurate about the launch date of W2K including Active Directory, so Novell cannot claim that they did not know.
There are a couple of points in Novell’s favour. The first is that the product is tried and tested. Most pundits are recommending that W2K is allowed to bed in for a while before it is adopted more generally – this may throw Novell the slack it needs. The second point is that the company, having dropped its baggage, is free to work with whatever leads to its success. The company’s recent Linux announcements for its directory services are an indication of this – it is unlikely that Microsoft’s Active Directory will run on anything other than a single operating system.
Novell still stands a chance of success. There is no question about the importance that directory services will start to play as the Internet evolves to become a virtual services infrastructure. Given that Novell has lost its lead, whether it is NDS that is chosen may well prove to be mainly a question of slick marketing. Whatever it does, Novell had better do it fast.
(First published 28 March 2000)
April 2000
This section contains posts from April 2000.
Will USB become the system bus of the future?
2000-04-28
It is a surprise that nobody thought of it before. A flash memory device in the form of a Universal Serial Bus (USB) connector – plug it into the hub and it becomes near-instantly accessible as a drive on a Windows 98 system. This is exactly what a Japanese company - Shin-Nichi Electronics (SNE) – has done with the launch of the Thumb Drive. Though touted as a competitor to Sony’s memory stick rather than the beginning of a more general trend, it doesn’t take much of a leap of faith to see the potential for such a form factor.
In fact, somebody did recognise the potential of the USB port as more than a cable connection. Over a year ago Aladdin Knowledge Systems was giving out free samples of its eToken, a USB-based security dongle. This device, which can be carried on a keyring, can be used as a smartkey holding authentication and encryption information about a specific user. At the time that the eToken was launched, Aladdin was worried about the potential uptake due to the relative newness of USB. Today, there isn’t a motherboard or a laptop which does not have at least one USB port.
There are several advantages to these tiny USB devices – they are extremely light, portable and robust, and due to their plug-and-play nature they can be used by the least computer literate. Not to mention the fact that USB is rapidly becoming the interface technology of choice for PCs and peripheral devices such as PDAs and digital cameras. Now, with the advent of flash memory supporting half-decent data volumes, the USB-based form is becoming appropriate for peripheral storage. This move towards USB could have significant impacts on other interface standards, both outside and inside the box.
Efforts to build the internal bus of the future are lumbering forward, driven by the Infiniband consortium which was formed by the merger of two industry associations, Future I/O and System I/O. Infiniband offers gigabit transmission speeds managed by a switching fabric, all well and good. USB 2.0 products will only be able to support 480Mbps when they become available in the latter part of this year. While Infiniband may be compelling for PC-based servers, it is highly probable that USB will become the interconnect standard of choice for client and consumer devices. There is a drive towards integrated client PCs such as those from Hewlett Packard and Compaq (based, it should be said, on the iMac architecture) which consist of sealed PC boxes which can be extended using USB-based peripherals. Wyse has taken this one stage further with its Thinovation initiative. All of these devices have a common factor – they all rely on the expansion capabilities provided by USB.
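To put these transmission rates in perspective, here is a back-of-envelope comparison of how long each interface would need to fill a (then-generous) 64MB flash device. The figures are nominal signalling rates rather than real-world throughput, and the Infiniband rate assumed here is that of a single 1x link.

```python
# Nominal signalling rates of the interfaces discussed above, in megabits
# per second. Real-world throughput would be lower in every case.
RATES_MBPS = {
    "USB 1.1": 12,
    "USB 2.0": 480,
    "Infiniband 1x": 2500,  # assumed single-link rate, for illustration
}

def transfer_seconds(megabytes: float, rate_mbps: float) -> float:
    """Seconds to move `megabytes` of data at `rate_mbps` megabits/second."""
    return megabytes * 8 / rate_mbps

for name, rate in RATES_MBPS.items():
    # Time to fill a 64MB flash device at each nominal rate
    print(f"{name}: {transfer_seconds(64, rate):.2f} s")
```

Even allowing for protocol overhead, the gap between USB 1.1 and USB 2.0 is the one that matters for peripherals: a forty-second transfer becomes a one-second one.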
USB is more than a connection standard, it is becoming a means by which cheap, extensible computing appliances can be constructed and exploited. Devices such as the Thumb Drive and the eToken, coupled with USB-based modems, cameras, wireless networking and other peripherals are the shape of things to come, driving down the cost of computing whilst increasing its flexibility both in the workplace and in the home. To bring things full circle, Sony’s PlayStation 2 may support Memory Stick but it also includes two USB ports. It is difficult to believe that, of the two interface types, Sony’s proprietary solution will take over from a standard that is rapidly gaining global acceptance.
(First published 28 April 2000)
Sony goes Symbian: whence Palm?
2000-04-28
Sony shocked observers at the end of last week with the announcement of a collaboration with Texas Instruments and Symbian in order to produce a next generation mobile device. The announcement moved Sony squarely into the mobile mainstream, at the same time as raising the question – what happened to Palm?
In November of last year Sony and Palm announced that they would be collaborating on future versions of “the Palm Computing platform,” a.k.a. PalmOS. Palm would integrate Sony’s Memory Stick technology, whilst Sony committed to use the Palm architecture as a basis of “handheld consumer electronics products,” according to the press release. Last week’s announcement by Sony, TI and Symbian raises the obvious question about where next-generation wireless phones sit alongside handheld consumer electronics products, and what happens to Palm as a result.
It may well be argued that mobile phones and PDAs are different. This has certainly been Palm’s stance in the past, as we have already discussed in a previous article. However, as we argued, this position does not align with mainstream opinion, as WAP phones and devices such as the Nokia Communicator demonstrate that the merger of the two technologies is already with us. If Sony thinks the former – that never the twain shall meet – then Palm may already have a fight on its hands to keep the relationship alive. If PDA and phone are two sides of the same technology then the battle may already be lost for Palm.
Palm may not be too bothered, with a reported 75% of the world’s handheld computer market, but this figure is misleading. Again, if phone and PDA are kept separate then Palm is on relatively solid ground but if not, then the statistic should be put relative to the number of mobile phone users there are in the world – this would put Palm well into the minority. Ultimately, if sources inside Palm are to be believed, the company is more interested in growing its applications portfolio and becoming a platform company than it is in hanging onto its hardware base. This agrees with the alliance formed between Palm and Symbian, in which Symbian provide the underlying kernel and Palm provide the “user layer”.
Overall, then, all may not be lost for Palm. This is just one alliance amongst many and on the ground there is little to suggest that Palm’s success is waning. Sony’s collaboration with Symbian may be a disappointment to Palm, whatever they say, but rest assured there is life in the old dog yet.
(First published 28 April 2000)
May 2000
This section contains posts from May 2000.
Microsoft – down, not out, but not in neither
2000-05-01
Following the Microsoft trial has become an obligatory part of the job description of anyone in IT. There are two reasons for this. First, there are few individuals that have not had their jobs changed in some way by Windows, Office, Exchange, SQL Server or any of the other products in the Microsoft portfolio. Second, there are even fewer people that would not accept even a fraction of a percent of Bill Gates’ fortune. Even so it is difficult to see what impact the ultimate ruling will have.
For those who missed it, on Friday Judge Jackson revealed a “proposed Final Judgement” which, amongst other things, advocated the breakup of the company into two businesses. This was supported by a share ownership restriction meaning that Bill Gates and Steve Ballmer could only own shares in one or the other of the two companies. “Ha, that’ll show ‘em,” will have been the thought that ran through the minds of Gates-and-Ballmer wannabees across the globe. After all, this is a personal issue however well disguised in corporate legalese it may be. Other provisions included protection for companies which had given evidence against Microsoft in the trial, a levelling of the playing field for PC manufacturers that OEM Microsoft Windows and restrictions on Microsoft’s ability to use Windows to promote its own products over anybody else’s.
So what does it add up to? Let’s start with a couple of stories. Once upon a time there was a king who believed he was buying cloth so cleverly woven that only the most intelligent could truly see it. Once upon a time there was a company which so cleverly marketed its products, everyone in the world thought they would answer all their problems. Six or seven years ago Microsoft was such a company, the undisputed pretender to the IT throne. IT decision makers across the globe were implementing one-word IT strategies – “let there be Microsoft” – as they truly believed that they had found a company offering the silver bullet. In the former tale a little boy shouted “that man’s naked!” and all hell broke loose. In the latter, the many cries of “Microsoft is not the only answer!” have, in the most part, fallen on deaf ears. Even those that believe in Microsoft’s diabolic nature grudgingly accept that it is better the devil you know.
All good stories have a moral. The king’s new clothes may be summarised as saying “never trust a salesperson”. The Microsoft story has an additional moral, “know what you want.” Fair to say that, in most parts of the world, Microsoft is no longer seen as the keeper of the silver bullet. We have learned our lessons and it is for this reason that we may rest assured that, whatever Microsoft may have succeeded with in the past is no indication of the company’s continued success in the future, court case or no court case.
(First published 1 May 2000)
Phew! ISPs found not liable for email content
2000-05-01
In these days of information overload, it seems a bit of an anachronism to expect an Internet Service Provider (ISP) to be responsible for the content of messages passing through its site. Not to mention the fact that the ISP may not even be the origin of a message, merely a step along the way. Fortunately, a court in the United States ruled yesterday that one ISP – Prodigy Communications Corp. – was not liable for its “failure” to spot a threatening email passing through its servers. It has to be said that this must come as a bit of a relief for ISPs across the globe where, though internet laws may not yet be internationally binding, legal precedent is everything.
The email in question was sent by an (as-yet untraced) person pretending to be Alexander Lunney who was fifteen at the time. Young Lunney’s father sued for damages as soon as it was revealed that his boy’s details were being taken in vain – it is this damage claim which has now been quashed. This is the upside – the downside is that the impostor will probably never be traced.
Fair to say that cyber law has quite a way to go. Still grappling with the vagaries of eCommerce, it now has to deal with the problems of the wireless Internet. For example in whose jurisdiction is an off-world bank account? The recent MP3 copyright case may show how existing laws can be adapted to the cyberworld (as does this case) but it is fair to say that unheard-of situations are being invented every day and, no doubt, exploited by those that fail to see the difference between unethical and legal. Consider, for example, the merger of contract-free, pay-as-you-go telephone services (available in the UK) with the Wireless Internet – such facilities may offer true anonymity, with all of its strengths and weaknesses.
Every development marks a whole series of opportunities, coupled with ever more complex threats - the technological sword cuts both ways. We are already a long way off having a stable, relevant, global legal framework for the Web: before this can come about it will be necessary for things to slow down a while. This seems unlikely: the way things are going the technology revolution is still accelerating and has a good few decades in it yet. In the meantime rulings such as this one show that common sense is still a valid legal currency. Long may it continue to be so.
(First published 1 May 2000)
Tele2 – a niche wireless solution for the corporate masses
2000-05-05
It was unfortunate that Patricia Hewitt, eCommerce minister and Member of Parliament for Leicester West, had to go into hospital at the end of last week for a knee operation. Otherwise she would have been available at the launch of Tele2’s innovative wireless Internet service, which she has been closely involved in for some time. So - what’s it all about?
Tele2 have developed a hybrid radio and microwave-based solution which offers high-speed, always-on bandwidth. End-user customers install a “squarial” which connects to an Ethernet-based LAN device. The squarial communicates via radio to a local access node, which in turn is linked via microwave to a base station. A standard configuration for a small town is one base station and seven access nodes. Tele2 are currently offering a service of between 128Kbps and 1Mbps, with users being charged for the amount of bandwidth they require and the amount of data they transfer in both directions.
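As a rough illustration of this charging model, the monthly bill combines a fee for the chosen bandwidth tier with a per-megabyte charge on traffic in both directions. All tariff figures below are invented for the sake of the sketch; they are not Tele2’s actual prices.

```python
# Hypothetical tariff: monthly fee per bandwidth tier (GBP), plus a
# per-megabyte charge on all data transferred, up and down. Every
# number here is illustrative only.
TIERS_KBPS = {128: 40.0, 256: 70.0, 512: 120.0, 1024: 200.0}
PER_MB = 0.01  # hypothetical charge per megabyte transferred (GBP)

def monthly_charge(tier_kbps: int, mb_up: float, mb_down: float) -> float:
    """Return a month's charge: tier fee plus traffic in both directions."""
    if tier_kbps not in TIERS_KBPS:
        raise ValueError(f"unknown bandwidth tier: {tier_kbps} Kbps")
    return TIERS_KBPS[tier_kbps] + PER_MB * (mb_up + mb_down)

# A small office on the 512 Kbps tier, 2 GB down and 500 MB up in a month
print(round(monthly_charge(512, mb_up=500, mb_down=2048), 2))
```

The point of such a model is that light users on a fast tier and heavy users on a slow tier can both be billed fairly, without the flat-rate subsidies that land-based providers rely on.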
What is interesting about Tele2’s offering is that it is relatively easy to deploy and offers a high level of communications quality. Tele2 are actively advertising on \link{http://www.tele2.co.uk,their Web site} for landlords and small institutions to permit an access node to be situated on their (preferably flat) roof. The node requires only a mains connection and is no more than a foot high. Given this, it sounds ideal for business parks and campuses, not to mention smaller, less accessible towns. The high bandwidth, relatively low cost and always-on capability have resulted in a number of trials of the technology for security cameras in car parks and service stations. It is also of potential benefit for organisations that want to run Web sites in-house, rather than outsourcing them to external ISPs.
Because the service is packet switched and is purely aimed at data users, it does not need to be backwards compatible with voice communications facilities. The downside is that end users still need their existing voice or ISDN lines but this may change in the future when Voice over IP reaches maturity.
Prior to the launch, Tele2 has been running a pilot service in the Reading area of the UK, which is currently supporting 500 connections. The launch involves Leicester, Leeds, Bradford and Nottingham, with a wider launch planned for September. Tele2 is focusing on the corporate space, but its target market is clearly the smaller organisations which cannot afford the equivalent bandwidth from land-based services.
It is in keeping with the present speed of technological change that new, innovative ways of using technology should appear. Tele2’s technology and business model will broaden the reach of high-speed Internet and, if done right, will give the lumbering roll-out of land-based broadband technologies a run for their money.
(First published 5 May 2000)
We are all to blame for the I Love You virus
2000-05-05
If you didn’t know by now that there was a new virus about, you must have been on holiday in a place where the technology has yet to reach. Once again, some bright sparks have used their skills of innovation and technical expertise to ensure that they will have something to tell their grandchildren about. Once they get out of prison, that is. The poor, misguided youths have the FBI on their tail and it seems like only a matter of time before they are hauled before the international courts.
Don’t get me wrong. Clearly any human being that intentionally wreaks havoc on the scale that this recent attack has seen, is a weeping sore on the face of society and should be removed. There remains a nagging doubt, however – what bunch of idiots would design anything so fragile that a few students could put it out of action for days, particularly something on the global scale of the World Wide Web? Let’s face it, park benches are these days made out of concrete and fire-treated wood to protect against the expected acts of mindless vandalism that they will experience. Why do we not treat our fragile, Internet-based computing infrastructure with similar caution?
As with so much to do with technology, the answer is one of convergence. The hugely stable, resilient Internet was designed separately from the application suites and email systems, which were originally designed to run on closed corporate networks. It was a small step to design gateways which enabled email systems to exchange information over the Internet, but a giant leap would have been preferable. What we have ended up with is a mentality which says “Oh dear, I am getting email from a whole variety of unpleasant sources, and I can do nothing about it.” Nobody is taking responsibility, not the Internet Service Providers or the application vendors or the end users. All, however, are culpable. Mechanisms have existed for years to permit computer systems to be secured and sources of emails to be checked. The majority of the world’s computer users accept an operating system on which the end user is essentially the super user, and applications with only a minimum of security checking set up by default. There is an argument that the blame lies with the user, whose involvement is necessary for the I Love You virus to spread. But what chance does the hapless user stand, when every feature and function of the system is left wide open and accessible?
Some will be saying that Microsoft should be held in some way responsible for the spread of this latest virus, as it is the provider of the operating system (Windows), the application (Outlook) and the programming language (Visual Basic) amongst other things. But this is missing the point. Microsoft has provided what customer demand has sought. Admittedly the company can guide requirements and expectations but it cannot decide them – if users want poorly secured systems, they will get them. In the past we could perhaps claim technological ignorance but this line is starting to pale. No more excuses – as a global IT community, we must either act to implement systems that really meet our needs, or forever live in the shadow of being held to ransom by some bunch of mavericks on a faraway isle.
(First published 5 May 2000)
Web users may be going up, but service quality is not
2000-05-05
The UK Net population may be on the up, but unfortunately the same cannot be said for the quality of the services that said users are getting from Web sites, according to the results of two separate studies. The first, reported on \link{http://www.zdnet.co.uk,ZDNet UK}, was from Internet measurement firm MMXI Europe. MMXI found that, in just six months, the number of Internet users in the UK has risen from 7.8 million to 9.2 million – that’s a rise of nearly a million and a half. Meanwhile, on \link{http://www.theregister.co.uk,The Register}, a study commissioned by Sunrise Software found that, though 55% of companies noted that Web-generated enquiries were on the up, only 40% offered facilities for customers to email or otherwise communicate via the Internet.
It was not so very long ago that the Financial Times ran a mischievous little test in which it sent information request emails to the address provided on the Web sites of the major corporates. Only a minority of companies responded.
Clearly companies still do not get it. Most (if not all) major businesses now have Web sites, but this seems to be more through luck than judgement as companies are still not entirely clear what it is all for. The well-documented evolution of a Web site, from brochureware, through online catalogue and basic transactional, up to eCommerce site, should not need to be repeated (though we would be happy to provide a full explanation, should anyone request it). The Web is more than “just another Sales channel” – it is an integral part of an organisation’s business processes, branding and reputation. Companies which leave vital information, such as email details for information requests, off their Web sites are doing more than just missing out on prospective customers (a number of whom would rather look for another Web site than make a call or write a letter). They are sending the message that they are difficult to deal with, slow to react, unfriendly and backward. Not too good really.
Now for the good news. The Internet really does offer huge rewards for those that exploit it successfully. Provision of email details is one part of the story – the Web site of an organisation should be designed with the same care as the marketing collateral, the sales training and the management of operational services. The Web is the company’s shop front, and it opens onto the world. As the number of Internet users continues to increase, organisations still have the choice of risking lost business through inadequate Internet facilities. However there will come a point at which the risk becomes a reality and companies with shoddy Web sites will not survive. This isn’t a “get online or die” message, rather it is noting that the Web site is a fundamental, integral part of each and every business. Put simply, companies with the best Web sites, coupled with the best products and services, will do better than those without. Your choice.
(First published 5 May 2000)
G8 jaw, jaw, jaw about Cyberspace law
2000-05-11
It's good to talk, agreed delegates at last week's computer crime conference in Paris. Attendees from G8 nations - Canada, the United Kingdom, France, Germany, Italy, Russia, Japan and the United States - spent three days discussing the issues surrounding computer and internet-related crime. Nothing concrete came out of the talks, apart from a general commitment to international co-operation to combat the growing threat. But that is already a good start, yes?
No. The agreement reached by the delegates was to co-operate more fully, a process that is long overdue in starting. What is missing is any concrete actions coming from the G8 concerning exactly how cybercrime is to be combatted. At the same time, chinks are already appearing in the international armour. As reported on Silicon.com, different countries favour different approaches: for example, the European nations have already signed up to a Council of Europe treaty that favours tighter laws and regulations. However the US prefers self-regulation to form the basis of the fight against cyber crime.
Of course, the problems faced are not easy ones to solve. The trouble is that cybercrime, like everything else technology-related, is raising its ugly head in most unexpected ways. The "traditional" view of computer-related crime, immortalised in "The Cuckoo's Egg" by Clifford Stoll, involves seasoned hackers breaking into government computers and selling the uncovered secrets for drug money. The real world has moved on, however. Consider, for example, the denial of service attacks on high profile eCommerce sites (eBay, CDNow and the like), or the students breaking copyright laws by exchanging MP3 versions of their favourite CD tracks. Consider the "I Love You" virus, purportedly the most expensive criminal act that the Internet has seen so far. It appears that the fragility of the Web is under far greater threat than the unauthorised access to company or personal data. The arrival of mobile and always-on, high bandwidth technologies will no doubt bring with them whole new weaknesses to be exploited. For example, there is a strong possibility that the convergence of PDAs and mobile phones will result in a new breeding ground for denial of service worms. What is more, the arrival of always-on Internet connections will open the threat of attack to PC owners across the globe. Personal firewall manufacturers will have a field day.
The main problem is one of catch-up for the institutions. Legal practices and international conventions are way behind the technology curve. With the current political set-ups, this looks unlikely to change. Let's face it, if it takes over a year for Strasbourg to draft new guidelines for the expenses claims of Members of the European Parliament, what chance do we have of getting the G8 nations to agree on a global cybercrime framework before technology has changed the landscape beyond recognition? In corporate law, too, the world is moving too fast - consider the Microsoft trial, where the anti-competitive acts committed a few years ago are now hopelessly old hat. We need an internationally agreed, slick, flexible process of communication and mitigation of the threats posed, as they start to become possible and not after the events which will undoubtedly occur. One thing is for sure: the time for talking is well and truly over.
(First published 11 May 2000)
Storage Networks - the storage space goes ASP
2000-05-11
The fog surrounding the Application Service Provider (ASP) market is starting to clear, as companies move away from the traditionalist view and start to demonstrate solutions that reveal the real potential of ASPs. One such company is Storage Networks, a company that is coming from "over there" to spread the word about Storage Service Provision "over here".
So - what is an ASP? At the moment there seem to be two prevailing views. ASPs are generally accepted to be concerned with delivering "applications over the wire", where an application can be any software package. A more specific model is that being touted by enterprise package vendors, such as Oracle, Siebel and SAP. For these companies, the ASP model is a means of marketing their products to the Small to Medium sized Enterprise customers that have been unable to afford such products as an outright purchase. With a rental/lease-based ASP model, smaller organisations can benefit from such applications and, of course, vendors can benefit from reaching a previously untapped customer segment. Taking the two views above, the ASP model is both an architecture and a business model.
With a bit of imagination however, ASPs can be much, much more. Rather than considering ASPs as applications that can be served, the model should cover the provision of the widest possible variety of services. Services can be arranged as a stack, with lower level communications, storage and processing services serving higher-level information, application and business services. This view has already been discussed from a theoretical standpoint in the IT-Director.com ASP feature. Now, real examples of its use are coming onstream, not as valiant attempts by niche technology players but through companies who are already reaping huge rewards from the model.
One such company is Storage Networks, which offers a managed raw storage service to other ASPs. Storage Networks has implemented fibre-based Metropolitan Area Networks (MANs) in 36 US cities. Each MAN is used to connect a variety of types and configurations of storage equipment (for the technologists, Storage Networks is running both SAN and NAS equipment over the same fibre infrastructure). These configurations are then used to provide raw storage, backup/restore and data replication services to Storage Networks' customers who pay a fixed monthly fee per Gigabyte available. The company is launching a service next week in the UK, and intends to be in ten European cities by the end of the year.
All is not roses in the storage camp, not least due to the fact that the different storage vendors have yet to standardise on a single mechanism for managing storage. Storage Networks' customers can monitor their own virtual store through a read-only interface and can request repartitioning or increased bandwidth. All such requests are funnelled back to a management centre where updates are made using the proprietary software of each vendor. Against this backdrop, the other issue faced by companies such as Storage Networks is the accelerating requirement for storage. The company is finding that, despite the continued improvements in disk sizes and data rates, storage supply is only just keeping up with demand.
There are clearly still some problems to be overcome in the storage market, but let us not be distracted from the fact that the underlying trend is towards provision of storage as a service. Such facilities go way beyond the 10Mb of disk space currently allocated by ISPs, often with little or no guarantees of service levels. What we are talking about here is the evolution of IT towards enterprise scale, performant, available facilities that are delivered as a utility rather than in a crate. Faced with the increasing cost and complexity of technology, the service provision model is one that few organisations will be able to avoid forever.
(First published 11 May 2000)
Love and viruses: the upside of crying wolf
2000-05-19
Security software companies have been accused of hyping up the potential risk of new viruses. It stands to reason: the more dangerous it is out there in cyberspace, the more we shall turn to trusty virus detectors, firewalls and other protection mechanisms. It’s all good for business. With the ongoing saga of love bug viruses hitting the news over the weekend, even the protection software vendors are starting to get uncomfortable about the dangers of over-egging the viral pudding.
According to Dan Schrader, chief security analyst at Trend Micro (reported on News.com), there is a fear that all the noise and stink created around the love bug risked making the whole issue fade into the background noise. "If we cry wolf often enough, they'll tune us out entirely," said Schrader. This may be true, but at the same time there are positive signs that the PC proletariat are starting to understand the nature of such things as malicious attachments. After all, these new worms are not like viruses of old, which would infect executables without any outward indication of their presence. It is understandable how, say, a utility or game could be passed from one computer to another, carrying the virus with it. This did not have to be malicious – there have been several instances of viruses being passed on the cover disks of magazines. However, malicious email attachments require user intervention, namely a double click on an attachment that is a virus in all but name. It isn’t buried in the middle of a harmless program – it is a program in its own right, relying on a lack of caution on the part of a user. If having a lack of caution is a crime, I daresay we are all guilty (who did not double click on an email from a friend last Christmas, containing the message “Have a look at this!” and the attachment “Elf bowling.exe” or somesuch?)
This is where crying wolf is working in our favour. Far from being turned off by the constant media blare about the latest variant of worm X or Y, a large number of people have been on the receiving end of such malicious programs. I, for one, received two emails from acquaintances saying “do not open anything I have sent you! I was on the receiving end of the I Love You virus.” I have certainly been a little more cautious when opening unexpected executables or VBS files (Visual Basic Scripts). Not that I have yet received any of the latter, but you know what I mean. All the media attention in the world cannot interest us as much as some real experiences: there are few organisations that were unaffected by the recent attacks, and as a result both organisations and individuals are becoming a lot more savvy.
As is discussed elsewhere today [link to G8 article], we cannot fully predict the vulnerabilities that will be opened up by new technologies. However, signs are encouraging that the water level of eNous is rising. We are getting smarter – hopefully we shall learn fast enough to beat the next generation of viruses before they happen. After all, the solution is just a double click away.
(First published 19 May 2000)
Monterey on borrowed time
2000-05-19
Every now and then, industry observers have to stick their necks out and this is one of those occasions. No ifs, no buts – Project Monterey may be nearing its release date, but it will find that it has only a short life span. Why, oh why, I hear you ask, is the flagship OS to be left by the wayside? The answer is simple – Windows 2000 and Linux will make Monterey an irrelevance.
Project Monterey was launched a couple of years ago as a joint project between SCO, IBM, Sequent, Intel and Compaq. The plan was (and still is) to produce a Unix operating system for Intel’s IA-64 platform. The intention was that the Unix-based PC server would become as much of a commodity item as the Windows-based PC client – a standard supported by the majority of hardware manufacturers and software vendors, with the resulting economies of scale for suppliers and customers alike.
As usual, over the duration of the project, the technology world has changed. There are two major changes which should be looked at. First, we should consider Microsoft. At the inception of Project Monterey, Microsoft was on the back foot when it came to server operating systems. Attempts by the company to demonstrate the scalability of Windows NT were not impressing the new generation of infrastructure customers, and Unix was becoming the OS of the Web. Microsoft has taken its time and now, with Windows 2000, it is in a position to fight back at least at the lower end of the server market.
Second, Linux has moved from academia to the mainstream, winning mindshare as the commodity operating system that Monterey had designs on becoming. The strength of Linux’s position is in the fact that it has been ported not only to IA-64 but also to just about every platform under the sun. Last summer it was already being reported that Monterey consortium members, whilst remaining bullish about Monterey on IA-64, were tellingly quiet about porting the new OS to other platforms. The attitude to Linux could not be more different: last week, for example, IBM announced that services and software were now available for Linux on the S/390 mainframe. According to CRN News, Greg Burke, VP of Linux for S/390 saw this step as a revitalisation of the S/390 platform and of mainframes in general. This line just doesn’t tie up with IBM’s current stance about Linux being an interim step for customers wanting to move to higher end systems running AIX or Monterey – after all you can’t get much higher than a mainframe. Try asking Mr Burke when Monterey will be available for S/390.
Ultimately it will be the customer that decides, and it is here that we are already seeing the last nails in Monterey’s coffin. According to the Register on Friday last, Fujitsu Siemens already has around 60 customers who are trialling 4-way Itanium servers based on the IA-64 architecture. And what about the operating systems the prospective customers are choosing? “Most users want Windows 2000, others ask for Linux but hardly anyone is interested in Monterey,” said a source from Fujitsu Siemens. Telling stuff.
(First published 19 May 2000)
Microsoft SOAP on the ROPEs
2000-05-25
Hmm… very interesting. Microsoft seems to be keeping its SOAPy hands firmly to its chest. SOAP, or Simple Object Access Protocol, is a technology standard announced by the company as part of its DNA initiative. Essentially it involves using XML as a communications language to enable object and component services to be accessed remotely, for example over the Internet – a kind of long-distance remote procedure call. Great idea, but now Microsoft seems to be balking at the principle of opening up a dialogue (as it were) about this new “standard”.
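To make the “long-distance remote procedure call” idea concrete, here is a minimal sketch of the SOAP pattern: a method call serialised into an XML envelope that could travel over HTTP and be unpacked at the far end. The method name, parameter and namespace usage here are illustrative only, not taken from any real Microsoft service.

```python
# A minimal sketch of the SOAP idea: wrap a procedure call in an XML
# envelope so any endpoint that speaks XML can service it remotely.
import xml.etree.ElementTree as ET

ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(method: str, params: dict) -> str:
    """Serialise a method call as a SOAP-style XML envelope."""
    envelope = ET.Element(ET.QName(ENV, "Envelope"))
    body = ET.SubElement(envelope, ET.QName(ENV, "Body"))
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

def parse_request(xml_text: str):
    """Recover the method name and parameters on the receiving side."""
    envelope = ET.fromstring(xml_text)
    call = envelope.find(f"{{{ENV}}}Body")[0]
    return call.tag, {child.tag: child.text for child in call}

# A hypothetical "GetQuote" call, round-tripped through the envelope.
request = build_request("GetQuote", {"symbol": "MSFT"})
method, params = parse_request(request)
print(method, params)  # GetQuote {'symbol': 'MSFT'}
```

The appeal over binary protocols such as DCOM or IIOP is plain from the sketch: both sides need only an XML parser, not a matched object broker.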
There are several reasons why Microsoft does not want to reveal its hand at this stage. For a start, SOAP is not ready for use, indeed it is little more than a twinkle in the company’s eye at this stage. If Microsoft were to reveal all, it would be opening itself up to both ridicule and idea stealing. Let’s face it, there are plenty of people that would not be able to resist having a pop at the software giant, it’s a way of getting a bit of payback for the extortionate costs of the software over the years. As for stealing of ideas, imagine what would have happened if Sun had released details of Java to the IT population before it was fully fleshed out? There would have been no shortage of companies prepared to copy and even patent the fledgling ideas. Of Java, more later.
On The Register yesterday a second suggestion was raised. SOAP is to be an instrumental part of Microsoft’s Next Generation Windows Services (NGWS), also known as MegaServices. Detail about these is sparse but the oft-quoted example is of Microsoft Passport, which can be used as an authentication mechanism for any internet-enabled application. Other MegaServices could include transaction management, payment/billing, inventory and so on. Clearly some mechanism is required to access these services. SOAP is the ideal candidate, coupled with a communications handling mechanism such as the Remote Object Proxy Engine or ROPE. Now, suggested The Register, SOAP is being kept quiet not for its own sake but for the sake of NGWS, upon which Microsoft’s whole future may depend. Having seen Microsoft’s fears that the ongoing court case may kill NGWS, this may well be true.
There is one more reason why Microsoft is being reticent about the detail of SOAP. As we’ve already mentioned, SOAP is following the same path as Java and with good reason – like Sun, Microsoft does not want to lose control of the “standard” once it appears. Why? Because, to Microsoft, SOAP is a Java killer and more – the company has set its sights on the whole EJB/CORBA caboodle. With an XML-based standard for application intercommunication, why bother with the layers of complex interfaces that have evolved around the Java spec? That’s the marketing theory anyway – the reality is that, with ROPE, Microsoft is re-inventing the request broker in its own image and hoping that its adopters will squeeze those nasty competitors out of the picture.
We’ll just have to see how it gets on.
(First published 25 May 2000)
From blue screens to screen blues for handheld manufacturers
2000-05-25
Not so long ago, the bane of computers was the blue screen of death. Today, the world has moved on – handheld manufacturers are more concerned about having any screens at all, or at least ones worth looking at. Hewlett Packard are red-faced over the fact that their new PocketPC-based device can only display 4,000 colours, whilst Palm are facing supply problems of their own. These are exactly the sorts of events that cause shake-ups in the industry.
First of all, let’s look at Hewlett-Packard. The manufacturer with a reputation for the highest possible quality was embarrassed to admit that its new handheld had been beset by design problems. According to News.com, a 16-bit component was accidentally substituted with a 12-bit component, meaning that the number of colours that can be displayed has been constrained to 4,000. This is an order of magnitude less than that expressed in the marketing literature. It is quite a surprise that Hewlett-Packard should let this one through. It is also an indication of the reality that is faced by all technology companies developing complex products, where even a small error can have enormous consequences. The message is: buyer beware, don’t believe the marketing. Despite the fact that this error has not involved the king of hype, namely Microsoft, this may be just too much reality to bear, coming as it does just at the moment when the company is pushing hard its new PocketPC platform. This release of the now-defunct Windows CE is “new and improved,” covering the tracks of poor synchronisation, bugs and usability problems that beset previous versions of the software. The last thing the Seattle software giant needed was further evidence of technological weakness, even if this time it was not the cause.
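The arithmetic behind the embarrassment is worth spelling out: each bit of colour depth doubles the size of the palette, so the swapped component costs far more than four bits might suggest.

```python
# Why a 12-bit display part caps the palette near 4,000 colours:
# each extra bit doubles the number of representable values.
def palette_size(bits: int) -> int:
    """Number of distinct colours at a given colour depth."""
    return 2 ** bits

print(palette_size(12))  # 4096  -- the "4,000 colours" reported
print(palette_size(16))  # 65536 -- what the marketing promised
```

A sixteen-fold reduction in palette, from a component four bits short.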
Palm has had its own share of display problems – this time, due to supply being unable to keep up with demand. This is not just an issue for Palm, but for the whole LCD market, particularly with the need for enhanced screens for WAP phones and other electronic devices. Let’s face it, today if it hasn’t got a natty screen it’s probably not worth having. And as if screen shortages were not enough, Palm has had problems sourcing sufficient Flash memory. At the moment it is still possible to find Palm handhelds on the market, but some companies are already out of stock and are taking back-orders. Interestingly, no problems have yet hit Palm clone manufacturer Handspring. Maybe the latter had a stock surplus due to its initial difficulties in managing the glut of requests for the devices when they came on the market a couple of months ago. Also, the Handspring device does not use Flash memory. It could just be that the fault lies with Palm, being unable to manage its own supply chains. Whatever the case, the danger signs should be flashing for Palm, which cannot afford to trip over and lose its lead to increasingly slick competition from Microsoft and its partners. If Palm is having screen shortages now, it needs to get its act together before the summer, when a lowering of LCD prices will fuel a new boom in their use.
Palm still holds the lion’s share of the market, but the next few months will be make or break for the handheld manufacturer. Ultimately the company could be living on borrowed time as smarter, more functional mobile phones start to replace the need for handhelds. So far, Nokia, Ericsson and the like have shown only partial interest in borrowing technology from the handheld market, with Symbian as the main contender. The next six months will be very interesting indeed, and neither Palm nor Microsoft can afford to make any mistakes.
(First published 25 May 2000)
Is it time for a socially responsible Internet?
2000-05-25
This week, a Paris court ruled that Yahoo.com – that’s the US-based firm, not the French subsidiary of Yahoo – was in breach of French laws concerning the sale of materials with racist overtones. What is more, the company has two months to prevent French surfers from accessing any auctions of Nazi memorabilia and other items deemed racist. Ouch.
Of course, this is not the first time that auction sites have been condemned for hosting the sale of unseemly stuff. In November last year, eBay was taken to task by the Simon Wiesenthal Centre about its auctions of memorabilia which “glorify the horrors of Nazi Germany”. Indeed, according to this week’s New York Post, eBay was back in the spotlight, as it removed from sale the charred remains of what was reputedly a gun salvaged from the former Branch Davidian compound in Waco, Texas.
At first sight it appears that auction sites are attempting to have the butter and the butter money, as they say in France. eBay does not permit Nazi memorabilia to be sold on its German eBay site, as that contravenes German law. However the Waco gun was removed from the site due to eBay’s policy on not allowing gun sales. The two cases reveal a significant difference in policy. In the Waco gun case, spokesman Kevin Pursglove was quoted as saying “eBay won't host any gun sales.” However in the case of the German memorabilia, Pursglove was quoted (this time on \link{http://www.zdnet.com, ZDNet}) as saying "We expect eBay users to adhere to the policy and guidelines of the country in which they are living. It is not our role to police compliance." So it is unclear whether eBay is prepared to take a moral stance on the items that are auctioned, or not.
What is also unclear is the matter of jurisdiction. The French lawsuit has effectively said that a US company is not permitted to make available materials to French citizens via the Internet. This ruling is virtually impossible to implement and, indeed, its validity is unclear. Just as it would be possible for un Français to telephone a bid to an auction in the US, so such a bid can be made over the Internet. It is difficult to see how the auctioneer can be held responsible for the actions of the prospective bidders, particularly across international boundaries. Even if successful, it is even harder to see how such rulings can be enforced. As noted by President Clinton in the \link{http://www.mercurycenter.com, San Jose Mercury News}, trying to police the Internet is "like trying to nail Jell-O to the wall."
Ultimately, auction companies that want an international standing will require certain, globally accepted standards to be upheld. The example of eBay suggests that it is applying national laws in its offerings to individual countries, but it is applying its own (apparently Californian) moral standards across all its sites. The definition of “globally acceptable” may be a tough nut to crack – for example, even Barbie Dolls might offend the sensibilities of certain nations – but nonetheless a line has to be drawn between what is acceptable and what is not. Companies such as Yahoo and eBay may claim immunity from such standards, but if so they are not walking their own talk – Yahoo, for example, has been accused by civil liberties groups of “outing” authors of offensive postings on its chat boards (sometimes without even a complaint being made). Such companies are either proactively responsible for what is happening on their sites or they are not – they cannot have it both ways.
(First published 25 May 2000)
June 2000
This section contains posts from June 2000.
Cybercafes reach the farmers of India
2000-06-01
If there is one thing that India has got, it is railways. With railways come both stations and many, many thousands of miles of straight track, the latter including channels for telecommunications cables. Just north of Madras in the Bay of Bengal, a pilot project has started which will link five stations to the Internet by the end of June. BBC News tells us that at each station there will be a cybercafe, with Internet terminals available for rent on an hourly basis. Perhaps more importantly, one of the stations will act as a wireless base station providing Internet services to up to fifteen households.
From one perspective, what we are seeing here is how existing technologies can be combined innovatively to spread the reach of the Internet. Like Tele2 in the UK, which uses a combination of radio, microwave and land-based services to provide high-speed Internet access to business parks, this is about combining technologies in the most cost-effective way. In a way, each technology is competing against the others, encouraging progress and exerting a downward pressure on costs.
In addition there is a more important process at play here. Technology does not exist for its own sake, but as an enabler. To coin a cliché, the Internet is the great leveller, both for businesses and for individuals, and that is what will give countries still at the trailing edge of technology some freedom to catch up.
Not that India has anything to be ashamed of, of course. We have already written about how the second most valuable software entrepreneur in the world, Azim Premji, has built his fortune on India’s ability to provide high quality software at a lower cost. Similarly, Indian companies such as Hexaware have shown how innovation does not need to be at the expense of software quality. All the same, Indian cities have many advantages over the country’s rural communities. The Great Leveller is now turning its attention to this divide.
What can the Internet offer the rural population of India? The answer is simple – education. The wealth of information resources offered by the Internet, and the form in which these resources are offered, go hand in hand: the Internet never sleeps, and computer-based software never loses patience. Furthermore, once the connection is in place, the majority of resources are available for free.
There is still plenty to be done – this project is a pilot, after all. Many villages only get electricity once or twice a day (again, a problem to which technology may well have the answers). Internet resources may not be entirely suitable in their current form (but could be made suitable, once the need exists). Educational organisations used to working in one way may well find that it is worth their while supporting the efforts of this project, rather than supporting educational processes directly as they have in the past. One thing is for sure - the tendrils of the Internet may reach around the globe, but it will not truly be a global phenomenon until the world’s population has access to it. The Great Leveller will not rest until old and young, rich and poor, city and country, West and East, are afforded equal access to this fundamental resource.
(First published 1 June 2000)
SurfTime: BT drops the Beach Ball
2000-06-01
After much speculation and some ribbing from friend and foe alike, it looks like the end is nigh for BT’s SurfTime Internet service. Despite a relaunch effort last week, even spokespeople inside the company are seeing the writing on the wall. So – where is it all going wrong for BT?
According to BT, “SurfTime is the new way to use the Internet without paying call charges.” Unfortunately, this is wrong on two counts. SurfTime may have been new when it was first touted last year, but today a number of ISPs and other communications companies, such as Amazon, Freeserve and NTL, offer similar services. BT’s offering may now be competitive with these other providers, but it is not free – it costs £19.99 a month for a 24-hour service and £5.99 for evenings and weekends, plus (read the small print) users may pay an additional charge for the ISP service they connect to. Okay, there are no charges for individual calls, but a flat fee for the month does not really qualify as “without paying” in anyone’s language.
SurfTime’s launch on Thursday last week was followed by a ruling from Oftel which requires BT to allow other telcos to offer unmetered access to the Internet. Essentially, this will force the company to release its iron grip on the “last mile” between the local exchange and the home, at least for Internet access. This will usher in a whole new level of competition which, over the next few months, looks set to force BT to replace SurfTime with something genuinely competitive. BT has faced competition before, but from rival technologies: this time, it faces competition on the local loop itself, which has proved unassailable in the past.
What is more, a little research on The Register discovered that, though BT announced 20 ISPs supporting the SurfTime service, 17 of these belonged to the same service provider - Affinity Internet. Hardly a ringing endorsement. A BT spokesman was quoted on ZDNet UK as saying “If neither ISPs nor the public want it, it will die a death and so be it.” If that’s painting a gloss, we’d love to hear what BT spokespeople are saying behind closed doors.
What we are seeing here is the dying throes of a very old company, which has already announced a strategy to release it from the shackles of the past. BT has reached a point at which it can no longer rely on momentum to bring in profits, as proven by its less-than-successful results announced a month ago. The company’s new ventures should stand it in good stead for the future, as long as it is prepared to act as one of the new generation of communications service providers and not as an old-style, monopolistic telco. It is right that Oftel has acted to level the playing field, at least for unmetered Internet access: we wait with interest to see how BT will react. Any decisions it makes over the next couple of months could well seal the company’s future.
(First published 1 June 2000)
NetSanity brings one-stop publishing to the Web
2000-06-01
As we claw our way out of the primeval swamp of technology, we are hitting a wave of diversification. The move towards appliances and devices may have its attractions but it also brings a series of problems with it. One of these is how to handle the range of forms that information must take to reach this variety of devices. As always, where there is a problem there will be a start-up which claims to have found the solution and in this case it is NetSanity.
NetSanity’s primary goal is to bridge the gap between information display on the Web browser and on WAP-enabled smartphones. There are several possible solutions to this, depending on where one thinks the problem should be solved. NetSanity has gone back to what it sees as the origin of the problem – the stage of preparing the information for publishing. The company offers a one-stop publishing solution in which organisations can prepare their information in an agreed structure based on XML. NetSanity then provides an ASP-type service which takes the information and delivers it across the supported range of formats and devices. As well as HTML for browsers and the Wireless Markup Language (WML) used by WAP phones, NetSanity supports other formats including SMS, which can be delivered to existing mobile phones and pagers.
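The author-once, publish-many model described above can be sketched in a few lines. This is a purely hypothetical illustration – the XML structure, function names and output markup are invented for the example, not taken from NetSanity’s actual service:

```python
import xml.etree.ElementTree as ET

# A story authored once, in an agreed XML structure (invented here).
STORY = """
<story>
  <headline>BT drops the beach ball</headline>
  <body>After much speculation, the end is nigh for SurfTime.</body>
</story>
"""

def to_html(xml_text):
    """Render the story for a desktop Web browser."""
    story = ET.fromstring(xml_text)
    return "<html><body><h1>%s</h1><p>%s</p></body></html>" % (
        story.findtext("headline").strip(), story.findtext("body").strip())

def to_wml(xml_text):
    """Render the same story as a WML card for a WAP phone."""
    story = ET.fromstring(xml_text)
    return '<wml><card id="story" title="%s"><p>%s</p></card></wml>' % (
        story.findtext("headline").strip(), story.findtext("body").strip())
```

The point is that the source content is written exactly once; each new device type costs only one new rendering function, not a re-authoring of the content.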
A rather clever part of NetSanity’s offering is the preference facility it gives to browser and phone users. Rather like (the sadly passed away) PointCast, NetSanity provides a console (termed the SmartBar) that offers a selection of content – news, sports results and the like. However the user’s preferences are not stored on the device, rather on the server so users can keep up to date whether they are using a WAP phone or a Web browser. Preferences can be managed on a per-device basis – for example, a user may wish to see only the headlines on a WAP phone but the full story on the desktop.
NetSanity is keen to stress that it is more a conduit than a service provider. Its prospective customers are the content and service providers themselves, who can turn to NetSanity to get their message out to a wider audience. The model appears to be working: the company has already signed up Nokia to trial a NetSanity-based service.
The future target of NetSanity is to support the drive towards mobile commerce and location-based service provision. Both of these goals will pose new challenges for the company, as it will not only be the information which needs to change but also the processes and transactions involved. Even if NetSanity succeeds in the short term, it will face an even bigger hurdle later on. Whatever the case, it is good to know that innovative companies such as this will continue to exist for as long as there are such hurdles to overcome. The giants will not be able to do it by themselves.
(First published 1 June 2000)
NEON’s iWave bridges the business process divide
2000-06-07
World religions are defined by some as "different ways up the same mountain" - a phrase that could also be applied to the ever-converging domains of IT software. The border between enterprise packages and bespoke applications is becoming ever more fuzzy, and the question is moving from whether to build or buy, towards how to achieve business solutions with both. At the same time, the business (by which we mean all non-technical elements of an organisation) is playing an increasing part in defining technical solutions. In fact, in a number of areas there is little division between the two: take, for example, Customer Relationship Management (CRM) in which business capabilities are largely dictated by the underlying technology services.
One of the keys to resolving the increasingly disparate portfolio of applications has been the provision of integration software, a family which includes middleware, messaging and, more recently, Enterprise Application Integration (EAI) packages. This latter group does what it says on the box - it provides a means of communication between enterprise applications such as CRM and ERP packages. It is widely admitted that EAI companies have their work cut out, largely because the scale of the problem is far greater than can be resolved by simple means. Analyst firms report that EAI is in its infancy - it may be able to deal with standard integration paths between common applications but can flounder when faced with more difficult environments.
Rather than attempt to be all things to all applications, one vendor is adopting a different tack. NEON Systems, responsible for the software which has enabled Thomson Holidays to power its Just web travel service, is to extend its iWave range of EAI software to support business processes. This merits a little explanation.
iWave Integrator is a many-to-many EAI product. In other words, it can be used to enable communications between multiple enterprise applications. The product currently supports the gamut of CRM and help desk products, so if, for example, a customer service call was being dealt with using Peregrine, a request could be passed to Siebel to provide the customer's sales history. iWave does not use a hub approach; rather, a management console permits the registration of different packages and platforms.
iWave Integrator works by allowing the registration of a number of objects that are supported by a packaged application (such as 'customer', 'order' and so on) and the definition of a number of verbs which can be associated with each object. For example, 'customer' may support 'create' and 'delete' as well as more complex verbs such as 'make purchase' or 'change address'. Thus far, however, each application is acting as a separate entity - either of the packages, or none, can act as "master". All that is provided is a means of communication between the two.
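The object-and-verb registration model can be sketched as a simple registry. This is our own illustrative sketch, not NEON’s actual API; the package, object and verb names (and the sales-history handler) are invented:

```python
# Hypothetical sketch of iWave-style registration: each packaged
# application exposes objects, and each object supports a set of verbs.
class PackageRegistry:
    def __init__(self):
        self._verbs = {}  # (package, object, verb) -> handler

    def register(self, package, obj, verb, handler):
        self._verbs[(package, obj, verb)] = handler

    def invoke(self, package, obj, verb, **kwargs):
        handler = self._verbs.get((package, obj, verb))
        if handler is None:
            raise KeyError("%s.%s does not support '%s'" % (package, obj, verb))
        return handler(**kwargs)

registry = PackageRegistry()
# e.g. a help-desk call in one package asks another for sales history
registry.register("Siebel", "customer", "sales history",
                  lambda customer_id: ["order-101", "order-102"])
history = registry.invoke("Siebel", "customer", "sales history", customer_id=42)
```

Once every package’s objects and verbs sit behind a common registry like this, a workflow tool has everything it needs to string the verbs together – which is precisely the leap described next.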
Now, get ready for the exciting bit. What if, rather than using these objects and verbs as a common language for packages to communicate, they were used to build business processes in their own right? This is exactly what NEON Systems are doing. A new facility, to be known as iWave Business Flow, is on the launch pad: this enables the graphical construction of workflows that can then be executed using the objects and verbs of the packages available. In one fell swoop, the EAI platform has become a tool for combining package functionality to define and automate business processes. This is powerful stuff.
There may still be some weaknesses in NEON Systems' approach and products. iWave Business Flow has not yet been released, and its first version is likely to have its limitations, as first versions do. Also, iWave Integrator currently lacks connectors for the wide range of packages a fully-fledged EAI environment should be able to support (such as SAP, Peoplesoft, Broadvision and the like - although we are assured that these will be available within the next few months). Nonetheless, iWave is an innovative facility that bridges the gap between package integration and application development. It uses 'objects' and 'verbs' as its terminology, but it could so easily have used 'components' and 'services'. Furthermore, the iWave portfolio includes an Interface Development Kit which permits the creation of bespoke interfaces to legacy or hand-crafted applications.
The different approaches taken by IT software may well be different sides of the same mountain, but what they boil down to is delivery of software functionality that automates business processes. Coming from an EAI position, NEON Systems seems to have grasped this fundamental point and looks prepared to run with it. We can only hope that the iWave products, when they come fully integrated with a workflow engine, live up to their aspirations.
(First published 7 June 2000)
DOJ separates Windows from its reason
2000-06-07
Sometimes the simplest of statements can hide the most complex of matters. Take this one, for example: “The separation of the Operating Systems Business from the Applications Business.” As the whole court case was started on the basis of the line between the two being less than simple to define, it is worth delving a little into the detail: what, in technical terms, is the planned breakup said to entail? We need look no further than clause 7 of the final judgement. To quote:
"Operating System" means the software that controls the allocation and usage of hardware resources of a computer.
As far as the judgement is concerned, the operating systems business need concern itself only with the OS as defined above. The term “computer” is given a pretty broad definition, including PDAs, mobile phones and set top boxes (but, noticeably, not games consoles). Even broader is the definition applied to the term “Application”, comprising anything else you can think of. Which leads us to a very interesting situation, very interesting indeed.
First of all, Microsoft is currently shipping versions of its OS family with a whole raft of facilities that go way outside the scope of the definition of “operating system”. These include such high-level features as transaction servers and email facilities, but also lower level software such as calculators, media players and even text editors. The judgement considers all of these as “applications”, to be assigned to the applications business. The Microsoft OS business has been allowed a license in perpetuity to all “applications” which it currently sells as part of an OS package (apart from the browser, that is). In the future, it appears that it will be free to develop new applications, but each will be under the scrutiny of the compliance committee. This makes things messy for the company, to say the least – after all, who decides whether something is part of the operating system or not? For example, is internetworking software part of the OS? If so, what about secure internetworking software, such as encryption/decryption? Is a music player part of the OS? After all, it drives a hardware element of the computer.
The second issue is one of bundling. Microsoft is currently a customer-facing organisation, delivering operating systems and applications that work in harmony. However it is the applications that are of more interest to users than the OS. Computers are evolving into devices and appliances – essentially sealed packages that perform specialised functions. Think of Network Attached Storage or Email appliances, or the EasyPC sealed PC initiative that has spawned the iPaq and NetVista, or even set top boxes and internet phones. All of these are manufactured from both software and hardware components, and the user of each is encouraged to ignore what is happening under the bonnet. In other words, the only way the OS can gain attention is by self-promotion, à la Intel Inside. Microsoft’s success has been helped by its appeal to the business end user and consumer alike. However the proposed division could hide the OS for good behind an application fascia, forcing the company to become a pure OEM business – a far harder market to keep sweet than its traditional stamping ground.
Microsoft’s OS business may be permitted to develop new applications of its own, but it will have a standing start. It could buy its traditional competition (how ironic, if the new company acquired Corel and Borland), or start from scratch. One thing is for sure, it cannot survive on the strength of selling nothing but operating systems – the market is too slow and the potential for innovation is limited. The breakup of Microsoft will see two companies leave the Redmond fold – one is an applications business with good potential for growth, one is an operating system company that needs all the luck it can get to stay in the game. Windows owes a large part of its popularity to the availability of applications: without the latter, the former cannot survive.
(First published 7 June 2000)
Encryption: Should the RIP bill rest in peace?
2000-06-08
The debate over the Regulation of Investigatory Powers (RIP) bill rages on, according to Silicon.com, but at the end of last week a lone voice appeared on the scene to welcome its measures. At the heart of the bill is the issue over encryption keys, or access to them: as it mentions on Silicon.com, individuals could be jailed if they cannot produce an encryption key for data sent over the Net. This measure does appear a bit harsh, considering that flushing non-electronic evidence down the toilet as the Police hammer down the door is not yet treated as a jailable offence.
The lone voice in question was Frank Coyle, IT director at John Menzies. To quote: "We cannot underestimate the threat to businesses from organised crime using the Internet. I think we have to try it. At the moment we have nothing, and that puts the initiative in the hands of organised crime. If the government did nothing it would be accused of being inept."
So – where does the truth lie? So far an impressive array of organisations have lined up against the bill. Specifically, the British Chamber of Commerce, the Data Protection Commission and the Institute of Directors, not to mention numerous civil rights groups and seventy percent of respondents to a recent Silicon.com poll. Facing these massed ranks are the UK government (specifically, the Home Office) and Mr Coyle (not to mention the remaining 30% of Silicon.com pollsters).
Organisations against the bill are pretty clear in expressing their worries. Business organisations fear the damage that the encryption key measure might do to UK business, particularly in the light of Britain’s attempts to become an eCommerce hub for Europe, if not the world. Other organisations are expressing concerns about the disregard for the basic human right of innocence until guilt can be proved. Possession of an encryption key is not a criminal act, it is argued. The third argument is that the measures, draconian as they appear, will have little or no effect on cybercrime.
Meanwhile, the government’s fears about patrolling an encrypted Internet also appear to be well-founded. What with the European Union relaxing export restrictions on encryption technology, there is a real danger (in the eyes of the government) that existing surveillance methods will become inadequate or worse. In Japan, for example, 1024-bit keys are now de rigueur: even the current generation of supercomputers would take months to crack the simplest of messages encoded in this way.
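Some back-of-envelope arithmetic (our own, and deliberately crude – real attacks on such keys rely on factoring shortcuts rather than exhaustive search) gives a feel for the scale. Even granting an absurdly fast machine, counting through a 1024-bit key space is not a matter of months:

```python
# Illustrative only: assume a fantasy machine testing 10^15 keys
# per second, and ask how many years a full search would take.
trials_per_second = 10**15
seconds_per_year = 60 * 60 * 24 * 365
keys = 2**1024
years = keys // (trials_per_second * seconds_per_year)
# The answer is not "months"; it is a number of years with
# hundreds of digits.
print(len(str(years)))
```

Which is precisely why the debate centres on compelled disclosure of keys rather than on any hope of breaking them.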
The issue of encryption is fraught with hazards. It is undoubtedly true that strong encryption would hamper attempts to track down criminals that use the Internet to communicate. What is also true, however, is that even today, technology is providing workarounds for such criminals, such as the (legal) practice of steganography which involves hiding encrypted communications in other files such as video clips.
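To make the steganography point concrete, here is a toy sketch of the idea: a message is smuggled into the least significant bit of each byte of some innocuous carrier. The function names are our own, and a real carrier would be an image or video file rather than a blank buffer:

```python
# Hide a message, one bit at a time, in the least significant bit
# of each carrier byte (LSB-first within each message byte).
def hide(carrier: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

# Recover `length` bytes of hidden message from the carrier.
def reveal(carrier: bytes, length: int) -> bytes:
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length))

stego = hide(bytes(64), b"hi there")
assert reveal(stego, 8) == b"hi there"
```

Flipping only the lowest bit leaves an image or audio carrier perceptually unchanged, which is what makes the hidden channel so hard to spot – and the encrypted payload can of course be encrypted before it is hidden.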
Whilst it may be true that new laws are required to deal with new types (and means) of crime, we must not throw the legal baby out with the bathwater. The organisations that are crying out for this bill to be repealed (or at least, amended) are not just local pressure groups, but national organisations which represent our industries and our rights as individuals. It may well be true that strong encryption prevents surveillance, even to the extent that a tried-and-tested law enforcement technique becomes relegated to history. True or not, this is no time for the government to panic and implement a law that fails to achieve its objectives and causes a great deal of damage in the process.
(First published 8 June 2000)
Cheer up, Lou – $2.1 Trillion should ease your pain
2000-06-14
Lou Gerstner is a worried man. According to a leaked internal memo posted on The Register, he is fretting about many companies’ apparent seduction “by the lure of the magic market-cap wand.” He is worried that organisations are seeing the Web as key to the electronic city, as a license to print money. His fear is that organisations are not looking beyond eBusiness to other, equally fundamental changes in the global market economy. Oh dear, oh dear. Well, he may choose to seek some solace in a report just published by WITSA (That’s the World Information Technology and Services Alliance), an international consortium of 40 IT trade organisations. Among the findings of the report was the fact that global IT spending topped $2.1 trillion in 1999. Now that should give even the dour Mr Gerstner something to grin about.
It may be that Lou has a point. There is plenty of evidence that companies are e-Nabling e-Verything with the hope of huge returns, or at least the fear of losing their existing places in the market. Organisations like our own are quick to point out that businesses without an eBusiness strategy might as well pack up and go home. Are we wrong to suggest that the Web is changing business? No, not according to Gerstner. “Don’t get me wrong,” says Lou. “I’m convinced that eBusiness really is changing the entire basis of the global economy.” In other words, don’t ignore the threat and the promise of eBusiness. But don’t let it cloud the other, ongoing changes in stock markets around the world. And (perhaps most importantly) don’t think that any old eBusiness strategy will yield huge returns. Call us old fashioned, but we’re hoping that the Boo.com debacle burst that particular bubble.
The New Economy might be dangerous ground for businesses in general, but according to WITSA it is being very kind indeed to the technology sector who, let’s face it, stand to gain whether the businesses they supply succeed or fail. In the nicest possible way, it’s a bit like arms dealers who sell to both sides and clear off quick before the bombs drop. This is certainly the picture painted by the WITSA report, a summary of which is available \link{http://www.witsa.org,here}. Furthermore, the report paints a “bullish” picture of the future, with a prediction of $3 Trillion annual IT spend by 2004. The drivers it indicates are:
- continued expansion of the Internet, fuelled by wireless, broadband and the device explosion
- privatisation of government-owned businesses
- adoption of eBusiness facilities such as vertical electronic marketplaces
- harmonisation of international law concerning the electronic economy
- emerging markets, such as China, India and Brazil.
All in all, technology companies have a great deal to smile about, particularly companies such as IBM which are ideally placed to take a slice of the pie just by continuing on their present course. We are sure your feelings were heartfelt, Lou, but we don’t imagine you will be losing sleep for too long.
(First published 14 June 2000)
Spam ban, thank you Ma’am
2000-06-15
It all sounds so simple. Late last week the US House of Representatives’ Commerce Committee agreed to support a bill that could make the problem of unsolicited email, or spam, a thing of the past. According to \link{http://www.theregister.co.uk,The Register}, central to the bill is the requirement that spammers include a valid return address in their emails – this alone could be sufficient to deter the majority of their number.
Spam is as big a problem as ever. It was reported on News.com last week that a recent study found ISPs and free email services were capable of blocking up to 73% of unsolicited email. The downside is that the 27% figure refers to an ever-increasing pile of junk email, with spammers becoming increasingly determined to get through the Net. On a positive note, the anti-spam measures were not found to be preventing a single kosher email from being delivered.
So – why should a simple measure like the return address make such a difference? There are several reasons. First of all, as mentioned, the measure is likely to put potential spammers off. It is one thing to spam anonymously, but to move out into the open is an entirely different matter. Second, with the existence of a source of the email, it will be easier for the recipient to take steps to prevent further emails from the same place. The address can be blocked, or reported to the ISP or to an anti-spam organisation, or even flamed with a few hundred responses. Third, by making anonymous unsolicited email illegal, the case can be brought against individuals more easily than under present laws.
Of course, there will be nothing to prevent spammers from continuing to use insecure servers as through-routes for the thousands of messages that they send. The method is simple: dial up to an ISP, set mail.acme.com (where Acme have an incomplete or incorrect security configuration) as the mail gateway, then fire a thousand or so emails at the Acme server which then forwards them to the destination addresses. After a while, disconnect from the ISP, reconnect to a different ISP (with a different IP address) and do the same thing. Tracing the initiator of such emails is virtually impossible at present. It can be hoped that the number of servers that leave themselves open to this kind of attack is dwindling as sites become more security savvy, either before or after finding themselves victims.
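The misconfiguration described above can be checked from the administrator’s side. The sketch below shows one way a site might probe whether its own server relays mail between two outside parties; the host and addresses are placeholders, and a production test would be considerably more thorough:

```python
import smtplib

# Hedged sketch: does `host` accept mail FROM an outside sender
# TO an outside recipient? If so, it is an open relay of the kind
# spammers exploit. All addresses here are placeholders.
def is_open_relay(host: str) -> bool:
    try:
        with smtplib.SMTP(host, 25, timeout=10) as smtp:
            smtp.ehlo()
            code, _ = smtp.docmd("MAIL", "FROM:<outsider@example.com>")
            if code != 250:
                return False
            code, _ = smtp.docmd("RCPT", "TO:<victim@example.org>")
            return code == 250  # accepted relaying for a third party
    except (OSError, smtplib.SMTPException):
        return False
```

A correctly configured server rejects the RCPT stage for domains it is not responsible for; an open relay accepts it, and from that point on the spammer’s thousand messages carry the relay’s address, not the spammer’s.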
Unsolicited email is a product of an insecure global network, weakly legislated and poorly policed. Things will probably get worse before they get better, for example as the mass rollout of ADSL provides a new pool of insecure computers which can be used as unsuspecting hosts for spam forwarding. It will take a combination of better legislation and tighter computer configurations to make spam a thing of the past.
(First published 15 June 2000)
Sun Storage: the future is bright, the future’s Purple
2000-06-15
One of the big surprises about Sun’s recent storage announcement was that it is now making more money out of sales of storage devices than of servers. Now that’s food for thought. Clearly, this is the factor that has spurred it on to go head to head with companies more traditionally associated with the pure storage market, notably Compaq, HDS and EMC. Sun has announced a new storage strategy and a new product line, the flagship of which is the StorEdge T3 – code-named Purple – which fits 300GB in a box the size of a desktop computer.
Sun’s StorEdge range is designed to plug into AIX and Windows NT networks as well as Sun’s own Solaris environments. Linux support is also planned - the goal is to provide open, scalable storage to work in heterogeneous environments. Having said this, fifteen years ago Sun was one of the first companies to apply the “Open Systems” label to its Unix-based workstation and server products. In this context, “open” meant “can communicate with other Unix systems.” It is worth delving a little into what Sun means by open storage – look no further than Jiro.
Jiro (previously known as StoreX) has been on the cards for a long time – a good couple of years, if memory serves. Essentially Jiro provides a set of interfaces to enable heterogeneous storage systems to be managed, configured and allocated as a single, virtual hierarchy. The StorEdge range of products will be Jiro-enabled and, as Sun’s \link{http://www.sun.com/storage/index.html,Storage web site} proudly boasts, “many technology industry leaders support the Jiro effort.” This includes companies like Veritas and Legato, who together have the storage software market pretty much sewn up.
Trouble is, Jiro isn’t the only storage management standardisation effort in town. The Storage Networking Industry Association (which can be found \link{http://www.snia.org,here}) is looking to provide exactly what Jiro supports. Similarly, the Distributed Management Task Force is also working on storage management standards. The good news is that the two organisations are working together. So what of Jiro? Companies like Veritas freely admit that they are standards-agnostic and are supporting the work of SNIA and Jiro, not to mention the efforts of other organisations such as the Fibre Alliance. Companies such as EMC and Sterling (now part of CA) have also proudly presented their storage management facilities, all the time actively supporting the standards bodies.
All in all, storage management standardisation is not a pretty picture. No doubt, as with Java, Sun’s idea of an open standard is one that Sun has the casting vote over. Jiro may have its technical merits and it may have the buy in of other organisations, but it appears that the possibility of a truly open storage environment is a long way off yet.
(First published 15 June 2000)
Telecomms unbundling – watching the watchers
2000-06-15
The UK’s telecomms watchdog, OFTEL, is between a rock and a hard place. Back in March, it confirmed that the 1st of July 2001 was to be the “absolute deadline” for the unbundling of the local loop. At the same time however, PM Tony Blair was in Lisbon signing up to a pan-European agreement that fixed the deadline for the end of 2000. Now it is looking like the EC will hold OFTEL in breach of directives that are hastily being drawn up to carry forward the Lisbon agreement. A storm is brewing, and it looks unlikely to be the last for poor OFTEL.
This will not be the first time OFTEL has fallen foul of European directives, but the third. The watchdog has already been held to account for its less-than-agile approach to competition in the mobile market, not to mention an ongoing issue surrounding carrier preselection, that is the ability to use another telephone company without entering additional digits at the dial tone. As if it were not enough to be accused of sluggishness by the EC, when it acts, it is accused of pandering to the incumbent telco BT and penalising UK businesses in the process. At the end of last week, OFTEL’s attempts to kick-start the stalemate over the xDSL rollout were met with incredulity by some members of the Spectrum Management forum, set up by OFTEL itself to keep momentum in the unbundling process. Already, OFTEL is expressing fears about missing the unbundling deadline and this is without taking into account the fact that, according to the EC, it has already slipped the date by six months.
Not everything that OFTEL has done has been seen as such a bad thing, however. The decision made at the end of last month, that BT SurfTime was uncompetitive as BT did not offer a wholesale version of the product, was hailed as a “victory for consumers,” according to \link{http://www.zdnet.co.uk,ZDNet}. ISPs can now re-sell BT’s SurfTime in whichever way they choose: this will effectively open up the market to a whole new wave of unmetered access deals. This has both up- and downsides – on the upside, prices will come down still further than the currently offered deals; however the complexity of the market will no doubt increase, while the quality of service may drop in the short term anyway.
Whatever the final date for unbundling, be it Christmas or the middle of next year (and we rather suspect the former, after all Tony Blair is never wrong), the pressure gauge will rise and OFTEL will have no choice but to accept its share.
(First published 15 June 2000)
Microsoft registers www.NewStrategy.NET
2000-06-22
The IT industry is a strange one sometimes. On Thursday last week, Microsoft pulled the covers off its gleaming new strategy, an announcement that it has been building towards for many months. Products incorporating the fruits of this strategy will become available next year, which begs the question – what was the previous series of big strategy announcements all about?
Let’s work back from the answer. Last week Bill Gates announced .NET – an Internet-centric vision that sees Microsoft’s current product lines as being the building blocks for a seamless, global computing environment. As reported on The Register, we get Office.NET, Windows.NET and MSN.NET for starters (and you can work out the rest for yourselves). Products will be .NET-enabled by bolting on XML-based interfaces. The whole thing begs a few questions.
First off, there are issues of (if I may) a techno-architectural nature. It is one, relatively straightforward thing to draw a blueprint diagram of the “ideal” architecture for Web-enabled applications. There aren’t that many ways to skin that particular cat, at least not in theory. In practice however, things get a little more complicated for two reasons. Yes there is the issue of legacy – “this is not a green field site,” say the consultants, no doubt earning plenty of money in the process. There is also an issue of global complexity. Microsoft has drawn up a reasonably comprehensive framework of its own but it seems to be dependent on a couple of factors – that the whole world adopts it, and starts from scratch to do so. Either factor sounds just a trifle infeasible. Let me stick my neck out here: the way of the future will be one that supports the heterogeneous mass of complexity that already exists (and there is more on the way, what with Mobile and all). Single-company solutions, however elegant and widely adopted they may be, cannot succeed.
The second issue is one of strategy versus product. At the beginning of last year, Microsoft launched BizTalk, an XML-based framework to support business communications. XML is becoming a bit of an overachiever however – business-to-business traffic is not enough for the megalomaniac language, which (as SOAP, in partnership with Microsoft and now IBM and Sun – see the link?) is being touted as the format for communication between application components as well. Trouble is, Microsoft may well be quick to see the exponential potential of XML, and is changing its strategy on a monthly basis to fit; the products, however, are forever trying to catch up with the vision. Not long ago, delays were announced to BizTalk Server to include support for business processes, another conquest of the XML strategy. BizTalk Server may be out after the summer but by then may well be eclipsed, as companies hold out for BizTalk.NET, unlikely to be available for at least six months.
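For a flavour of what an XML-based, SOAP-style interface looks like on the wire, here is a minimal sketch using nothing but a standard library XML parser. The GetQuote method and its field are invented for illustration; real SOAP messages carry rather more baggage.

```python
# A minimal SOAP 1.1-style envelope, built and parsed with the Python
# standard library. GetQuote and symbol are invented example names.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, "GetQuote")       # the remote method being invoked
ET.SubElement(call, "symbol").text = "MSFT"  # its single argument

wire_format = ET.tostring(envelope, encoding="unicode")

# The receiving component parses the same text back into a method call.
parsed = ET.fromstring(wire_format)
symbol = parsed.find(".//GetQuote/symbol").text
print(symbol)  # -> MSFT
```

The point of the exercise is that both ends need agree only on the XML vocabulary, not on languages, operating systems or object models – which is precisely the pitch being made for these interfaces.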
There is one more question that must be asked – is the .NET strategy anti-competitive? Of course it jolly well is. What Microsoft has done is pulled together its own product lines as building blocks to act as a foundation for the future of the Web. The company is between a rock and a hard place: clearly, the future lies in the integration of today’s applications and operating systems, however Microsoft does not wish to concede that customers should have a choice of different platforms. Keen that its own portfolio should be used in preference to others, the picture the company paints is exclusively Microsoft. This is a flawed perspective, which will ultimately cause the Seattle giant more problems than it solves. It looks like the future holds plenty more opportunities for Microsoft to change its strategy.
(First published 22 June 2000)
IBM’s voice recognition grows up
2000-06-22
We knew that Ozzy Osbourne’s talents were diverse, but we didn’t expect to see him popping up as general manager of IBM Voice Systems. Okay, it is a different chap but somebody who associates himself with the hard rock icon must be either barking mad or know exactly what he is doing. Judging by IBM’s latest announcements in voice recognition, we would suggest the latter.
IBM has announced a complete revamp to its voice strategy, which reflects and confirms the direction taken by the IT market as a whole. Essentially, the technology landscape is moving away from the fat client architecture using general purpose PCs, towards a structure that concentrates hard processing on the server and performs certain specific functions on embedded chips in client devices. These two areas – thick, general-purpose servers and thin, function-specific clients – provide the model upon which IBM is to base its strategy for voice recognition in the future.
It has to be said that something had to give. As an advocate of the potential for voice recognition for many years, I felt obliged to run through the learning mode of IBM’s latest release of ViaVoice. After over an hour of reading sections of (most appropriately, I must say) Alice in Wonderland to my computer, I then attempted to dictate an article. The results ranged from the hilarious to the baroque – progress was slow, not helped by my own, frankly puerile giggling at some of the phrases that were generated. Great dream, but the reality sadly falls short. More success has been seen by companies such as SpeechWorks, which concentrate on specific, server-side areas such as telephone share trading or flight checking, but even these have been subjected to the mockery of the general public. Even assuming that the ability of computer software to interpret the spoken word does become a reality, the fact is that we do not speak as we write – dictation is difficult enough to beg the question: why bother? Which is why IBM’s strategy makes a lot of sense. Let’s look at why.
IBM is focusing on two areas. The first is the server, in which (like SpeechWorks), the aim is to integrate speech recognition into enterprise applications. According to \link{http://www.news.com,News.com}, planned for autumn release is WebSphere Voice Server with ViaVoice Technology, a suite of tools for helping call centres better use the Web. Also to come is a product that will enable Siebel users to integrate voice calls and Web-based queries. In addition, IBM announced a partnership with Internet speech specialist General Magic that is to target voice for eCommerce applications. The second area of focus is the embedded device: IBM is to release embedded ViaVoice, a Java-based software development kit which is targeted at PDAs and mobile phones as well as in-car devices and the like. According to W. S. (Ozzy) Osbourne, IBM is positioning its technologies as a framework for others to use rather than trying to go direct to the end-user.
Why is IBM’s strategy sound? To believe this, one first has to accept that voice recognition does have a future, albeit a more focused one than the generalised “human-talks-to-computer” model. Given this, on the enterprise side IBM is concentrating on specific application areas such as call centres – no doubt with limited vocabularies, though enterprise servers will have the processing power required to support more general recognition. On the client side, IBM is enabling voice features to be built into devices, rather than implementing such devices itself – it is likely that product developers (such as Motorola, likely to bring out an in-car facility) are better placed to identify and develop workable applications for voice.
IBM is not dropping ViaVoice, but they are recognising that the shrink-wrapped market is never to be a major target for voice recognition. Rather, they are focusing on areas that make good business sense and also give these still-young technologies a better chance to shine. Even then, it will be a while before voice recognition manages a reasonable interpretation of the lyrics of Mr Osbourne’s namesake. Not even IBM can do everything.
(First published 22 June 2000)
European tech investment rides the storm
2000-06-28
There has been much in the press in recent months about the dot-com bubble. It either burst, or is looking decidedly leaky. In either case, it doesn't seem to have affected the continuing rise of interest, and injections of capital, into the European technology sector.
According to a report in Tornado Insider, a pan-European survey from PricewaterhouseCoopers saw venture funding in technology companies up 70%, to 6.8 billion euros. Overall, about five thousand individual investments were made with an average value of 1.4 million euros. Unsurprisingly, the largest share of investments went to software companies, with business-to-business eCommerce and wireless companies both proving popular. According to the PwC research, the trend is continuing upwards in 2000. So - what was all that about a shake-out in technology stocks?
According to Marco Rochat, the change may not show up in the trend charts, but it is still profound. "We don't think the market crash has changed much, just where the money is going," said Rochat. Reading between the lines, the technology bubble may well not have burst, but one bubble certainly has - that of throwing money at all possible ventures, in the hope of making some fast returns.
We have recently seen what we hope are the last vestiges of this practice, which is as doomed as it is foolish. Recently, we heard reports of potential investors saying, "put money into the mobile space. Don't ask questions - just invest!" Scary stuff, as it is based on the fundamentally flawed assumption that new companies in this arena are more likely to succeed than to fail. The message of the tech stock crash earlier this year was that there is no guarantee of success, not even for high-profile players. Having learned that lesson, venture capitalists are now bringing discernment and foresight to bear. Companies are springing up which enable the quality of ideas, the strength of products and even the capabilities and skills of the staff to be tested before any investment is made - it can only be considered surprising that VCs were not making these checks in the past.
The technology roller-coaster ride is thrilling, and there are great rewards at stake. However, this does not prevent investors from looking before they spend. It is telling that, even given the additional care that is being taken prior to funding decisions, the same number of investments (or more) is being made. This can only mean one of two things - either a careful eye is steering investment towards the good companies rather than the less-good, or the start-ups themselves are coming to the table with more than a good idea, enthusiasm and flair. Either brings an increased maturity to the game, which will add to the stability of the market in general. Long may the trend continue.
(First published 28 June 2000)
Ain’t the PC dead yet?
2000-06-30
There are only a few drink cartons and strips of gaffer tape left to indicate the presence of PC Expo, which came to a close at the end of last week. One of the debates it spawned was whether there is any future in the Personal Computer: the answer was, of course, as diverse as the IT market itself. Essentially there are two camps, the gadget freaks and the PC diehards: who would ever have thought that PC companies would be the ones looking staid next to the new generation of technology companies?
A major strand of activity at PC Expo came from companies such as Palm, Handspring and Sony, all touting their wares as the future of technology. Certainly there is a place for the device – there may be more PCs than there are handhelds, but the latter is about to merge with the mobile phone, which has already overtaken the PC in worldwide sales. Given the rate of churn in the mobile phone market, in two years’ time there will be few phone users who do not have PDA application functionality on their phone.
At the same time, PC manufacturers were quick to explain how the market for PCs is going full steam ahead. Based on current statistics the prediction is for 1 billion PCs to have been sold by 2005. According to a study released last month, one of the drivers for PC sales is the PDA – users require a central repository for the data that is spread across multiple devices. Enter the PC, in its role as “the mainframe in the living room.” Trouble is, there are a couple of factors that the report appears to have missed.
The first of these is broadband technologies. Ralph Martino, VP of strategy and marketing for IBM’s personal systems group, was quoted on \link{http://www.news.com,News.com} as saying that broadband communications, in both the wired (xDSL) and wireless (UMTS) forms, would provide a backbone making the PC even more indispensable. Trouble is, broadband is also the enabler of a new technology model – that of the Application Service Provider or ASP.
Individuals and businesses with broadband Internet access will turn more and more to services available over the Web. The reason for this is simple cost-effectiveness: it will be cheaper to do so than to buy and install the applications. Broadband games users, for example, will not have to purchase a copy of a multi-user game before going head to head against their pals; rather, they will put in a credit card number and start to play. The end-node device need only be capable of receiving and displaying the graphics.
Maybe the debate is centring on the wrong topic. Many PCs will be sold in the future, but they will be very different from the ones we use today. The self-contained units based on the Easy PC initiative, produced by companies such as Dell and HP, are an indication of what is to come. Further indication came at PC Expo, with IBM’s impressive demonstration of a watch-sized PC. The PC architecture may never die, and why should it? It may not be optimal, but it is a perfectly reasonable basis for computing. What is already on the slab is the need for expensive, complex, noisy, error-prone combinations of hardware and software, either in the home or in the office. This model of computing has never worked, and the sooner we can get it in the ground, the better.
(First published 30 June 2000)
Guess how Clinton signed the Digital Signature bill?
2000-06-30
A major piece in the security jigsaw dropped into place at the end of last week, as US President Bill Clinton travelled to Philadelphia to sign a bill giving electronic signatures the same legal status as handwritten ones. Quite why he had to travel all that way in this wireless world is beyond us, but all the same the Signature signature (sic) represents a very important step indeed.
Anyone who has attended security conferences over the past few years will have realised that there is very little left for the technologists to do. Most of the major problems – authentication, non-repudiation, encryption and the like – were solved years ago and the biggest problem facing security vendors is now how to ensure the adoption of such technologies. One of the main blockers to this adoption process has been the legal basis of the digital signature itself.
Digital signatures employ a public key mechanism. Two keys are used, one private which is used to encrypt the message, and one public which can decrypt it. The rather clever effect of this pairing is that, if a message can be decrypted using a person’s public key, then it must have come from the person as nobody else could have encrypted it. Hence we have the concept of non-repudiation – it becomes possible to guarantee the source of a message.
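For the curious, the clever effect described above can be sketched in a few lines. This is a toy, textbook-RSA illustration with tiny made-up numbers – real systems sign a cryptographic hash of the message and use keys hundreds of digits long:

```python
# Toy RSA-style signature with tiny primes -- illustration only, not secure.
# All the numbers here are made up for the example.

p, q = 61, 53              # two small primes (real keys use enormous ones)
n = p * q                  # the modulus, shared by both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent (the "public key" with n)
d = pow(e, -1, phi)        # private exponent: e * d = 1 (mod phi)

def sign(message_digest: int, private_d: int) -> int:
    """'Encrypt' the digest with the private key to produce a signature."""
    return pow(message_digest, private_d, n)

def verify(message_digest: int, signature: int, public_e: int) -> bool:
    """Anyone holding the public key can check who produced the signature."""
    return pow(signature, public_e, n) == message_digest

digest = 42                            # stand-in for a hash of the message
sig = sign(digest, d)
assert verify(digest, sig, e)          # a genuine signature checks out
assert not verify(digest + 1, sig, e)  # any tampering breaks the check
```

Because only the holder of the private exponent could have produced a signature that the public key verifies, the sender cannot later repudiate the message – which is exactly the property the lawyers needed.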
Security facilities are built into email systems, transactional systems and vertical applications, but they have not been getting the use they deserve. For example, the only emails that we have seen using digital signatures have been those coming from security vendor companies. This lack of uptake has been due in part to the all-or-nothing principle – if nobody is doing it, then nobody does it. It does, however, leave a weak link in the chain: it is possible to hold up an email as evidence of a commitment, but it does not provide a legal basis of its own. Also, items such as contracts still need to be signed and posted, or faxed, before they are acceptable: both options are slower, more onerous and more costly than a purely electronic means. That is, until now.
Once again of course, it will be necessary for other countries than the US to adopt the measure before it will really kick in. This will happen – it is only a matter of time. When it becomes possible for parties to agree contracts and other transactions electronically, the final nail will be hammered into the coffin of paper-based communications.
(First published 30 June 2000)
Sun’s second thoughts about Open Source Solaris
2000-06-30
Sun is developing a bit of a reputation for raising expectations to the highest levels then sheepishly admitting it cannot carry them through. It happened with the standardisation of Java, when Sun lodged the Java specification with standards body ECMA before withdrawing its application a few short months later. It looks likely to happen with Jini, the much-touted plug-and-play standard for devices which has thus far failed to deliver a product. And last week, it was the turn of Sun’s open source initiative, on the rocks only five months after it was announced.
What’s the problem? In this case, it seems to concern the scale of the task. There are nearly ten million lines of code in Solaris, and Sun did not feel it could just plonk it onto the mass market without “making it user friendly” first. It is understandable that massaging ten million lines of code would take a while, but a couple of questions remain.
The first is – what is so unfriendly about the code? Fair enough if this equates to adding a few copyright notices and generating an HTML code browser (à la Java). Less reasonable is if the code needs restructuring or additional comments to make it readable, or worse – are there parts of the code that are currently too convoluted to see? This brings us to the second question – surely a plan was put in place in January, so what has changed to delay it? It may be that, on investigation, Sun realised that the problems with the Solaris code were going to take longer to solve than expected. Now, it may be that this is an attempt to overstate the issue – we commentators are always looking for a wound to rub salt in. But the fact remains that Sun is having to lower expectations about when and how it releases its code to the open community. Probably the biggest issue is this: Sun has realised that, given the profile of Solaris, there will be a horde of points-scorers ready to identify weaknesses, bugs, design faults and security holes in the code, and Sun must do everything in its power to minimise any risks.
Sun’s code giveaway will come, but more slowly than was previously announced. Sun is likely to hit its Q3 target, but only for certain sections of the code. According to Anil Gadre, Sun’s vice president and general manager for Solaris software, “the other thing we are finding out is that maybe people actually wanted certain parts and not the whole thing.” Sun is therefore considering releasing the code in a piecemeal fashion. Gadre’s remarks sound a little like Sun is hedging its bets (use of the word “maybe” is the giveaway), and will provide a good fig leaf if the time comes to stagger deliveries of the code. Sun will release the code sooner or later, but more with a whimper than a bang.
(First published 30 June 2000)
July 2000
This section contains posts from July 2000.
EC to Microsoft: We just don’t trust you
2000-07-06
Just two days after denying the Wall Street Journal news report claiming that the European Commission would block Microsoft’s investment in a UK cable company, both parties confirmed on Friday that this was indeed the case. According to The Nando Times Microsoft no longer plans to up its stake in Telewest Communications plc, a deal which would have seen the giant investing $3bn to gain control of the company.
Microsoft has never been cagey about its real intentions when investing in cable and other communications companies. The plan has always been to develop new markets for the sale of Microsoft software and products: the company knows full well that the cash cows of the PC market will dry up one day. However, we are talking about one of the biggest corporations in the world here, and it has a bit of a reputation for being over-bullish. Hence the anti-trust case which is still running its course, and hence also the stance of the EC, which is determined to do what it can to ensure Microsoft does not engage in anti-competitive practices.
It is easy to take a pop at Microsoft but at the same time it is worth considering the other side of the argument. In blocking the deal the EC has effectively prevented a US firm from making a $3bn investment in Europe. This is an indication of the kind of muscle that the EC now holds – without the details of the case we can only hope that the decision was made wisely, and quite clearly it is not something that can happen too often before it starts to damage the economies of the member states. Equally clearly, the EC is so fearful of the monopolistic might of Microsoft that it is prepared to step in and act.
The message that comes through loud and clear is that it is not over yet for the software giant. We have said in the past how the company’s every move would be subject to the most intense scrutiny – having been found guilty of abusing its position (albeit subject to appeal), most onlookers will watch each deal with no little cynicism. The failure of the Telewest deal is indicative that the EC has teeth, and that Microsoft’s corporate future can be compared to walking on eggshells. Bill Gates has already expressed his worry that the DOJ ruling will limit his company’s ability to innovate. Even before the antitrust case reaches a conclusion, it looks like the chickens are coming home to roost. The road ahead could be rocky.
(First published 6 July 2000)
Web service: still no cigar
2000-07-06
Oh dear, oh dear. Some of the largest corporations in the world have had a year to make improvements, but cannot yet claim to deliver the level of service that is expected of them. At the end of last week, \link{http://www.theregister.co.uk,The Register} reported a Rainer UK study into web responsiveness of the UK FTSE 100 and the US Fortune 100. By “Web responsiveness” read “how long it takes for a company to respond to an email sent via its Web site.” The conclusion of the study was that “40 per cent of the leading UK and US public companies are failing to take the Web seriously as a communications channel.” The results included the following:
· Of the 200 companies, 148 provided a reasonable Web-based contact mechanism. Of these, 113 responded to the contact email within 30 days
· The most responsive companies sent a reply in an order of minutes, with pole position being taken by the UK’s National Power
· The three companies that took over 20 days to respond were all technology providers – Colt Telecom won the wooden spoon, followed closely by SBC Communications and Dell Computer.
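The study’s headline figure can be roughly reconciled from the numbers above – a back-of-the-envelope sketch, since the survey’s exact counting method isn’t given:

```python
# Rough reconciliation of the Rainer survey figures quoted above.
total = 200            # FTSE 100 plus Fortune 100
with_contact = 148     # offered a reasonable Web-based contact mechanism
replied_30d = 113      # of those, replied to the email within 30 days

no_contact = total - with_contact    # 52 sites with no usable contact point
silent = with_contact - replied_30d  # 35 that never replied within 30 days
failing = no_contact + silent        # 87 companies falling short either way

rate = 100 * failing / total
print(f"{rate}% failing to take the Web seriously")  # 43.5%
```

Counting both the sites with no contact point and those that never replied gives about 43.5 per cent – within shouting distance of the study’s “two in every five.”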
Stephen Waddington, managing director of Rainer in London, was reported to be appalled by the findings of the study. “Two in every five of the Fortune 100 and FTSE 100 Web sites are little more than corporate wallpaper,” he claimed.
Waddington was right to be appalled. The Internet has been taking some stick in recent months, with some major casualties of eCommerce proving that clicks-and-mortar ain’t necessarily going to win over bricks-and-mortar. It is becoming abundantly clear that the success stories of the future will be those companies that can run successful businesses in both the physical and virtual domains.
It is fair to say that email is not the be-all and end-all of eCommerce. The cliché “the competition is only a click away” refers to difficulties purchasing goods and services, rather than getting a response to an electronic enquiry. However, this survey is a good indicator of the integration between clicks and bricks. A year ago, eCommerce was seen as a mechanism for reaching out to the global marketplace. Six months ago, it was driving the need for back-end application integration. Today, eCommerce is recognising what many businesses have always known - that customer service is the major differentiator. Companies achieving sub-hour response times on queries have clearly had the wherewithal to acknowledge this and to do something about it. The other companies have a stark choice: to learn the easy way, from surveys such as this, or the hard way as customers vote with their mice.
(First published 6 July 2000)
Oracle’s NC relaunch – It’s not about winning, it’s about taking part
2000-07-06
Guess what? An Oracle spin-off company is to launch a new type of computer, called the Network Computer. Hang on – haven’t we heard this before? Let’s just check the facts before we go on.
Oracle’s first attempt at the NC came in 1996, hot on the heels of the negative press the PC was receiving. Total Cost of Ownership was the name of the game – the PC was turning out more expensive than its list price had implied. It is academic whether Larry Ellison was driven by a brave new world vision or by a desire to kick Bill Gates while he was down: in either case he failed. Microsoft’s Zero Administration Initiative may have been a fig leaf, but it served to allay the overhyped fears of the time. In any case, the momentum behind the Windows-based PC has proved unstoppable.
We knew in November last year that Oracle would be relaunching the NC. Surely the company can’t still be chasing the rainbow of preventing its greatest rival from taking over the world? It is fair to say that the landscape has changed considerably over the past four years – the Internet has arrived and shaken up IT vendors, end-user businesses and consumers alike. You’ve read the stories before, involving great riches but – oh look – some failures too. You know that the future lies in devices and appliances, and that anything goes in the thin client world of the browser-based Internet. You know that services are moving more and more online, what with eCommerce hosting, electronic marketplaces and the stealthy arrival of the Application Service Provider or ASP. Against this background, Oracle’s NC has become an Internet access device, a Net appliance promising a low-cost point of entry to the Web. The question is – will it succeed?
There is no shortage of companies bringing consumer Net appliances to the market. Companies like Netpliance, which only last week increased the price of its I-opener device from $99 to $399, not to mention Intel and Emachines, are all sure that a market exists for “pure” net devices, that is computers which can do little but access the Web. One thing is for sure: this market can only exist if consumers have high-speed Internet access – this may be true for cable or ADSL connected users, but these are not yet in the majority even in the United States. In the UK, ADSL trials may have started but the general roll-out is not expected to complete before June next year. In the meantime, companies such as The Free Internet provide 0800 ISDN access for a one-off yearly payment. The second issue is whether the range of services currently offered on the Internet is an adequate replacement for software packages on the PC. Online games still require users to install software; the Microsoft standard word processor is not yet available to consumers over the Web.
For organisations with a budget, such as corporates and government/educational establishments, the NC model starts to make more sense either for companies running a server-based computing model or for those trialling ASP services. Even then, a Citrix-style model is currently needed to ensure that all required applications can be provided over the wire: this model will not suit all comers.
There can be no doubt about the validity of the thin client model. What is less certain is the maturity of the server side of the equation. Even once thin client becomes the dominant model, there is little reason why Oracle should win the lion’s share of the device market as one of the key factors is the interchangeability of devices. Oracle may well succeed in increasing the momentum behind Net appliances, but it is unlikely to claim the market for itself.
(First published 6 July 2000)
Red Sun is rising on Linux
2000-07-14
And the conclusion is – Linux will not conquer the desktop or the laptop, but will win on embedded devices. This isn’t idle speculation: let’s face it – when a significant number of electronics manufacturers from the world centre of such products line up behind Linux, then the rest of us should take note.
Linux isn’t necessarily doomed on the desktop. Indeed, it might well have faced a rosy future if it wasn’t for the question: “what is the point?” The world has already chosen an operating system and hardware architecture which, whatever its faults, is proving adequate for most uses. Linux will not succeed on the desktop any more than, say, Windows 2000 – neither gives a user sufficient additional value to merit the swap. There will always be advocates for desktop Linux but the mainstream has already flowed one way downhill and would take some pushing to get it down a different route.
On embedded devices, however, we can see a different story. The advantages (and disadvantages) of embedded Linux have already been covered, but advantages do not a product make. How different the world appears when companies like Sony, Fujitsu, Toshiba, Mitsubishi and so on – 23 of them in all – line up behind the operating system. This isn’t one company setting a strategy to give it USP against its competitors, or a hopeful start-up looking for a niche. The message is clear: Linux is a perfectly adequate operating system for us to use in our devices. So much so, that we want to work together to make it even better.
Sony has been one of the loudest advocates of Linux. Already it has announced its support for the Linux-based TiVo video appliance, which can store up to 30 hours of TV programming. On ZDNet in March, it was noted that a Sony representative revealed the intention to use Linux in future generations of the Playstation. “We needed a stable operating system,” explained a tongue-in-cheek Phil Harrison of Sony Entertainment America Inc.
So – what is enticing Japanese companies down the open source route? The obvious reason is cost – apart from the obvious research and development investment, there are no charges for licensing Linux, which brings down product costs significantly (compared to licensing, say, Microsoft or Palm products). The second reason is the chicken-and-egg argument of choosing what everyone else is using. This is what makes it clear that embedded Linux is being taken seriously – everyone is doing it. There may also be an element of Japan seizing the opportunity to leap-frog the US software industry, an area in which Japan has had only limited success in the past.
There is one other consequence of the Linux move. Leaving the PC industry aside, electronics companies have traditionally taken proprietary approaches to platforms. Take, for example, the games system market with each of Sega, Nintendo and Sony guarding their own platforms for their own games. With Linux at the core we may well see an opening up of such platforms, with differentiation being on brand and functionality rather than on available software. Whatever happens, there can be no doubt that Linux has won over a large and powerful proportion of the electronics industry. It may not win on the desktop PC but given its huge potential elsewhere – plus the potential for embedded devices to put the squeeze on desktop computers – it is unlikely to be too upset.
(First published 14 July 2000)
MCI WorldCom and Sprint run into the sand
2000-07-14
All is not well for either MCI WorldCom or Sprint, as the failure of the proposed merger between the two companies has left each in danger of being taken over. While this may not be such a bad thing for Sprint, whose smaller size and mobile technology capability make it an attractive addition to the portfolio of any major carrier, it can only be seen as damaging for Bernard Ebbers’ company. Coupled with the outages at its MAE West facility that sent shock waves across the Net, last week was not one of the best ever in MCI WorldCom’s history.
Not that the deal was very likely to go ahead. As we reported \link{http://www.it-director.com/99-09-29-1.html,here} in September last year, the deal hit rocky ground as it left the starting blocks. At the time MCI WorldCom was already in trouble with the EC over its takeover of Cable and Wireless, and the proposed takeover of Sprint caused an immediate hostile reaction from the Department of Justice. The final decision by the two companies came Thursday last week, two weeks after the DOJ started the legal process of blocking the $120 billion merger.
MCI WorldCom must be devastated. The deal with Sprint would have added to MCI's portfolio some capabilities that the company wanted in order to keep its position as one of the major US and international telecommunications companies. The deal was about mobile: Sprint may be the number 3 long distance carrier in the US – the reason the DOJ wanted to block the deal – but this is not what attracted MCI WorldCom, which urgently needed a slice of the burgeoning mobile market. Now the merger is off, the company must think again while steeling itself against a raft of takeover bids of its own (not to mention a share price which has dropped by over a third since the merger was first announced – down to $47 from $75). BT has been rumoured to be considering a purchase, as have NTT and Deutsche Telekom.
Where next for MCI WorldCom? If it is not bought first, maybe it will consider buying Orange if the latter's deal with France Telecom runs into the weeds. This would at least give the company some mobile capabilities, albeit not as extensive (or as geographically attractive to the US-based company) as would have been the case with Sprint. All is not lost, but it will be a good few months before MCI WorldCom can put these events into the past.
(First published 14 July 2000)
Microsoft for rent
2000-07-14
If it's Microsoft, it's good, right? Well in this case it just might be. On Friday the company launched its software-for-rent strategy in which software licensing will be paid for on a subscription basis rather than as a one-off fee. This announcement is expected to be the first of a series that will align the company with the principles and practice of Application Service Provision (ASP), or the delivery of applications over the wire.
The main strengths of the subscription model for applications lie in addressing the current issues of software delivery and licensing. The current approach causes multiple versions, incompatible installations and (perhaps most infuriatingly) the need to pay for irrelevant parts of bloated software bundles. Let’s be honest here – how many times have you really used that copy of Microsoft Access? Furthermore there is the constant bugbear of having to pay for upgrades to packages in order to resolve bugs in previous versions. Wouldn’t it be great if, for a single yearly fee, we only had to buy what we want and all upgrades were free? Yes – if the price is right.
The only flaw so far in Microsoft’s strategy is the issue of pricing. In pilot studies, the company took the list price of a package and divided it by 24, effectively meaning that if you are likely to use a package for more than two years without upgrading then you might as well buy it outright. That sounds a bit steep to say the least – in other words, one advantage Microsoft will not be promoting over shrink-wrapped software is that of cost. Of course this does not have to be the only costing model. It should be possible to buy a package for a month, for example, or even for a minute (say, to open and print an attachment created by an obscure package). Certain packages – the obvious ones being word processors, spreadsheets, email and Web browsers – should rightly command a premium as they are the most used (but conversely, maybe should also be subject to quantity discounts). The fact is that the issue of cost has yet to be fully fleshed out, by Microsoft and everybody else. It is clear that different application types will need different models – for example it is unlikely that any corporation will be running SAP on a pay-as-you-go basis. However the definition of these models – and how they fit together – will take time.
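The pilot arithmetic above is simple to sketch. A minimal illustration, with a hypothetical list price (the $480 figure is invented, not a Microsoft price):

```python
# Hypothetical sketch of the divide-by-24 pricing described above.
# The list price is invented for illustration.

def breakeven_months(list_price: float, divisor: int = 24) -> int:
    """With a monthly fee of list_price / divisor, count the months
    until the renter has paid as much as an outright buyer."""
    monthly_fee = list_price / divisor
    months = 0
    paid = 0.0
    while paid < list_price:
        paid += monthly_fee
        months += 1
    return months

# A $480 package rented at $20 per month breaks even at 24 months -
# use it any longer without upgrading and buying outright was cheaper.
print(breakeven_months(480))  # -> 24
```

As the sketch shows, the divisor *is* the break-even point in months, which is why the two-year figure looks so unattractive next to shrink-wrapped software.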
Ultimately we see one of the greatest strengths of the ASP model to be granularity. Everything costs, but it should be possible (indeed, we need it to be possible) to pay for specific software functionality on an as-needed basis rather than purchasing applications just in case. This is true for office applications, enterprise packages or anything else. Over the next few years, hopeful vendors will attempt different ways of enticing corporate customers to invest in their own approaches. The customer, or the business case, shall decide.
(First published 14 July 2000)
eCommerce comes in from the cold
2000-07-21
The bubble that is eCommerce may have burst, but that does not downgrade its potential for companies new and old. This is the message that may be garnered from recent events, notably the closure of News International Network News service, the acquisition of CDNow by Bertelsmann and the go-ahead given to LetsBuyIt.com by financial analysts.
Perhaps it is the closure of the Network News office in London (with the result that 30 staff are being made redundant) that is the most telling. The intention is to roll production of the online versions of newspapers such as the Times and the Sunday Times back into the paper-based newspaper operations. Clearly running Network News as an autonomous operation did not work; however, we can rest assured that online versions of these papers will continue to be produced.
Secondly, Bertelsmann is buying the beleaguered CDNow for $117 million. CDNow’s shares have tumbled from $21 last July to the current level of $2.80, and the company has been searching for a buyer since early this year. Bertelsmann will keep the CDNow brand going, but the company will become just a front for part of its eCommerce division. Ironically, CDNow also recently closed its London office.
Following last minute discussions, banks are now satisfied that the LetsBuyIt.com launch is worth the risk. By the time you read this, the company should have been launched.
Put all three of these things together and a pattern emerges. The launch of LetsBuyIt.com is an indication that, despite the negative publicity surrounding dot-com companies, there is still money in them there hills. The Gold Rush may be over but the gold mines are still profitable. What is interesting, though, is the change of perspective. LetsBuyIt.com wasn’t given free rein based on the (now disproved) assertion that being on the Web is a licence to print money. The company had an uphill struggle to convince the banks that it was all worthwhile. The other assertion which can now be laid to rest is that dot-coms are the only way of doing business. Both Network News and CDNow have found that their new-and-improved business models were no substitute for the old, established methods, and both have now been brought back in line.
Things are not as simple as this, of course. Bertelsmann has gone through substantial restructuring, not all of it successful, to meet the demands of the Web. However it has shown that it can compete. Similarly, by closing Network News, News International are saying “we don’t need a separate company for this – we can change ourselves to meet the demands of the Web.”
This is all sobering stuff and is indicative of the maturity level that the Web has now reached. The promise of eCommerce remains, but not to the detriment of all that went before it. There is still room on the Web for the Bertelsmanns and News Internationals among the Amazons and LetsBuyIts, and vice versa.
(First published 21 July 2000)
Would mad cows use mobile phones?
2000-07-21
It’s niggling doubt time. Do mobile phones push out harmful radiation or not? Let’s ask the scientists. Trouble is, can we trust them? It isn’t a case of corruption, but of contradictory evidence that is then used by our own, dear politicians to promote their own agendas. In the UK, the starkest example of this was the unfortunate case of “la vache folle”, as they say over the channel. British beef was so, so safe to eat that the then Minister of Agriculture even went on television with his daughter, who was “encouraged” to eat a beefburger on film. We know politicians are manipulative, but to see them stooping so low in public still comes as a shock, especially as pockets of the human form of the disease – CJD – have now been traced to production methods of baby food and school dinners. The over-riding conclusion that we have to reach (apart from the dubious nature of politicians) is that science cannot necessarily be trusted. The adage that “there is no evidence to prove a link” does not mean that there isn’t a link, just that we are too primitive to find one.
And so to mobile phones. To coin a phrase, there is no evidence to prove a link between microwave emissions from mobile phones and brain cancer. Sure, the frequency band is close to that used by microwave ovens, which heat by exciting water molecules. Sure, heat scans of phone users show the area of the head around the phone is warmer than its surroundings. But – no evidence to prove a link.
It does not matter if the risk is small. While uncertainty reigns (and it always will, until a link is found), it is essential that any potential risk is seen to be minimised. We saw this with the BSE tragedy, in which thousands of cows were slaughtered following the world-wide ban on British beef. What we have not seen so far is mobile manufacturers working to minimise the risk of microwave emissions. Not, that is, until now.
The Cellular Telecommunications Industry Association (CTIA) in the US has gained agreement from mobile manufacturers to publish the emission levels of mobile phones. So far the only company to agree to the August 1 deadline is Ericsson, but Nokia and Motorola are said to be following suit. In itself, this may not sound like much but this step is a major one. In publishing this information, companies are opening a door to scrutiny. It is inevitable that an emissions league table will be published, and equally inevitable that phones with higher emissions will be rejected in favour of lower-emission phones. Over the years, phone manufacturers have been producing mobile phones with decreasing emission levels. Market forces will give added impetus to further improvements, such as the integration of additional shielding.
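The league table predicted above is trivial to assemble once manufacturers publish their figures. A sketch with invented handset names and invented emission values (none of these are real published numbers):

```python
# Illustrative only: the kind of "emissions league table" the article
# anticipates. Handset names and SAR figures are invented.

handsets = {
    "Handset A": 1.4,  # specific absorption rate, W/kg (hypothetical)
    "Handset B": 0.8,
    "Handset C": 1.1,
}

# Rank from lowest emissions to highest - the order in which, the
# article argues, market forces will favour them.
league_table = sorted(handsets.items(), key=lambda item: item[1])
for rank, (name, sar) in enumerate(league_table, start=1):
    print(f"{rank}. {name}: {sar} W/kg")
```

The point is less the code than the consequence: once the numbers are public, ranking them is mechanical, and the pressure on higher-emission handsets follows automatically.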
The mobile phone issue may or may not be a red herring, but following the BSE calamity we should have learned that trusting science at face value was not an option. Even if the risk is small, it is worth encouraging any move to reduce that risk still lower.
(First published 21 July 2000)
Unbundling on course, but so was the Titanic
2000-07-21
Unknown to its navigator, the Titanic was several miles off course when it ran into difficulties with ultimately tragic consequences, seeing its doom for the first time only as the mother of all icebergs loomed out of the fog.
Consider recent events surrounding handling of the unbundling issue. Both BT and Oftel are sending out mixed messages with respect to both opening up exchanges and roll-out of ADSL services. We only have to look at a single recent story on VNU.com to get the picture:
• “The EC has proposed a draft law requiring European Union member states to unbundle their local telephone networks by 31 December 2000”
• “Oftel … is "well on schedule" to meet the December 2000 deadline, even if unbundled services wouldn't be widely available by then” – a contradictory statement in itself and one which falls short of EC requirements
• “A BT spokesman said the company is on course to meet the deadlines agreed with Oftel, and unbundled services would be working by 31 July 2001.” – another contradictory statement, considering Oftel’s “well on schedule” remarks above.
Other information in the press and on Oftel’s web site muddies the icy waters still further. Rival telcos are to gain only limited access to local exchanges between now and the end of the year, with up to a hundred to be made available for “pilot projects.” The concept of a pilot project does not sit well with that of a “limited service” – Oftel’s term to describe what would be available by year-end.
Clearly it is in BT’s interest to slow down the process in any way it can. ADSL is a key element as it is the “killer app” that will give subscribers real reason to transfer allegiances, hence BT can use any time it has left to roll out ADSL kit to local exchanges, giving its OpenWorld service the incumbent position (“Why wait months for the competitor service? Sign up with BT today!”) and also tying up valuable floor space in the local exchange. Let’s face it, no company with any nous would open the doors to the competition before getting its own act together. Unfortunately for BT, past delays in its ADSL strategy have got the company into the situation in which it now finds itself. It is running out of time, and even the timescales it thought it had agreed were cut by six months in Lisbon.
Meanwhile, the competition are not just settling back and taking this. BT is treading a fine line: if it can be proved (and it would not take much) that BT is damaging the business of other telcos, it could be sued for enormous sums. Companies considering legal action are Fibernet, Colt Telecommunications and Global Crossing; it is likely that others will follow suit.
So where’s the iceberg? The hard deadline is the end of 2000. Neither BT nor Oftel are happy with this, but they do not have much of a case. The EC will not allow BT to restrict its rivals as it is currently doing, nor will the competitors themselves. The difference with the Titanic is that the navigator did not know that the ship was off course, and nobody peering through the fog could have guessed the scale of the iceberg or the damage it could do. In this case, however, there is no fog, only a smokescreen from a company trying in vain to protect assets it no longer really owns.
(First published 21 July 2000)
Napster movement goes underground, people responsible
2000-07-31
In the oft-downloaded words of the Carpenters, “We’ve only just begun…” Napster may have until Friday to shut down its operations, but it doesn’t take much to realise that the injunction will not spell the end of online piracy of copyrighted materials. The judgement was against Napster alone, and not the individuals using it – according to \link{http://www.news.com,News.com}, pursuing users would require lawsuits to be filed against individuals. Similarly, peer-to-peer duplication products such as Gnutella, Centrata and Akamai (covered \link{http://www.it-analysis.com,here}) are not affected, as this too would require individuals to be pursued. It looks like, whatever happens in the future, individuals hold the key.
Let’s get one thing straight. Piracy of any creative work is a bad thing, as it steals from the livelihood of its creator, not to mention the agencies that work on his or her behalf. These may be perceived as “the enemy,” capitalist men in suits who like nothing more than to make a fast buck off the backs of the innocent public. But the major record labels and publishers do provide a necessary service – without them, for example, Harry Potter might still have been a manuscript languishing in the bottom of a drawer. All the same, recordings are priced high and the temptation to make copies of them has been too great for most of us to resist. If there is anybody out there who has never, at some time, taped an album or a song off the radio, speak out! We want to hear from you.
Together, digital-quality recordings and the Internet changed a problem that the publishing industry saw as a necessary evil into one which could have catastrophic consequences. This is probably a reasonably accurate analysis, should the duplication and distribution of MP3s not be stopped. The question is, can it really be stopped? One company – Napster – has gone down, but others (such as AppleSoup) exist. Is the RIAA going to sue every bunch of students that put together a few lines of code to permit peer-to-peer file sharing? Let’s face it, even an Instant Messaging facility and an email service are sufficient to allow an exchange of information on pirated files, not to mention the files themselves, which could be exchanged automatically using facilities such as ftpmail.
Sooner or later, attention has to turn from the pirating software to the pirates. In the UK in the eighties, a levy was put on blank cassettes to recompense recording companies for lost sales, however it is difficult to see what similar mechanism could be put in place for the Web.
Given the vagaries of human nature, it is possible that an honour system is the only one that will work. Stephen King may have had only limited success on the first day of his online publishing venture but the idea is sound: download a chapter of my book, and if you pay I’ll publish the next one. The only difficulty is that it is a one-shot operation – once the entire book is published, being on the Net it will be subject to the same issues as any other online work. The concept of a “second edition” goes out of the window.
As more and more people get online, and as connection speeds improve, the problem of piracy can only increase. This is inevitable, whatever lawsuits may take place – savvy students and others the world over are unlikely to take too much note of the results. If the recording industry wants results, it must appeal to the individuals responsible for both its existence and its possible demise. It may get results, but it will not get everything its own way.
(First published 31 July 2000)
Microsoft Windows for free
2000-07-31
Microsoft looks scarily close to becoming innovative. Sure, the ideas may have been developed elsewhere, but with its games console, set-top box and wireless appliance, the company is really making a go of it. The company previewed several new technologies at an analyst briefing at the end of last week. Meanwhile, at another briefing, Microsoft updated journalists on the operating system state of play – very much the Microsoft old school. These two lines of attack – old and new – will define the essentials of the company over the next few years.
Forget dot-Net, forget C-Sharp. All those big announcements make nice wallpaper, but they are not really where the action is for a product company. For Microsoft, shareholder value is about shipping product, and in the past it has to be said that they have been remarkably successful at it – more successful, indeed, than any other company in the world.
Traditionally, Microsoft has made its money selling three product lines: operating systems, office applications and development tools. It has wiped the floor with the competition in all three areas, but broad as this market may be, sooner or later it will be saturated. If we look at the operating system announcements, they are in fact (just like their predecessors) details of upgrades rather than anything new. The media player may be updated, the Web browser may support new forms of content, but the underlying technology remains essentially the same. With the new release of Windows ME, aimed at replacing Windows 95 and 98, Microsoft is moving closer to a code base shared with Windows 2000 (and its own replacement, codenamed Whistler). Clearly this benefits Microsoft, but it is unlikely to cut much ice with the end user. Even the recent forays into new look-and-feels have been little more than a rehash of the old. An OS is an OS is an OS, and that’s all there is to it.
It would be impossibly un-PC (sic) for Microsoft to ditch this model, particularly as there’s life in the old cash cow yet. To all intents and purposes Windows is Microsoft and to question it would be like the Pope denouncing Christianity. So – Windows is still very much strategic, but so is the “great software on any device” tag line – enter the new product lines.
By entering the domain of the appliance – the games console, video engine, wireless PDA or whatever – Microsoft is recognising one crucial fact. With appliances, nobody cares about what is under the bonnet; the external functionality is more important than the internal components. This is the rationale behind other companies (such as \link{http://www.it-analysis.com/00-07-14-1.html,these}) using Linux – there is no operating system sell, it is the box that counts. Also, with Linux there is no operating system buy as it is licence- and cost-free. Microsoft cannot compete at the OS level as others are giving it away, hence it is competing at the level of the complete device. This is a dangerous game – hardware margins are notoriously lower than software margins – but the company has no other choice.
By bundling Windows with the device, Microsoft is essentially giving it away. In doing so it gives itself an exit strategy from the oncoming drought in the OS space. Perhaps more importantly, it can move on without losing face – a necessity in the technology market, where image counts for far more than people give it credit.
(First published 31 July 2000)
TANSTAA Free Internet Service
2000-07-31
Oh, how the wired world must envy the UK! Over here in Blighty, we have free Internet services, and what a wonderful place it is to be. Well, it would be if only such services worked. Unfortunately the realities are proving shockingly different from the hyperbole.
Take LineOne, for example. This ISP was originally a subscription-based service, before adopting the “free” model where LineOne took part of the cost of the call to finance the service. In April this year, LineOne launched a joint venture with low-cost call provider Quip, in which £5 per month of telephone calls would qualify the subscriber for free Internet access. The initial service was swamped – users reported slow connections after many failed connection attempts: an infrastructure upgrade speeded up the free service but it continued to degrade and at peak times was almost unusable. The problems also impacted on performance for the “paying” customers. Two weeks ago, a letter was sent to the Quip subscribers saying free access would be terminated in September, and the original £20 cost of the Quip box would be refunded in call charges. So – no loss financially, just in time and effort.
Second we have Breathe. At the end of last week the company took the shocking step of disconnecting some of its heavier users and cancelling their subscriptions, all in the name of customer service. A note was sent out saying that the service was discontinued and giving a web address for those affected to claim their money back, but – as one subscriber pointed out – how the heck do you get there once your Internet connection has been cut off? This move is staggeringly insensitive and will most likely prove very damaging to Breathe’s business. Finally, users of one of the more successful free ISPs – \link{http://www.thefreeinternet.net,The Free Internet} – have been finding that their 0800 calls have been appearing as chargeable calls on their phone bills. Other horror stories, concerning free service providers such as Screaming.net and others, abound.
What is going wrong? There is the inevitable issue of service quality – in LineOne’s case, the company misjudged demand and failed to protect its existing subscribers from its new services. Having decided that its business model is fatally flawed, it looks like the company is extracting itself reasonably well from a situation it clearly finds untenable. As for Breathe, which has also suffered from service level failures, the company’s reaction against its customers absolutely cannot be condoned. It will be interesting in the extreme to see how the company puts this behind it.
The bottom line is the bottom line – there is nothing for free in this world. The business model of The Free Internet is, in fact, a subscription model and it is likely that this is the only model that can work: costs may be reduced to £50 per year, but they do not vanish completely. Companies giving the impression that they can deliver on the promise of 100% free services, will be found out sooner or later. Nice dream, nice dream.
(First published 31 July 2000)
August 2000
This section contains posts from August 2000.
CA shifts innovation from marketing to business strategy
2000-08-25
Many companies exist by running two lines of products. There are the cash-cow product lines, which keep the company going through thick and thin, and then we have the showroom products that keep the company looking innovative and which give it a chance to compete in the years to come. For a software industry example we need look no further than Microsoft, which continues to reap the harvest of Windows and Office while announcing new strategies such as .NET to show how it is keeping up with the game. Less obvious, but similarly split, is Computer Associates, which likes to portray itself as a future-looking company whilst all the while relying on its mainframe-based products to bring in the money. Unlike Microsoft (which can guarantee a good few years of revenue from its older stable of products), the clock is ticking for CA. There comes a time for every company to make the new range of products strategic, and leave the older lines to wither a little (and perhaps, to die later on).
CA's innovation USP is two-pronged. First up are Neugents – software modules that use artificial intelligence technology to draw conclusions, such as the likelihood of a server crashing, or to draw out patterns and trends in business data. The second strand is 3D visualisation, as demonstrated through the wrap-around interface of Unicenter TNG (not to mention the acquisition of graphics company Viewpoint last summer).
The shine was well and truly taken off the CA logo a couple of weeks ago, as Sanjay Kumar took over from Charles Wang as CEO of the company. This unprecedented step (Wang had been CEO since the company's formation) came as a result of a series of bad performance announcements and profit warnings. Kumar's stated aim is to intensify efforts in the more innovative, growth areas of CA’s business.
Last week's announcement of the release of the stand-alone package Neugents ii was the first of many that will no doubt see CA's key differentiators being brought to the fore. To be fair on CA, the company announced months ago that it would be “componentising” its product portfolio – a strategy which (in the form of Neugents ii) is now starting to get results.
Over the next few months we can expect to see CA release new products to give it a real foothold in what should turn out to be exciting new markets. It would not come as a surprise for Kumar to replace the old, dependable approach to its mature product lines with one that is more slash-and-burn than weed-and-feed. However, it remains to be seen whether he can turn the behemoth around and make it dance. Truth be told, for CA to be successful in the long term the real innovation will need to take place in the boardroom and not in the product catalogue. The CA well is drying up, giving the company little choice but to move the more innovative product lines to the centre of its business strategy rather than just its marketing.
(First published 25 August 2000)
September 2000
This section contains posts from September 2000.
How many Open Source developers does it take…
2000-09-01
There is certainly no shortage of companies offering up their products to the open source movement at the moment. Last week, IBM announced its plans to release the source code to its AFS enterprise file system to the “developer community.” A few weeks ago, as we discussed \link{http://www.it-director.com/00-07-03-3.html,here} Sun Microsystems admitted to difficulties in opening up Solaris. This relentless march of companies is delivering a growing stack of source code to the seemingly infinite resource pool that makes up the open source movement. The cathedral builders are handing over their plans to the bazaars, and the world will be a better place for it. That’s the theory anyway – in practice, as demonstrated by the recent announcement of a company-sponsored open source laboratory, it is the major corporations that rule the roost.
As far as we know there are no statistics concerning the breakdown of the open source movement into its constituent developer types. It seems reasonable to assume that there are three categories:
- commercial developers, salaried workers tasked with making modifications for business reasons
- academic staff and students, with lectureships, research grants and undergraduate projects to spare
- miscellaneous others who dabble or spend their waking hours working on code.
The utopian idea of any of these individuals developing for the greater good of mankind is unrealistic: each has a goal in mind that may be commercial or personal. It is absolutely not the case that every release by a vendor of code to the community is seized upon with delight and absorbed automatically into the great open source repository in the sky. Two facts are clear about the release of code – that it does not imply that a vendor is going to stop working on it (rather, that any work will be more transparent), and that there is no real loss of ownership. Just as Linus Torvalds still has power of veto over the Linux kernel, so Sun and IBM will still keep control over their own offerings.
In fact, when a large company “opens” its code, it amounts to little more than opening up its APIs. Sure, you can see the code and modify it if you like, but only if you are willing to take the time and effort to understand it in detail, not to mention construct the development and test environments necessary to enhance it. Anyone who has worked on file system development, particularly something as complex as AFS, knows exactly how unlikely it is that anybody will understand the code, let alone want to modify it. In other words, IBM is nodding in the direction of the open source movement as a whole rather than facilitating anything in particular. Some companies may choose to download the code, and may even decide to build on it, but it is unlikely they would do so without working in partnership with IBM. So – if it isn’t that generous an arrangement, what is it about? Making money, of course, though not directly.
There is nothing new here. Companies have been using the open source model for perceived competitive advantage for many years. Novell, for example, is releasing the source of parts of (note that) NDS in a bid to dominate the directory space against its main competitor, Microsoft. As we reported \link{http://www.it-director.com/99-09-01-3.html,here}, Novell's own bugbear is TRG, which offers a NetWare-compatible file system as a free download from its \link{http://www.timpanogas.com,web site}. This product is open source and comes with the thinly disguised aim of taking away Novell's market share. Similarly, it is no secret that IBM is putting its back into Linux, despite having perfectly workable Unix-a-likes of its own (and probably scuppering the 64-bit AIX replacement, Monterey, in the process). Its motives are to reduce the costs of infrastructure software and make life very difficult for companies who depend on such things for the bulk of their revenue.
Open source is not the sub-culture that so many would love it to be. This sub-culture exists, but it is by far a minority in an open source world that is controlled, as ever, by the corporations. The ultimate success of open source, to be adopted by the mainstream, will also be its doom.
(First published 1 September 2000)
Marillion.com – turning the record industry on its head
2000-09-01
It is no secret that the music business makes a fickle bedfellow. Marillion may be thought of today, if at all, as the prog-rock group that had a brief string of folksy hits in the eighties before losing its hydrophilic lead singer and fading into obscurity. This popular perception is not shared by the ever-loyal following that the band still enjoys, however it has been proving increasingly difficult for Marillion to make any mark on the music mainstream. That is, until last month, when the band proved it still had something to sing about by using the power of the Web (sic) to leverage a major recording deal.
This is how it works. A band wanting to make an album gets an advance from a recording company, to be set against estimated royalties. When the CD hits the shops, the first tranche of income is used to pay back the advance. Inevitably, less popular bands have less negotiating power with the recording companies, hence advances (and good percentages) can be hard to come by. In addition, such contracts give very little flexibility in terms of how the music is released, or even what should be included on the album.
Even bands with a good following still need the advance – to cover the production costs, not to mention that the musicians have to eat. Like many bands that are fed up with living hand-to-mouth, Marillion has been puzzling over this conundrum – how to make an album that it knows it can sell, without putting itself at the mercy of the recording industry? The answer is as simple as it is profound.
In June this year the band members put out an appeal on their \link{http://www.marillion.com,web site} and emailed their fans. The question was this: “If we asked you to pay in advance for the next album, would you do it?” The response was staggering – nearly five thousand respondents said they would, leading Marillion to take a leap of faith and go ahead with the idea. Estimates suggest that over £50,000 of advance orders have already been taken.
The benefits are clear. First of all, Marillion have been given the wherewithal to record an album, with the only commitment being to deliver a CD early next year. Marillion have no obligations to any record company, so there are no licensing issues and no limitations on what the band can do artistically. This leads to the second point. What Marillion have done, effectively, is show that the album will sell – indeed, it already has. This puts them in a very strong position to negotiate for its distribution. According to Lucy Jordache, Marillion’s marketing manager, most of the major labels were keen to accept Marillion back onto their books before EMI won the deal. This time, without the shackles of the advance, the band has been able to put together a very nice package indeed.
In using the Web, Marillion’s goal was not to cut out the corporate middlemen, but to give themselves some security and a stronger negotiating position. This reflects the stance taken by Stephen King, whose recent publication of “The Plant” at $1 per instalment threatens to be “Big Publishing's worst nightmare”. King was quick to agree that, though this might knock the publishers off their laurels, it would not completely destroy the publishing industry. At the same time it was seen as trailblazing for “midlist … and marginalized writers who see a future outside the mainstream,” according to the author. At the time, Stephen King’s move was accused of being more hype than substance. However, few could deny that Marillion have managed to get tangible results from their online strategy.
In the words of the Marillion song King, “they call you a genius cause you’re easier to sell.” With bands and authors selling themselves over the Internet, the recording industry looks set for interesting times ahead.
(First published 1 September 2000)
Sony - lies and on the take?
2000-09-08
Let’s just get this right. The Sony PlayStation 2 has sold 3 million units so far, the US has just pre-ordered 1.5 million units and meanwhile, in the UK, only 200,000 of the things are going to be available for Christmas. The nice people from Sony are trying to tell us that this is not a marketing ploy and, frankly, we believe ‘em.
The UK pre-order launch of the Sony PlayStation 2 happened at midnight on Thursday last week. High street retailers such as Dixons opened their doors to bleary-eyed queues, determined that little Jonny would get what he wanted this Christmas. Consumers hoping to wait until after the weekend may well be disappointed.
According to Sony’s PlayStation \link{http://www.playstation-europe.com/hardware/playstation2.jhtml,web site}, the European release date for the PS2 has been delayed to November 24 “due to completely unforeseeable and unprecedented consumer demand for the PS 2 in Japan.” This may or may not be true – production is production. However, its paltry rationing of the devices for the UK market smacks ever-so-slightly of “looking after one’s own.” Does that sound at all like sour grapes to you?
With what is currently the most powerful games platform (and, arguably, some of the best titles) out of all the console manufacturers, Sony are in a position to gamble with the consumer. By the time that the PS2’s nearest rivals, Microsoft’s X-Box and Nintendo’s GameCube, get anywhere near the market, Sony expect to have already shipped a good 10 million units across Europe, roughly half of what the company expects to ship worldwide this financial year. Competitors will arrive at the party only to find that the guests have already moved on (no doubt forming an orderly queue for the PlayStation 3). Meanwhile, before the event, Sony can afford to pump up the prices (to £100 more than buyers in the States will pay) and justify an additional 25 quid for the right to buy.
Grumpiness aside, there can be no doubt that Sony is onto an absolute winner with its games console. The manufacturer is already a victim of its own success – fortunately it has no real competition at the moment, otherwise it could have missed the boat. Ask any child which device he would rather have and brace yourself for the reply that will appear obvious to anybody under 20. “Why, a PlayStation of course!” This attitude does not limit itself to the younger generations – we know plenty of seemingly mature adults who are holding out for this latest device.
How exactly the company has done it is unclear, but there can be no denying that (in the material world we call Europe) the PS2 is a thing to have. If it achieves nothing else, the arrival of the PS2 will set the minimum standard for games consoles. Bargain hunters, look out for older devices going for a song in second hand stores and online auctions, from January 2001.
(First published 8 September 2000)
XML – in, out, in, out
2000-09-08
There were two seemingly conflicting reports about XML adoption last week – both appeared in UK IT rag Computer Weekly. The first trumpeted the fact that the police in Scotland have decided to adopt an XML-based architecture for their main applications. The second dealt with the decision by the National Health Service (NHS) to stick with an EDI-based strategy. So, is XML right or is it wrong? Is XML ready or is it not? Perhaps most importantly, does the UK have a coherent strategy or does it not?
It is pleasing to note that the Scottish Police are not jumping straight into XML with both feet. A firearms licensing package has been built as a proof-of-concept system. This package is now to be piloted by Fife police force, before being rolled out across Scotland. Should the pilot prove successful, additional applications are planned including “command and control, personnel, custody, intelligence and crime management,” according to Computer Weekly. Already, sponsors of the firearms package are expressing delight at the fact that it took only 6 months to write, rather than the “six months … it would have taken in the past.” How very encouraging.
Meanwhile, south of the border the NHS is painting a different picture about XML. Far from the “flexible, scalable” adjectives being used in Scotland, NHS executives talk about XML being “uncoordinated” and less reliable than EDI. “If we go down the XML route now we would have to wait two years to get proper protocols,” said Rick Jones, author of a report into the use of XML for medical applications.
There seems to be a difference of perspective here. The Scottish Police are saying “let’s give it a go, nothing ventured, need the experience” and are having some good results. The NHS meanwhile are saying “not my job mate, it ain’t ready, we’ll stick with what we’ve got.” These approaches are diametrically opposed and only one of them permits the organisation concerned to keep on top of the new technologies. By burying its head in the sand, the NHS is missing out on more flexible, interoperable architectures, not to mention infrastructure and development cost savings and the potential for new applications.
It is worth mentioning a point made in each of the reports – who is in charge. In the Scottish Police, the forces themselves, driven by the Scottish Association of Chief Police Officers, decide IT strategy. Meanwhile in the NHS, it is the government Central IT Unit, Citu, that decides policy. It is fair enough that no organisation should be pushed into making technology decisions too early or for the wrong reasons, particularly if it relates to the health service. At the same time, it is unacceptable that the NHS rejects Citu’s strategy for reasons of “not invented here.” At the very least, the NHS should be prepared to adopt the XML strategy in principle, and to pilot it in non-contentious or low-risk areas.
(First published 8 September 2000)
mCommerce – less big bang, more whimper
2000-09-08
It would be dangerous to say that analyst firms make things up, but of two reports just out, one must be wrong. The first, from Forrester Research, puts the value of mobile transactions at £3.2 billion by 2005. The second, from IDC, says it’ll be £25 billion by 2004. If we’re not mistaken, that’s nearly a factor of ten.
It has to be said that neither figure is particularly small. The smallest percentage of Forrester’s estimated figure would still be a worthy addition to the revenues of any company. All the same, it is not enough to send businesses into a panic in the same way as, say, eCommerce has done. From Forrester’s point of view, the figure represents a mere 3% of online retail revenues – let’s face it, most businesses are still trying to get their heads round how they can get a slice of the other £95 billion. Even given IDC’s more inflated figure, mCommerce is still the icing but eBusiness is very much the cake.
Much would appear to depend on the form factor of tomorrow’s mobile devices. Mobile phones are small, neat and almost impossible to do anything with, other than make phone calls. Given this, mCommerce transactions need to be as easy and quick to perform as tapping in a phone number and saying “I’d like to order a pizza, please.” There are some obvious wins for mCommerce, for example online betting at crowded sports events. However many currently touted applications for mobile devices are deep pan pie in the sky – Robbie Coltrane may want to check his bank balance from his hotel room, but I can’t see the need myself.
The hype around mCommerce is exactly that – hype. Fortunately it appears that the bubble is bursting well in advance of it doing any serious damage. There can be no denying that location-sensitive services will become a reality and our fast-paced city dwellers will no doubt decide that the world is a better place for them. However, this is the stuff of the future, and even then will not have a serious impact on the online transaction count. Here’s a simple exercise – list all the things you might need to buy when in one place, compared to all the things you might buy going from one place to another. One list is big and the other is small, right? QED.
Things will be different in ten years’ time, when UMTS broadband access is available to mobile devices. By that time it will be questionable whether most punters will know, or care, whether a given transaction is taking place from a land-based or a wireless connection. In the short term, the additional functionality being built into mobile phones will sit, and wait, until its users determine a real use for it.
(First published 8 September 2000)
October 2000
This section contains posts from October 2000.
Geoworks WAP licensing – fast buck or insurance policy?
2000-10-11
Geoworks, the company perhaps more famous for its small footprint operating system than its wireless credentials, has been coming under fire in recent weeks. Its crime is to enforce a patent, which the company holds on a part of the wireless protocol that is essential to WAP. So – is Geoworks on the make or is it protecting its future?
The problem started back in January, when Geoworks announced its intentions to introduce a licensing scheme for WAP vendors. On the surface the licensing scheme seems reasonable enough. Individual vendors wanting to make use of the Geoworks technology need pay a flat fee of $20,000 per year. If the company in question earns less than $1 million, the fee drops to $25, a veritable bargain it has to be said. It is with the service providers that things get interesting, as the fee equates to $1 per year, per service user. For a company such as Vodafone, which now handles over 10% of the world’s handset users, the sums would become phenomenal. It is unlikely that the larger players will pay the book cost for the privilege of using WAP, but nonetheless Geoworks looks set to make a pretty penny on the arrangement. Stockholders certainly think so, with shares doubling in value on the day of the announcement.
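To put the tiering in perspective, here is a minimal sketch of the fee structure described above, in Python. Only the fee levels ($20,000 flat, $25 below $1 million in revenue, and $1 per user per year) come from the scheme as reported; the function name and the sample revenue and subscriber figures are invented for illustration.

```python
def annual_wap_fee(kind, revenue_usd=0, subscribers=0):
    """Illustrative model of the Geoworks WAP licence fee tiers.

    Vendors pay a flat $20,000 a year, dropping to a token $25 if
    revenue is under $1 million; service providers pay $1 per user
    per year, which is where the big numbers come from.
    """
    if kind == "vendor":
        return 25 if revenue_usd < 1_000_000 else 20_000
    if kind == "service_provider":
        return subscribers * 1  # $1 per user, per year
    raise ValueError("unknown licensee type")

# A small vendor under the $1m threshold pays the token rate...
print(annual_wap_fee("vendor", revenue_usd=750_000))  # 25
# ...while a hypothetical operator with 50 million subscribers
# would owe $50 million a year at book rates.
print(annual_wap_fee("service_provider", subscribers=50_000_000))  # 50000000
```

The asymmetry is the whole story: the vendor fees are a gesture, while the per-subscriber rate scales without limit, which is why the large operators would be expected to negotiate rather than pay list price.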
Of course Geoworks is not the only company to bring up the issue of licensing. Indeed, the company is a pussy cat compared to some of the larger players in the WAP forum, namely NEC and Phone.com. As a smaller player, Geoworks commands more sympathy: with just over 100 people in the whole company, it cannot afford to be frivolous about its R&D spend. Neither does it have the luxury of giving away key technology for the greater, in this case wireless, good. On the make it may be, but Geoworks has few other options.
Despite the underdog stance, Geoworks has succeeded in putting the cat amongst the wireless pigeons. In an interview with IT-Director.com, Ken Norbury, General Manager of Geoworks in the UK, agreed that the stance was “upsetting to people not in the know.” This was a view echoed by the chairman of the WAP Forum, Greg Williams, who said on News.com: "I'm not saying at all that Geoworks [tried to take advantage of the process] ... I think what they've tried to do was set a price that is fair and reasonable, as anyone would."
When the annals of IT history are finally written, it will not be the actions of individual companies that count so much as the combined effects. The longer-term necessity for WAP is already being called into question. While Geoworks’ position is understandable, the licensing issue may prove to be the final straw for an already weakening standard.
(First published 11 October 2000)
Bluetooth PANs the Wireless LAN
2000-10-11
The arrival of Bluetooth, the short-range wireless communications standard, is moving ever nearer. Products are not due until the latter half of this year but the announcements are now coming thick and fast, with the IT majors now joining forces with the technology providers to make it happen. At the end of last week, for example, IBM announced it was partnering with TDK to develop Bluetooth solutions for its Thinkpad range of laptops. Despite the fact that TDK is not known as an IT company, it has carved itself a niche as a producer of networking devices that meet the standard formerly known as PCMCIA. Back in 1998 TDK was quick to pick up on the potential of Bluetooth, and has been supporting standards work and developing products ever since. IBM, too, has been an active proponent of the technology but has recently been pulling back from the production of networking devices. The result: IBM has a partner that can deliver, and TDK has a conduit for its products to die for.
The principle behind Bluetooth is simple. Devices broadcast a wireless “hello, I’m here” into the ether and listen out for any responses. Should one device detect another, the two will form a loose network, known as a Personal Area Network or PAN. PAN clusters can then form into a kind of super-PAN using hub units that act as a switch between different PANs. The active range of Bluetooth is ten metres: the intention is not to replace existing LAN technology but more to enable locally positioned devices to interact, for example a PC with a printer or a mobile phone with a PDA. If the standard had anything in its sights, it would be the infra-red standard IrDA, but Bluetooth is primarily complementary – it augments existing facilities rather than replacing them.
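As a rough illustration of the discovery behaviour just described, here is a toy model in Python: any devices that find each other within the ten-metre range cluster into a PAN. The device names and coordinates are invented, and real Bluetooth discovery (inquiry and paging over a frequency-hopping radio) is of course far more involved than a distance check.

```python
import math

RANGE_M = 10.0  # Bluetooth's quoted active range

def in_range(a, b):
    """True if two (name, (x, y)) devices can hear each other's hello."""
    return math.dist(a[1], b[1]) <= RANGE_M

def form_pans(devices):
    """Group devices into PANs: clusters of mutually reachable devices."""
    pans = []
    for dev in devices:
        # Find every existing cluster this device can reach...
        joined = [p for p in pans if any(in_range(dev, d) for d in p)]
        merged = [dev]
        for p in joined:      # ...and merge them, mimicking the hub
            merged.extend(p)  # units that bridge separate PANs
            pans.remove(p)
        pans.append(merged)
    return [{d[0] for d in p} for p in pans]

devices = [("pc", (0, 0)), ("printer", (4, 3)),
           ("phone", (50, 50)), ("pda", (53, 54))]
# Forms two PANs: pc with printer, and phone with pda.
print(form_pans(devices))
```

The point the sketch makes is that no central coordinator is needed: clusters emerge purely from which devices happen to be within earshot of each other.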
This is all well and good, but (and there is always a but) Bluetooth isn’t the only wireless technology on the block. There are a number of vendors who are indeed setting their sights on the LAN. These include Apple, which has already released its own wireless protocol. Most likely to succeed is the wireless Ethernet standard, behind which a consortium of vendors is already lining up. However, through the fault of nobody in particular, Bluetooth is proving to be the fly in the wireless ointment. Both occupy the same unlicensed 2.4GHz radio band, so the two wireless standards can, and do, interfere with each other, making it difficult to run both PANs and wireless LANs.
Steps are being taken to minimise the damage. Symbol Systems, for example, has included frequency-hopping in its 2Mbps wireless LAN technology so that if interference is detected, it can be countered. This has all the hallmarks of a workaround, and apparently it does not work for higher data speeds. The current advice is not to use the two in the same part of a building: clearly this is not a rule which builds confidence.
Where to go from here? Data communications experience suggests that, when two or more protocols get together, some bright spark designs an interface between them. It may be that if wireless LAN devices can also talk Bluetooth, both standards can exist in the same frequency range. Ultimately the question arises whether more than one wireless protocol is necessary at all: time will tell but, even though the standards do not overlap in principle, it may prove unwise to focus too heavily on one technology.
(First published 11 October 2000)
Nothing To Declare
1996
This section contains posts from 1996.
January 1996
This section contains posts from January 1996.
Breaking Out To Get In
1996-01-19
At the table, you’re alone although there’s people all around. You
Listen without hearing, distracted by other things within you.
A passing word gets your reaction though you didn’t mean to
Spout hate-filled words which resonate around the room you leave behind.
A voice inside the shell is screaming “why the hell you doing this?”
And the voice is you, you can’t believe you’d be like this.
On the outside there’s a man who doesn’t care the way he is
The man is you, you’re damned all ways and who cares anyway.
Breaking out to get in, thick and thin, you’d help it if you could.
Is this a sin? You cannot win. You’d help it if you could
But a voice inside the shell is screaming “why the hell you doing this?”
The voice is you, you can’t believe you’d ever be like this.
On the outside there’s a man who doesn’t care the way he is
The man is you, you’re damned all ways and who cares anyway.
At a party, you’re alone although there’s people all around. You
Listen without hearing, distracted by other things within you.
A passing word gets your reaction though you didn’t mean to
Spout hate-filled words which resonate around the place that’s left behind.
1999
This section contains posts from 1999.
November 1999
This section contains posts from November 1999.
Nobody ever listens!
1999-11-17
Here's a fact. It has been reported that the average executive spends up to 2 hours of the working day "doing", that is, getting on with the job, and 6 hours "communicating", i.e. exchanging information. The average consultant's day will differ from this profile, to be sure - for a start we spend much of the day dealing with all those ironically named "office automation" applications. The fact remains, however, that a significant proportion of our working minutes are spent in meetings, interviews, group discussions and, for some, client lunches.
So what? Well, while we spend many hours of our lives honing our outward communications skills, for example in presentations, interviewing and report writing, the inward art of listening is most often assumed. Listening is a given, so we concentrate on outward skills such as learning the right questions to ask, how to structure a report or how to get a message across.
Listening is a given. Hmmmm... maybe it is. We listen all the time, and therefore we are constantly keeping our abilities to listen "on the boil". However, as with all givens, the issue is one of complacency. In this context, we have two weapons against complacency: knowledge and discipline.
Knowledge is required, in terms of what makes a good listener and what are the blockers to good listening. A good listener is - OK, hands up, what makes a good listener? That's right, someone who:
- pays attention!
- sees the speaker's point of view
- shows interest in the subject matter
- looks for the underlying messages in the speaker's words.
Blockers to good listening are less easy to pinpoint.
- baggage - extraneous clutter in our own brains which distracts from the conversation in hand
- inner noise - where the conversation sets off a train of thoughts which, though fascinating, prevent us from continuing to listen
- control - making leaps of understanding about what the person is trying to say, thereby missing his actual point entirely
- ping-pong - where a client's point triggers a memory or an opinion, so we spend the next minutes looking for a suitable gap to express it
- display - where we use the conversation as a tool to express our own knowledge, thus ignoring the client's subject matter and making him feel stupid in the bargain
- hidden agenda - where we ensure that the conversation achieves our own goals, forgetting to check that the client's goals are satisfied. Dull down your own agenda - for example, don't bring sales talk into a requirements capture exercise, but note any opportunities for later
Discipline, in this context, involves the continued relearning and application of good listening practice. It is relatively simple to give a pretence of paying attention but this is not enough. Good listening requires sustained concentration ("hanging on every word"), with the listener constantly on the lookout for any underlying messages, ("not what I say but what I mean"). Paying attention is not easy even in optimal conditions. In difficult (that is, real-world) situations, with heated rooms, distracted participants and too-little time, it can be nigh impossible. In either case it requires effort.
Throughout the conversation, the good listener should be on the lookout for blockers to listening, both preventing them in advance (for example, by clearing one's mind of preconceived agendas) and recognising them when they appear. Not all parts of all conversations require intensive listening. There is always a need for balance - for example, small talk and telephone preambles are leisure activities which may not require full attention.
These words have concentrated on listening, not because it is more important than speaking or presenting but because it is as important. It is not only an issue of making the client feel appreciated. Before we can give the customer what he is asking for, we have to listen to the question.
2003
This section contains posts from 2003.
May 2003
This section contains posts from May 2003.
Review: A solid, practical mug with a thoughtful, understated design.
2003-05-08
Well, the Mug arrived in the post today. I've put it through its paces and I'm glad to say that, overall, it does the job a mug should do. Let's see how it got on.
PACKAGING
The mug arrived in a cardboard box, packed with expanded polystyrene balls. The box was solid enough, but I confess to being slightly dubious about how well the balls would withstand a greater-than-normal shock. Still, no damage done. The box opens by removing a cardboard tongue, then the lid opens quite easily.
FIRST IMPRESSIONS
Having removed the mug from its packaging, I gave it a quick rinse with water and had a good look. There is no manufacturer's mark on the base - I confess I should have dried it before checking, as I now have a wet leg. The mug is slightly creamy white and is printed with four Barries, each holding a coat hanger. They are sporting yellow, blue, green and red anoraks; I am still to work out which one is missing from the five in the CD notes. Print quality and image resolution are good. However the Barries feel slightly rough to the touch - it remains to be seen whether this affects the drinking experience. The green Barry has a tiny blemish just above his right foot.
The mug is covered in shiny glaze; this may be seen as downmarket by some, but it does have the advantage of being easy to clean.
HANDLING
Balance is centred well, with the finger positions slightly above centre. The handle is cleverly thought out - it has room for two fingers, and the underside is angled to accommodate a third. This adds to the feeling of stability, whilst limiting the potential for scalding - I can see my children bringing me cups of tea in this one.
The mug is designed to hold 300cc of fluid, though with its good balance you could probably go over this limit for short periods. On contact with the mouth the mug demonstrates its careful design once again. It has a lip, if you will, that is just large enough to catch one's own, thus making the mug-mouth process very simple and effective. The handle position results in a slight pressure on the lower digits when it is tipped, but this should not cause anyone too much grief. For those who prefer to hold the mug by its body, once again the balance is good and there is room for two average-sized hands.
DRINKING QUALITY
I tried the mug with Kenco instant coffee; this was mainly successful. The protruding lip did result in a slight fluid residue on the underside, however dripping was kept to a minimum. That's the mug, not me. The mug is neither chunky nor delicate, neither is the rim too wide, therefore it should be suitable for both coffee and tea. Dunking is an option for most biscuits, including Rich Tea and Hob Nobs - some Digestives may prove a little too wide. Oh - and the roughness of the Barries did not prove to be a distraction.
Heat retention was good, but the last few mouthfuls tended to lose the heat quite rapidly so be careful.
OVERALL
A solid, practical mug with a thoughtful, understated design. The few minor weaknesses we could find in this mug should not distract from the drinking experience it provides. Buy one!
h's bar
2003-05-30
Here's something I wrote a couple of days ago, in my pre-blog days. It all seems so distant now.
A funny thing happened on the way back from Woking yesterday...
It all started a few weeks ago when, once again, I was cruising the M4. A minivan passed me with all haste, but as it went I couldn't help noticing the logo "h's bar" on the side. I was instantly intrigued. Where was this bar? I had to find out. Foot to the floor, I was determined to catch up with this vehicle that was already half a mile away. Suffice it to say, good job the police weren't patrolling that particular stretch of the motorway.
Finally, I drew parallel with the van. On its side was a Hungerford address, that was all I needed to know. Relieved, I withdrew to a more sensible speed and let it on its way. The plan was already formulated, to one day seek out the bar and - think of the excitement - pick up a beer mat or something. Who needs Everest.
And then it was yesterday. A meeting had finished early, I had a spare hour, I had some thinking to do and my stomach was in rebellion. h's bar beckoned, so off I went down the slip road and into Hungerford. Can't be that big a place, I thought, not much more than a main street. The bar was keeping its counsel as I drove up and down, so eventually I parked up and asked a local. Indeed, I was that desperate. Not far at all, I was told, down that side road and past the fire station. Can I walk it? I asked. Oh yes, smiled Janice, still wearing her badge having finished a shift at the Co-op. So off I went, and there it was. h's bar. Home of a warm welcome, cold beers and a cheese sandwich if I was lucky. I pushed the door open and went in.
Inside, the decor was neo-pub, light and airy, but the atmosphere was most definitely not. Six pairs of eyes were staring at me; unusually, all of them were from women, who made up the entire clientele. There was an old lady at one table, two mums with their daughters at another, and a lady at the bar. Time stood still for just a moment, until I blurted, "hello!" Well, what else was I supposed to say? I turned to the bar, but the manager was otherwise occupied, talking on the phone. Next to him was a teenager sporting one of those fluorescent waistcoats usually worn by people who work on the railways. I glanced again at the lady at the bar; she gave me a pitiful, almost pleading look. This wasn't what I expected at all. What was I to do?
I could think of nothing other than to wait.
Eventually the bar manager came off the phone, and exchanged some sharp words with the lad. Suddenly all hell broke loose. The youngster grabbed some keys, and looked like he was making a run for it before the bar manager grabbed him and, after a scuffle, wrestled the keys off him. "SIT DOWN OVER THERE!" he screamed. "YOU CAN STAY THERE 'TIL THEY COME!" He pushed the boy into a chair, then returned to the bar. "Hello there," he said, "sorry about that. Can I get you anything?" Petrified that I might ask for the same thing the boy had requested, I proffered, "I was wondering if I might get some food?"
"What would you like?" he asked.
"A sandwich," I said nervously, "anything you like, maybe cheese."
"Is that it?"
"Perhaps a bit of tomato?"
"Some onion?"
"Er - no, thanks." I didn't want to push my luck.
"No trouble," he said, and the mood of the whole bar lightened considerably. He took the order to the kitchen, then returned to his post. "Anything to drink?"
"An orange juice," I said, feeling cocky now. "With soda." I took my drink and went to the table next to the old lady, as things returned to some semblance of order. "What happened?" I whispered over to her, and she explained that the lad had come in to use the loos and was still in there half an hour later, getting up to all sorts of no good by the tone in her voice. She confided that the police had been called, and sure enough as my sandwich arrived, a car drew up and two burly officers strode in. They went away into a back room with the barman and the lad, before leaving the premises in a southerly direction whilst escorting the wrongdoer to the vehicle. Well, that's how they talk, isn't it. All was well, so I thought, until the lady leaned over to me again. "I knew he was up to no good," she confided, her eyes widening just enough to set my adrenal glands pumping again. "I can tell, you know. I always could..."
A quick glance down told me I still had two quarters of sandwich to go. I was sure I could get away with leaving the lettuce garnish, as long as I mucked it around a bit. "Humour her," said the voice inside my head, "and munch with purpose. That way, you might get out of here alive." So I asked her what on earth she was on about. Politely, of course.
It transpires (and this was the really weird bit), the lady - Jean - was matron at Holloway prison for over 20 years. I've seen the press clippings. I've also seen photos of all her grandparents, ancient pictures, compositions in sepia stuck on cardboard. I've had the stories - "nobody ever laid a finger on me, you know... I'd be there to mop their tears when they were in the docks... I don't know why she confessed to me..." and so on, you get the picture. Other stories as well - one of her forebears was a bastard offspring of previous royalty, all hushed up of course, and a grandparent built the first London bridge. Or was it the second. No matter.
In the end, I didn't want to leave, but my time was up. As I paid my bill, I conversed with the barman, who by now was considerably less agitated than when I arrived. I explained to him what had drawn me to h's bar (no, he wasn't called h), and mentioned the irony of the song written all about Holloway. We agreed that I would send the words to the bar, and he would pass them to Jean for me. I said my goodbyes and left, through the same door, back past the fire station and to my car. Before long I was heading towards the familiarity of the motorway, towards home and the life that had, for a couple of hours, been put on hold. All I took with me was a couple of business cards, a receipt and a single thought that has been rattling around in my head ever since - what in heaven's name was all that about?
Just thought I'd share that with you.
Earth to...
2003-05-30
(click)
psst... is this thing on?
And I wonder, even as I type my first blog, whether I will sustain it, or whether it will last a few months and be forgotten. For some obscure reason it reminds me of Air Miles.
Off to walk the dog. And yes, the name is in homage to Bill Watterson.
(click)
June 2003
This section contains posts from June 2003.
BT and the art of customer service #1
2003-06-07
Apologies for the length and confusing nature of this post. Ironically, it is exactly that confusion which illustrates the point I'm making here. I hope.
Well, I certainly didn't expect to wake up to this on this bright Saturday morning. While I wait for a BT service person to pick up the phone (for a second time), I'll let you know where things have reached. Came down this morning to a phone bill for the so-enormous-it-verged-on-the-amusing sum of £1,245.47. When I checked, I found that most of the cost was down to the ISDN line, on which a total of 998 minutes of calls had been made. How could this be, when I was using Surftime Anytime, the fixed-price Internet access package? Only one way to find out, so I called BT on their billing enquiries number.
My first contact was with a lady called Kim. We worked through the various items of security and discussed the problem, for which she apologised, then she tried to put me through to the Home Highway division as, she said, the problem was theirs. Unfortunately, after a few rings, an automated voice told me to replace my handset. So much for that.
I called back and this time, after a much longer wait, I spoke to Nadia. We did the security thing again, at which point she put me through to Michelle of the Home Highway division in Sunderland. We had a good chat, during which she checked what number I had been using to accumulate all these costs. It was an 0845 number, she said, so I checked the configuration of my Buffalo ISDN box. Sure enough, it was indeed an 0845 number which, she told me, was a lo-call number, not a free-call number. As she spoke I remembered that the Buffalo box had been reset to its factory defaults a couple of months ago. I had hunted around for the number, finally finding this one; it had worked, so I hadn't thought any more about it. Yes indeed, this could be construed as a mistake on my part. I waited for Michelle to finish, then I explained what I could see. Michelle was very apologetic; she said that she was surprised Tiscali didn't have a mechanism to catch that sort of thing. However, she said that the issue lay with Tiscali, as BT didn't do anything other than see Surftime Anytime as a free service. So I thanked her very much and prepared to call Tiscali.
No, it's not amusing any more, it's turning into a nightmare. Here's how I see it. Clearly, at some point, I configured my ISDN device to use the wrong number. Did this therefore mean that this whole situation was my fault? Back to the plot.
I'm now in a queue waiting for Tiscali's customer service. They also apologise that my call is in a queue. Thank goodness for speakerphones.
Now I've spoken to Richard at Tiscali. He said that BT was wrong about it being an 0844 number, as Tiscali used an 0808 number. He said also that Tiscali could not be held liable for what is, in Tiscali's eyes, a customer mistake. He made the point that the correct behaviour would have been to call up Tiscali and find out what the right number was. He also said that Tiscali had no way of checking whether the right number was being used, and that the only people who could do that were BT. I commented that I had tried to call up Tiscali on several occasions in the past, and all the numbers I tried had failed. Without apology (don't get me wrong, he was pleasant enough), he said that this was due to Tiscali acquiring a number of companies and the call centres being consolidated as a result. Also, I think he said that Lineone used to use an 0844 number for dial-up rather than 0808. There's that number thing again. During the course of the call I found the original 0808 dial-up number written on the letter that Tiscali had sent me originally, the one which started "You may already have heard the exciting news that LineOne is now part of Tiscali..." This is also the letter with the incorrect help and support numbers on it, so, I would argue, how could I have differentiated any of the other numbers I had scribbled on this letter? This was also a memory jogger for me, as I remembered that I had been given the 0808 number over the phone. Is that really considered a sufficient or appropriate mechanism for issuing what is clearly such an important number?
I thanked him and called back BT.
I've just spoken to Barry at BT's Middlesborough office. I hope I've spelt that right. He mentioned once again the 0844 number for freecall (which sounds horribly similar to 0845 to me) and I said to him that that wasn't the number that Tiscali was quoting. I said, I hope not too glibly, that if BT could get a number wrong, or at least have incorrect information about the right type of number to use, what hope was there for the customer? And yet, I made the point, because the customer made the configuration rather than BT, it would appear to be the customer that was liable rather than BT. He told me that the amount of the phone bill was very unusual for a residential customer, and that BT have mechanisms that can monitor line usage, but that they are normally only used for new customers and are turned off after 12 months. He did agree that this issue was something that would not have happened if BT had had their monitoring mechanisms in place. I have asked to escalate the issue to a manager, who will be calling me back today or Monday.
There you go, that's all for now. Sorry about washing all this dirty linen in public. I feel a bit sick; I think it's the adrenaline.
Back so soon. I have just spoken to Geoff, the BT manager at Middlesborough. He told me that this was the largest residential phone bill he had seen for a number of years. This rang huge alarm bells, as surely they should have noticed it if it was really this big? We had quite a long chat, during which he said that if Tiscali had referred the situation back to BT, they would have been wrong to do so. The fact of the matter is that this is about liability - Geoff said once "not BT's responsibility" and once "BT is not at fault". I said that between Tiscali and BT, the two companies had failed to provide a sufficiently solid safety net to protect the customer and provide a suitable level of service, despite having the mechanisms available to do so. The mechanisms were not used for policy reasons, I was told, at which I commented that the policy was therefore wrong, as clearly it did not provide the required safety net.
Geoff is now going to talk to his manager, and they are going to send me a letter in the middle of next week. This will enable me to take the case to Oftel, which I plan to do.
Let's look at some of the more salient facts.
- The fact that this could happen dependent on a single digit being typed in error (0844 to 0845)
- The fact that BT have mechanisms to watch for this sort of thing, but they were not being used
- The fact that Tiscali did not send a letter, but gave me the number over the phone
- The fact that Tiscali's customer support numbers are incorrect on the written correspondence I have been sent.
That's all (again) for now. I am at a loss. I'd be lying if I said this wasn't about the money, because if it had been only 50 quid, I might have quietly changed my settings and got on with my life. That, maybe, is what a lot of people do, so maybe it's a good job that that isn't the situation. We shall see. Ironically, the fact that the configuration change was made right at the start of a billing cycle might also be seen as somewhat unfortunate. The fact that this is one of the largest quarterly residential bills BT have ever seen makes me want to take it further. Inconsistencies and weaknesses exist all the way down the process, and yes, I entered the incorrect number into my Buffalo box, but the issue seems to fall between the stools of BT and Tiscali, both of which are quite happy to send me to the other. Having worked in network management for over 5 years and having spent 16 years in the IT industry, I'd argue that if I've fallen foul of the complexity, it could happen to anyone. I'm decidedly concerned about BT's failure to apply perfectly reasonable and available mechanisms. The 0845/0844 issue concerns me too. Chances are, if I'd configured all this for the first time last month and this had happened, it would all have been cleared up in a jiffy; but because it was a reconfiguration of a piece of equipment on a 20-month-old line, it has not.
The whole liability issue alarms me as well; I think it's a bit of a sticky wicket considering the legal state of telecoms regulation in the UK at the moment (think Oftel being replaced by Ofcom, and why). As I said to Brian, Michelle, Richard and Barry, if this was about misuse of a credit card and my spending patterns had changed suddenly, they would likely have been in touch very quickly. The phrase "appropriate safety nets were not in place to protect the customer" seems to fit quite nicely, at least I think so; I wonder what the rest of the world thinks. I also wonder if the situation is merely a symptom of the fact that BT and Tiscali aren't able to work together to deliver a single service, or to respond to an issue when it arises.
I will be interested to see what Oftel say about this. I'll wait for BT's letter, then take it from there.
BT and the art of customer service #2
2003-06-10
Very efficient - BT's letter (known on the inside as the "f*** off letter", so I understand) came today. Interesting use of the term "after a full and thorough investigation", considering that the next paragraph tells me that I should have been dialling an 0844 04 number, when in fact I should have been dialling an 0808 number. "I can confirm and reiterate all the information you have received previously from my colleagues as being correct" - another interesting one, when they told me that BT's protection mechanisms did exist and they were surprised they were not used.
There's a number I should "not hesitate to contact", but every time I've called it, I've got an answerphone. Left a couple of messages though. There's also a sentence without a full stop, which irks me :-)
Enough of that. Today, I'm off to London for an Action Aid do (see link on right). As I've mentioned, I'll be using my tenure, hopefully, to break out of my cushy, throw-money-over-the-wall approach to charity and try to get a handle on what really happens at the coal face. Watch this space.
Pamela's Book Launch
2003-06-12
Wellll... the book launch was, shall I say, interesting. "There ain't nothing ever works out the way you want it to," says the Devil in Crossroads, and never was a film quote more applicable to me than this evening. I met a few people, drank a few glasses of water, even enjoyed myself somewhat, but it felt a bit like being at somebody else's wedding. Pamela Des Barres was very pleasant, though I think she was wearying of signing all those books after a while (it was interesting watching her sign copies without even looking at the page). There were lots of people talking to lots of other people. And that was it! Still, it was a learning experience, and inspiring in an odd way. The venue was the Horse Hospital, an upstairs room which looked to all intents and purposes like an old stable. Judging by the ramp one had to walk up to get to the room, I suppose that's exactly what it was.
Speaking of which (spot the link), there are three novels that I want to write. Or maybe two and a screenplay. One's about sex, one's about time, and one's about an umbrella. And none are quite what you might think. In the loo at the party there was a sticker on the cistern facing me, with a person's face and the caption, "you're a fat old git." That uncannily accurate statement will be in one of the books (or perhaps the screenplay), I'll leave you to guess which one! And yes, I do know which one, myself. Whether any of them happen is a moot point.
Finally, I'm writing this on the train having tried to download my email over the GPRS connection. Is it just me, or is this GPRS thing a bit of a con? There ain't nothing ever works out the way you want it to?
July 2003
This section contains posts from July 2003.
Microsoft and tears
2003-07-10
Am I never going to be free of Uncle Seattle? However hard I try, I can't seem to shake him off. A bit like his software, in fact! I confess, I do think Microsoft have some pretty good software. They won the suite wars because their stuff was best, after all - I just wish they didn't want to take over the world, that's all. Anyway, this was supposed to be my last week, but they've asked me if I have any more time. I have said "very little", but it doesn't seem to have shaken them off. We'll see what happens.
Meanwhile, I've started to develop a new Web site for Faros. It'll be up and running in the next few weeks, I think. This Art Gallery thing has really spurred me on. One thing I'm sure of is not so much "build it and they will come", more "don't get out there until there's something to show." It amounts to the same thing. However, I can't decide if it's just an excuse to avoid fixing meetings with potential clients...
Got some news back from the people who the book will be about, let's call them the Musketeers. It was neither good nor bad news, but at least it was communication which is always a good thing. We agreed I would write something and get it to them by August, and they could see what they thought. Scary.
And finally, got a CD in the post this morning, it was the latest Marillion best-of album from EMI. Darren sent it, as a little thank you for getting him in the market for doing cover art (he did the Separated Out book cover, fyi, and very fine it was too). It's a funny old world, glad to be of service and all that.
Somewhere over the Atlantic
2003-07-13
Today I shall be mostly sitting on an aeroplane.
It was a bit of a hair-raising start to the day. Knowing I had to get up at 5.40, I inevitably woke at five, then quarter past, then... half past six! Disaster: in the intervening minutes we had had a power cut and the mains alarm clock had reset itself to midnight. Fear and loathing, and the trip to Las Vegas hadn't even started! Rapidly I stirred myself, washed my appendages and got on my way.
Ironically, from that point on everything has been a breeze. The clear motorways and the just-subsonic speeds I achieved meant that I got to Gatwick in an obscenely fast time, wondering all the while whether I would meet anyone official on the way. I didn't, and as I had a hire car, the arrival was slowed only by having to top the car up with petrol before I returned it. Phew.
So, now I'm sitting on a plane on the way to sunny Vegas. Extremely sunny apparently, that kind of shirt-sticks-to-you-as-soon-as-you-put-your-head-outside sunny. The occasion is CA-World, an event that Computer Associates are very kindly flying me out to. I've just done a bit of work for CA, a video (my first - I was told in no uncertain terms that I wasn't a natural), and as I spoke for free at CA-World last year, I think they took pity on me! It should be stimulating. Really!
After that, I shall return home via Boston, to meet up with some friends, go whale watching and take in a couple of gigs. And why not.
So, back on the plane. So far, I've finished reading a book that I can't tell you about, and got to 5,000 words with my first novel (can I say, just writing the word "novel" feels really pretentious? It'll be interesting to see how that pans out!). It's now called The Knowledge, a great title until I find out it's been used already. The good news is, I've worked out the plot from start to finish, so hopefully now it's just a question of filling in the blanks. And making it readable. Hmm. Not out of the woods yet then.
I've also written some more of the next music book, but my heart wasn't in it and the words weren't flowing, so I've shelved it for today. Oh and I've watched a film - Phone Booth. Cleverly done, with the usual "get out of that" formula transposed onto a corner of a street. I enjoyed it, and was relieved to see a modicum of Hollywood creativity. I would say "for a change," but I don't see that many films these days!
Three hours, 47 minutes to go, as we fly over Canada. Ho, hum.
Conference Urges
2003-07-15
I don't know what it is, but at IT exhibitions I get a one-track mind, and it ain't the technical track. I've learned to control my urges by getting them out of the way first, then I can get on with the useful stuff. I am, of course, talking about the give-aways, or 'gizzitts' (as in, 'go on, gizzitt, go on').
Last night, on the opening night of the conference, I excelled myself, I really did, in my blatant disregard for what people were saying at their stands and in my direct approach to what really mattered. 'What's in the box,' I said, 'can I have one?' I also tried 'I don't work on Sundays,' which seemed to hit the spot.
So, here goes. In a style reminiscent of the children's travel game, I went to the expo and I blagged:
Five mini-penknives
One squidgy penguin
Three climbing clips (marked 'not for climbing', two with a compass on the strap)
Various sweets and candies
One pink fluorescent Frisbee ring
One water pistol, loaded (I overheard someone say, 'we need to find someone we know to fire this at.' I wasn't so discriminating!)
One cap, blue
Two monoculars (does that make a binocular?)
One fridge magnet clip
One inflatable neck cushion
A paperclip tray
A pocket radio with built-in torch (flashlight) and compass
One squidgy chair (fits the penguin)
Three notebooks
Five gold rings
One cable clip
A plastic egg containing silly putty
Two digital dice - bang one down and it displays a number with its six LEDs
One calculator/ruler
A number of pens
A handy clip with extending string for conference badges or ski passes
One bar of chocolate that looks like a gold AmEx card
One metal torch
One metric conversion card
Two auto-rewind modem cables
One drinking bottle
A map of Las Vegas
One multipurpose survival card (with built-in tools and, you guessed it, a compass)
Ten chocolate $100 gaming chips
Two t-shirts
I think that's about it (I was joking about the gold rings). The winner of the prize for the most worthwhile gizzitt has to be the survival card. Now I must remember to pack it in my suitcase for the way home, otherwise it won’t make it past the checks!
On to more serious stuff: I'm absolutely knackered. I was up for twenty-three and a half hours yesterday and only slept for six. Mustn't grumble - where would I rather be? ;-) It's now 6.15 in the morning, I've finished a slushy chick-lit novel (very good: 'Thirty Nothing' by Lisa Jewell) and I'm off to find breakfast.
Boston whales
2003-07-18
Arrived in Boston the day before yesterday and went straight to the Ramada, which is out near the University. In fact, it's privately owned by very nice people, and it had a regular shuttle to the subway (the "T") and so on, but anyway. Woke up yesterday and finished some stuff for Uncle Seattle, which took me until lunchtime, when I headed in to Boston with the vague goal of finding a wireless Internet hotspot. I then had a bit more work to do, which took place in the shade on the end of a quay at Boston Harbour. Reminds me of the time I told someone I'd sat on the beach at Nice and worked. "Of all the things to do on the beach," the friend said, "you had to work?" I replied, "Of all the places to work..."
I took in Chinatown, the harbour, the shops and the sights - by the way, only go to the Cheers bar if you just want to go to a bar. Nothing to see but a bar. On the recommendation of a wireless access service provider (I happened to be walking past the office), I eventually ended up in the News bar, a very plush establishment which boasted wireless Internet access. It happened to be happy hour as well; I didn't take advantage of the free martinis but I did have four starters for 99 cents each. Unfortunately the Internet access wasn't working. I tried everything, and was quite willing to believe that it was my fault, but as I could ping stuff that wasn't my computer, I thought not. When I finished my food I headed back round to the office of the chap who installed the thing. The long and the short of it was that I ended up down in the kitchens of the News bar, trying to diagnose issues with Power over Ethernet, Linksys boxes and broadband connections. What fun, but we didn't solve the problem.
This morning I got up a bit earlier and headed for the harbour to do the whale watching thing. I thought I'd missed the boat, but no, here I am on deck typing this up. It's an overcast day, so it's unlikely I'll get the kinds of pictures you see on posters; indeed we'll probably be lucky to see anything at all, but at least it's 4 hours of not very much, just what I need. Just had a great idea for an article - "the roadmap conspiracy," which makes out that computer companies have everything all mapped out and are drip-feeding new technologies to the users. As if, and the Internet bubble didn't burst either!
2 hours later.
Just seen a whale! Well, several whales - one of which was the best the guy with the mike had seen this year, so he said - it was a humpback sleeping in the water; when we moved alongside, it arced its back and flipped its tail out of the water as it dived. Very impressive. Now we're heading back, then I head to the station to catch a train to Providence. Or maybe I should hire a car. I've got an hour to make my mind up!
Opeth in Providence
2003-07-18
I arrived in Providence on the train, unaware that it was this city that saw the tragic and untimely demise of Great White following that foolish application of pyrotechnics. Arriving early, I met with Jason who told me everything was sorted for the show. Opeth were playing first, he said, so expect a late night. S'okay, I said, my train wasn't until 12.45. He looked dubiously at me - gulp.
I wait for the doors to open, flitting from street to Starbucks and back. Eventually I meet up again with Jason and Ian, and we go for a Chinese - a welcome respite in the hanging around. Maybe I'm getting old, or maybe it's the act of being alone in a strange city. Whatever.
Lupo's is a dive, looking like the sort of place you might see on a film set, like Blade Runner without the rain. This was nothing if not authentic, right down to the lack of doors in the gents. The stage is in the centre of an oblong, with space at either side. Beside it are two pool tables, the players oblivious to the fact that a band is playing at all. The crowd outside is a regular mix of post-punk, goth, beach bums and rock and rollers, each no doubt planning on finding their own personalities reflected in the music.
Doors open. The audience files in, unconscious of the fact that their meek behaviour is totally at odds with their rebellious attire. Positions are taken and the stage is set.
Opeth walk on and the applause is rapturous, even more so as they break into their first song. It's a slow number, a strange one to start with, but it is unlikely that it will stay slow for long. Indeed, the pace picks up. More rapturous applause. It's going to be a good night for the Opeth fans. Unfortunately it doesn't do anything for me. I should say I'm hopeless at listening to bands whose music I don't know, but it sounds a bit pedestrian and indulgent, like a cake with all the right ingredients which has been left in the oven too long. Maybe I'm getting too old. It doesn't help that I can't hear the guitars properly - they are there, but added, it would appear, as an afterthought. Given that the singer is also on guitar, I find that unlikely. A lack of energy is more likely: the sound is trying to be deeply moving but it has more of a soporific effect.
The next song starts out acoustic, but gradually the other instruments join in. This is more like it - a rhythmic number that harks to the orient; Aziz Ibrahim would be comfortable playing on this one. It might be called "The Darkness". It builds up to a crescendo, which I listen to from behind the one door in the john. It was a good song, and I am maybe now better able to appreciate the music. Was it me, or does the floor sway in Lupo's? I wish I could blame it on the alcohol, but I am stone cold sober.
Opeth's set continues as it should. A good band, consummate musicians giving the audience what they want. I don't think anyone would accuse them of being showmen though; with each song pausing for a changeover of instruments and a brief introduction, it is difficult to build a flow.
The Threadbare Carpet
2003-07-19
Ubiquitous network, global reach, any time, anywhere connectivity - we all know the lingo and have probably used it from time to time. The reality is somewhat different. Sadly different. Devastatingly different, if you have wandered around a major city looking for a cybercafe or a wireless hotspot, as I did in Boston last week, or if you have baulked at the outrageous pricing that some places charge for access.
The fact is, when the marketeers and CTOs mapped out the new landscape, they forgot to take into consideration the fact that there are humans involved. Humans are slow on the uptake and resistant to change, meaning that it takes a long time for any great change to take place. "Evolution, not revolution" is not the mantra of progress, but a recognition that people don't jump when you tell them to. And Geoffrey Moore's chasm (or Gartner's trough) in their respective adoption curves is in fact a generous way of saying that most people just don't want what they're being offered. With wireless, we have the chicken-and-egg problem that people aren't the purchasers of hotspots but their users, and they don't want to pay for them; meanwhile the operators, stung by 3G, are not going to make the mistake of spending too much before the demand is assured. Frankly, they can't afford to.
The end result is that, rather than having a finely knitted mesh of connectivity, we have a threadbare carpet, a rag rug of many different protocols which shows its ill-fitting seams far too clearly.
Nobody is thinking about the customer perspective. Let's stop for a second and think about what the customer's requirements are.
1. The customer should not have to give a monkey’s grunt what the protocols are. The way the protocols work together should be entirely transparent to the user. As an interface between applications and networks, the TCP/IP protocol set is a no-brainer. Everything below that should be someone else’s problem, and should be auto-selected based on a user’s service needs.
All this fighting over protocols has resulted in turf wars, which does the service consumer no good whatsoever.
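The point in (1) can be sketched in code (a present-day illustration of my own, not anything from the standards themselves): an application written to the TCP/IP socket interface never names the link layer at all, and the operating system quietly picks whichever network sits underneath, exactly the transparency being argued for here.

```python
import socket

def build_request(host: str) -> bytes:
    """A minimal HTTP HEAD request for the given host."""
    return b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"

def fetch_headers(host: str, port: int = 80) -> bytes:
    # The application names only a host and a port. Whether the packets
    # travel over Ethernet, Wi-Fi or an ISDN dial-up is decided by the
    # operating system's routing layer - invisible at this level.
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(build_request(host))
        return s.recv(4096)
```

Nothing in that sketch would change if the machine roamed from a wired office to a wireless hotspot mid-session; that is what "everything below TCP/IP is someone else's problem" looks like in practice.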
2. A number of services are so obvious they barely need listing, but I will anyway:
- application access. In other words, a Web-based front end onto anything, be it a travel bookings service or music station
- media access. Video, audio, with the necessary clarity to make it usable.
- interpersonal access. Email, chat and discussion boards, coupled with voice communications, in a nutshell.
These three access types, and combinations of each, give people everything they might need.
3. Customers want to pay the minimum, if anything at all, for their access. This is fair enough, given the low prices of some access types and the fact that access is a means, and not an end in itself. People should pay for applications, and we don't need new and improved micropayment mechanisms for this. Telcos already have billing structures that can pay a percentage to each other; there are also time-based subscription mechanisms and credit cards. Don't give me micropayments, I don't need them, not for this anyway. Where billing happens, it should be transparent, for example as an on-screen counter; if somebody (say, Starbucks) wants to pay for me instead, I'd be happy to see the unobtrusive advert on my screen that told me so. Don't get me wrong - this isn't a "water is a basic right" argument. It's more that I wouldn't expect to pay for the pavement, or for the shopping centre, or for the cinema walls before I can sit in the seat.
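That on-screen counter needs almost nothing to implement - here is a toy sketch of my own (the per-minute rate is purely illustrative; real tariffs vary with the number prefix dialled, which is exactly what the BT saga above turned on):

```python
import time

def metered_cost(seconds: float, pence_per_min: float) -> float:
    """Cost in pence for a connection of the given duration."""
    return round(seconds / 60.0 * pence_per_min, 2)

class SessionMeter:
    """A minimal running-total counter: start it when the link comes up
    and poll cost_so_far() to keep the user's on-screen display honest."""

    def __init__(self, pence_per_min: float, clock=time.monotonic):
        self.pence_per_min = pence_per_min
        self.clock = clock              # injectable, for testing
        self.started_at = self.clock()

    def cost_so_far(self) -> float:
        return metered_cost(self.clock() - self.started_at, self.pence_per_min)
```

At a hypothetical rate of 4p a minute, `metered_cost(998 * 60, 4.0)` comes to 3992.0 pence, i.e. £39.92 for the 998 minutes on that bill - and a counter ticking visibly on screen would have flagged the wrong-number mistake long before it reached four figures.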
There is nothing at all wrong with the concept of a basic service being given away for free, but a better or more complex service being available at extra cost. The basic services I would suggest are voice and data messaging, and Web access. More expensive would be higher-bandwidth apps such as image/video and gaming; these and other applications could be paid for. It is simple to work out how - application providers need to pay to have their applications hosted on the Internet. That's it, that's all, just as the consumer shouldn't pay a thing to walk into a shop. It is then up to the application provider to decide how to cover those costs, for example by charging for the service or by considering it a loss leader or a worthy cause. Charities, for example, pay for mail shots; they can also pay for people to see their web sites, even if somebody else chooses to bear this cost.
I believe that a basic service should be given away. The reasons for this may be entirely self-serving, but they are also based on developing a ubiquitous service on a global basis. If not given away, it should exist on a subscription basis. One way or another, it should be absolutely clear what is being paid for, when and why. Without some kind of basic, low-cost service, it becomes difficult to extend the model as broadly as required. For example, a service should function not only town to town but also country to country. Given that the internet knows no national boundaries (ask a packet), it is laughable to suggest that packet-based access should cost differently depending on which country one is in. This is undoubtedly true for western countries. There should not even be a variance from hotel to hotel.
Tesco's should install free wireless hotspots in its shops. This would be achievable at relatively low cost, would be a great PR coup, and would set Tesco's up as a thought leader in the marketplace. It would be a bit like Tesco's insurance, books or pharmacy shaking up the market.
4. Customers expect a guaranteed level of service. The service should be secure and available, within limits. It should not be necessary to trawl around a major city looking for somewhere to plug in - this is a laughable situation that should be resolved as quickly as possible. Consolidation is an inevitable element of the solution, which should be available on a utility basis - like water, in fact. I think the biggest weakness at the moment is service transparency. For example, bandwidth availability should be as visible on any device as the signal strength meter on a mobile phone.
The right connectivity, the right apps and services, using the right costing model. That’s it! The fabric/carpet analogy is a good one. In a fabric, every thread is as strong as the others, otherwise the strength of the whole fabric is at risk. Who’d pay for a threadbare old piece of material, unless there was some antique value in it? Which, hopefully, is what will happen to the network.
The Gazpacho and Green Glass Incident
2003-07-31
The scene: a reasonably well-to-do restaurant in downtown Boston. Outside, a number of ironwork tables, largely unoccupied. In one corner, separated from the street corner by an iron railing, sit our thirty-somethings - Karin, Carole, Shane, Jane and Jon, who are looking largely content despite the hour it took to find a seafood restaurant and the final indignity of being unable to discover the way in. A robust, humorous waiter has already taken the orders, which include two bowls of Gazpacho soup - because it was there, largely, and if you have to ask why you'd better not know. Cut to:
Jon's voice - narrating
I'd never had Gazpacho soup before, or at least I don't think I had. It seemed like the right thing to do.
Waiter arrives with two bowls, sets them before Jon and Jane. Zoom past Jane to Jon, conversing with others as he picks up his spoon and stirs it around the bowl, inquisitively, before scooping his first mouthful.
When it arrived it looked absolutely delicious. I plunged my spoon into the bowl, heaped with liquidized tomatoes, peppers and onions, and lobbed an oversized spoonful into my mouth.
Jon nods his approval and starts to chew. Close in on a variety of facial expressions, which blot out background to fill screen. Follow actions as they are described:
It was delicious - so I thought. But as the food turned over in my mouth, I was faced with the experience of something going very wrong. I bit on something hard, crunching it between my teeth before I could stop myself. Oh no, I winced. I've broken a filling.
Jon reaches into his mouth and removes something as camera backs away and shows expressions of others, still conversing, oblivious. Lip sync narration with Jon's profile, while keeping focused on faces as expressions turn from interest to horror.
Look, I said, depositing two small, tomato-ey fragments of green glass onto my palm and holding it out for all to see.
Carole
Oh my God!
Karin
What *is* that?
Jon's voice, narrating in sync:
It's glass, I said, to the horror of everyone sitting at the table. I decided it would be a good time to go find the manager.
Looking a bit uneasy now, Jon pushes away his chair and stands up, almost thoughtfully walking back to the main restaurant. As he arrives at the door, the waiter emerges. No sound is needed to explain the interaction between Jon and the waiter - the crumpling into incredulity and fear on the waiter's face as he turns and goes back inside, leaving Jon to return to the table. Fade to black.
When the manager arrived, she was highly apologetic, but just the tiniest bit skeptical - after all, it doesn't take a rocket scientist to come up with the idea of lobbing some glass into the soup in the hope of a free meal. Slowly it dawned that this was no set-up, and the skepticism turned to fear - this was the land of the free litigation after all - and then relief as I said that nobody would be suing anybody. Unless I found my stomach lining had been irreparably damaged. We had quite a good old chat in the end, and were promised various things, including a free lunch for me (hurrah! I was having the lobster!) and free desserts for everyone.
Things started getting silly then - the manager said she would name the Gazpacho soup after me - "Jon's Gazpacho" and we all agreed it would be a good thing that the restaurant obtained and played a copy of the CD that started all this, as a penance, for a week. The rest of the meal passed relatively smoothly, even with me ripping a hole in my thumb as I attempted to crack one of the lobster claws, and the desserts were delightful.
Unfortunately, I have suffered no ill effects since, so there doesn't really appear to be a case for litigation! Next time, I'll have them!
October 2003
This section contains posts from October 2003.
Weird Week
2003-10-01
This week has been, well, weird. I thought I had nothing to do, but nature abhors a vacuum... what has actually happened is that I have discovered a whole variety of things that have been forgotten or left by the wayside during my busier times. There are diary clashes, unfilled forms, an article for Silicon.com I was supposed to write last week - honestly, it's a wonder I manage to get out of bed. I wonder how I do!
Plus, at the end of last week I managed to hook my wife's bumper on a post and rip it off the car. Such a simple thing, such an expensive thing... and such a lot of trouble I'm in given that she only bought it two months ago. Fate is not without a sense of irony, as two days later it was her birthday and I was planning on purchasing personalised number plates. It helps, of course, to have a bumper to stick them on.
Work is up and down. Some great things happening - a few nice little earners, then of course there's the books, but there's no clear pipeline. "Worst comes to the worst," I said to my friend George at the weekend - "you could always get a job!" she interrupted, a thought that left me feeling just a little queasy. Well, I could, but, no. Not an option!
Four Colours Solution?
2003-10-27
I was just now listening to Simon Singh's Radio 4 broadcast about the Four Colours problem. No doubt a little over-stimulated by having just finished Cryptonomicon (cracking book, if you haven't read it), I'm now sitting in a Little Chef having scribbled a potential mathematical solution to the problem all over the backs of the menu sheets. It goes something like this: rather than trying to work out all the possible maps, it instead tries to enumerate all the potential relationships between two two-dimensional shapes.
First, an assumption - that the four-colour issue doesn't apply for shapes that meet at a point where several (more than four) lines meet, like the roads leading off from the Arc de Triomphe. If this assumption were wrong, and there were ten lines meeting at the point, this would require 9 colours. Sorry - what I'm referring to are the shapes that are diametrically opposite (across the point) as opposed to the shapes which are next to each other and happen to meet at the point.
Given this assumption, the steps to a proof are as follows.
1. Prove that there are only 6 ways in total that two shapes can join, based on whether the connection is made with an edge, a vertex or one object completely inside the other. This also proves that no shape can touch more than three already touching shapes.
2. Prove that, given an infinitely large shape as a starting point (represented as a line that stretches to infinity in either direction), and given that the shape has one colour and the space not occupied by the shape has another (visualise the sea and the sky, with the line being the horizon), you can add another two shapes into the resulting visualisation with only two more colours. This should be true however the additional shapes are positioned in relation to each other (there are 6 ways, remember).
3. Prove that you can add another shape without requiring any additional colour, based on the 6 available relationships between shapes:
   a. If the new shape touches one or two existing shapes (including the infinite line), use the colour of a shape it does not touch.
   b. If the new shape doesn't touch any other shapes, use either of two available colours.
   c. If the new shape touches 3 or more shapes, you may have to switch colours with existing shapes to accommodate the new relationship. As a shape cannot touch more than 3 others (proved in (1)), you should always have one colour to spare.
4. Finally, prove that the act of joining the two ends of the infinite line has no effect on the number of relationships that can exist between the existing shapes. The same is true if the plane is linked, for example by forming a sphere or a cylinder.
There may be another way to prove this, going purely on the 6-relationships basis. Any shape can be made into a triangle just by straightening out the lines. It appears to my limited brain that there are only so many ways that the planar sides of 2, 3, 4 - n triangular shapes can come into contact with each other. The central triangle can of course be connected to stacks of triangles; it's just that they can't all touch each other (for a start, they're all pointing away from the centre!). As no 2-dimensional object can have fewer than three sides, then topologically, any part of a map can be reduced to a 3-sided object with a number of 3-sided objects connected to it. As any shape can be positioned as the central shape, topologically, prove it for one and you've proved it for them all.
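The switch-colours-until-one-is-spare idea in step 3 can at least be sketched in code. Below is a toy backtracking four-colourer for a small, made-up map (the region names and adjacency list are invented for illustration; this demonstrates the recolouring mechanism, not a proof of the theorem):

```python
# Toy illustration of step 3: colour a map with four colours so that no
# two bordering regions match, backtracking (switching colours) when a
# new region's neighbours have used up the palette.

COLOURS = ["red", "green", "blue", "yellow"]

# Hypothetical five-region map: the "sea" borders every land region,
# and the four land regions border each other in a ring.
MAP = {
    "sea":   ["north", "east", "south", "west"],
    "north": ["sea", "east", "west"],
    "east":  ["sea", "north", "south"],
    "south": ["sea", "east", "west"],
    "west":  ["sea", "north", "south"],
}

def colour_map(regions, adjacency):
    """Return a region -> colour assignment in which no two adjacent
    regions share a colour, or None if four colours do not suffice."""
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        # Colours already taken by this region's coloured neighbours.
        used = {assignment[n] for n in adjacency[region] if n in assignment}
        for colour in COLOURS:
            if colour not in used:
                assignment[region] = colour
                if backtrack(i + 1):
                    return True
                del assignment[region]  # undo and try the next colour
        return False

    return assignment if backtrack(0) else None

result = colour_map(list(MAP), MAP)
```

A ring of four regions plus a surrounding sea actually needs only three colours, so the backtracker never has to exercise its undo step here; a map containing four mutually touching regions would force it to.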
Maybe I should have stuck with languages ;-)
2004
This section contains posts from 2004.
March 2004
This section contains posts from March 2004.
Now reading: Dead Air
2004-03-07
The trouble with Ian Banks...
... or even Iain Banks (so you know he's Scottish from the very first thing you know about him, and you won't be forgetting that, will you now), is that all of his main characters are the same. Sassy women, rock heroes, war heroines, journalists, players, they all possess an impeccable wisdom, wit and logic that is unswervably Banksian. Essentially, he writes to express his fantasies on the printed page. There's nothing wrong with this - indeed, the results are to be applauded - but despite the stylistic changes, the clever uses of the language, the flashes back and forward, I can't help wondering if Oor Iain is a bit of a comfort zone creature. Amalgamating a few central characters: he likes to live slightly dangerously, enjoys good sex and a few drugs occasionally, and likes to be the centre of a conversation. He values his independence, enjoys his music and feels decidedly uncomfortable that he is getting older, to the extent that he indulges in things that might incite the occasional snigger from those around him. All of which is probably about as far from the truth as it gets, but heck. It's a Sunday.
Stupid bloody, bloody trains
2004-03-18
I'm at Swindon station, where I've been cordially told that there is no scheduled train from here to the next station along (ten minutes away) for two hours. The man at the ticket desk kindly informed me that there was no demand for the service - ironic, I thought, as I was demanding it - and even more bizarre based on what he said counted as research, namely the number of people entering and leaving the station.
Now, given the fact there is no service to come in for, how could they possibly know - or is there something I'm missing here? Equally ironic is the reason why I went to Kemble rather than Swindon in the first place, that the car park is always too full by the time I get to the station to make use of it. The car park at Kemble itself was heaving, 300 cars I was told, busier and busier by the day, but still, no call for any more than a minimal service between the two stations. It just doesn't stack up.
Integrated transport system. Pshaw.
April 2004
This section contains posts from April 2004.
Invisible Man
2004-04-18
It is the evening of what has been a fine, sunny yet breezy day, one of the first that could really call itself Spring. The sun is holding a steady position on the western horizon, still warm and pleasant enough to walk the dog, yet hinting at the slow descent into evening, towards sleep and the distant stresses of another day. All is well as I take standard, measured paces down the lane, each step predefined by countless repetitions during rain or shine. In one hand, an extending lead flicks this way and that, following the scents of late afternoon. The other fumbles with wires and buttons, untangling the earbuds of an MP3 player before attempting to press play. With the opening bars the volume is set, electronica replaced by a steady drum beat. My pace quickens imperceptibly, falling in line.
"The world's gone mad..."
The music keeps gentle company, not too loud to drown the sounds of nature, loud enough for the nuances to come through, a soundtrack to the soundtrack. I turn a corner and head towards the usual field, its pathways well mapped, timed to perfection. At the gate I stop, hand reaching down to the leash before pausing momentarily.
"The invisible heart..."
No, I think as the music recedes, then quickens. Not the field, not today. The sun hovers fractionally lower in the sky as the decision makes itself and the music calls me on. Shout my name in public places, my lips mouth the words as my feet match the rhythm pace for pace. I hear the words, I say to myself, past the barn conversion and into the open space beyond the village, into the light of a late spring day. Close my eyes, I will walk stride for stride. I have become the invisible man and I follow every note and allusion as my path leads away, towards the fields and the trees. Leave me be.
"It fell through a hole in the corner..."
Ha - had me there. It's only music, I think as I recognise the tugging on the lead and look down at the doleful spaniel eyes glancing back towards me. I have barely released the clasp before Cassie bounds over the drystone wall, sniffing for the recent memories of rabbits and wildfowl. It won't be long before some startled bird is chased mercilessly across the stony field, its indignant croaks drowned by the yelps of the determined, yet ultimately hopeless hunter. I walk on, turning a corner and crossing into the fields myself, a new song to accompany me.
"I had this recurring dream..."
I walk, or try to walk, my feet lightening until they barely touch the ground, rising above the puddles and pieces of ancient brickwork. I lose myself in attics of treasures, the first signs of dusk hovering on the edges of my vision. I sway, involuntarily at first, then deliberately to cover my discomfort though there is not a soul to see or care. Unscared now, I fling my arms from side to side in a brief release of euphoria, singing snatches of song in tuneless abandon. Ploughed soil gives way to meadow as I come over the rise, the coarse grass still flattened by the winter.
"It's always a struggle..."
There is no time to stumble, nor to prepare. Uncluttered vocals, floating on layers of harmony, render me drowsy and draw out my soul. Unprotected, I am hooked by gentle rhythms, once again fixing my pace and urging me on, taking the rest of my life away. What a wonderful, fantastic place, lush green stretching in every direction, plunging across and down before me. The music lays me down to drink in the vastness of oncoming night, the universe reclaiming its own as the first stars wink their encoded greetings. In a moment outside of real life, I drink in its majesty. I am nothing, I have nothing, my purpose long forgotten as I can do nothing but wonder. Somewhere inside the remnants of my inner being, even now dissolving into particles and casting away on the breeze, there surfaces a snatched memory of a song, "...my body has gone, but my eyes remain..."
"Forgive me if I stare."
May 2004
This section contains posts from May 2004.
You're gone...
2004-05-14
A good friend, Tom Vance, died two days ago.
http://www.qconline.com/archives/qco/sections.cgi?prcss=display&id=195038
The Web may be great at bringing people together, but the global village isn't particularly good for mourning.
RIP Tom mate, one more star in the sky.
June 2004
This section contains posts from June 2004.
Walking the dog
2004-06-04
Walking the dog, across open fields with the sun breaking through the haze of morning. From the jukebox walkman (for a change - I usually like the peace), come the opening bars of Out Of This World. "This is your day," sings h, and I wonder why at the same time as acknowledging the inspiration. Fields, haze, one foot in front of the other as the song unfolds and reaches its climax. I feel slightly moved, releasing a sigh just warm enough to be visible on the still dewy air. My head tips back, leading my still-waking eyes to focus on a B-52 bomber. As it lazily crosses the sky, no doubt heading back to Fairford after a mission, I am surprised I didn't notice it before. I decide to change the soundtrack to something appropriate, but even as my fingers press the buttons, the huge aircraft shimmers into the mists of morning. Only the first line of the song is reached before the plane has disappeared altogether, the morning sun's gentle rays obscuring its dark shape as I am left with the music and the words, "Can you make it... on your own?"
And I wonder why.
August 2004
This section contains posts from August 2004.
Sold on voice recognition
2004-08-05
All those voice recognition experiments culminated in an article... here:
http://www.theregister.co.uk/2004/08/03/voice_recognition/
I must say, I'm completely sold on voice recognition, for one simple reason: I'm doing more writing than I could otherwise do. This doesn't just mean I'm getting things done faster (which is also true, but less so, as there's more to life than writing things down), but doing more things. In biz-speak this means productivity - in psychobabble it equates to reduced stress, so overall it's a winner! A field with nobody else in helps of course :-)
October 2004
This section contains posts from October 2004.
Mobility is seasonal
2004-10-22
Thought for the day:
Mobility is seasonal. The idea of sitting alongside the canal right now, in the mud and the gales, fills me with horror. Still, I can sit in the car and at least look at the canal. Which I'm doing.
Actually, I wonder if mobility is really about mobility at all. An animated discussion with Cisco a few months ago led to the phrase "mobility is ubiquity" - in other words, you can be as mobile as the technology lets you be. At the moment, for example, I am most certainly text-mobile - over my GPRS connection I can update my blog (first time in a while), download my email headers and generally function. I'm hardly graphics-mobile, however, and I'm certainly not media-mobile.
Text is good.
Wine
2004-10-22
Fact: I would rather drink new world wine than cheap French wine.
Fact: I would rather drink expensive French wine than, well, just about anything else...
November 2004
This section contains posts from November 2004.
New UK Rock Station
2004-11-02
From Blabbermouth, "A new British rock/metal radio station called Arfm will officially launch from the NEC in Birmingham, U.K. on Wednesday, November 24 at 12 noon (GMT) from the Sound Broadcast Exhibition. British rockers MAGNUM will be part of the launch program with Bob Catley and Al Burrow joining presenters Adrian Juste, Steve Price and Simon Gausden. Other guests include MARILLION, Jim Peterik and TARA'S SECRET."
So there - I don't listen to the radio that much myself, but I do like to know there is some decent competition for the media-driven pap stations.
Nasty Upgrades
2004-11-26
Ouch! Upgrades can be fatal!
Nasty, messy situation at the UK Dept of Work and Pensions this week, when an upgrade pilot decided to roll itself out to the majority of the 80,000 protected PCs. Here's the Register article - it's up on the Beeb and various other sites.
Now, apart from this being a cautionary tale just a week after the latest Microsoft announcements for its "management vision" at IT Forum in Copenhagen, this is a welcome reminder of where we should be focusing our security efforts. As we showed in a recent Reg reader study, most security issues are inside jobs, through system failure or user incompetence. The DWP case is a bit of both, if I understand correctly. As well as having systems that ensure protection against supposedly Ukrainian hackers (I say supposedly, as recent rumours are that said gangster ring is in fact operating out of the USA and covering its tracks by using said republic as a cover), it is far more important to protect people from themselves and from the flakiness of their own computer systems.
It seems astonishing that a modern computer environment can be impacted to this extent by its own upgrade routines - in this case, sourced from Microsoft. Sometimes, it's just too easy for vendors to blame the bogeyman - I'm not sure they'll get away with it this time.
2005
This section contains posts from 2005.
May 2005
This section contains posts from May 2005.
eConstructing: Life on Mars
2005-05-24
In August 1996, NASA issued a press release that suggested evidence had been found to indicate life on Mars. Meteorite ALH84001, it was discovered, showed several mineral features "characteristic of biological activity" and, more importantly, some possible microscopic fossils of primitive micro-organisms. These ultra-thin, worm-like 'fossils' appeared to be segmented, but were about one five-thousandth of the size normally associated with such bacteria.
Skeptics were quick to jump on the claims. As no known life forms on this planet were of the size of that found in the meteorite, they said, it was therefore very unlikely that the findings were an indication of 'life', fossilised or otherwise. It was also ironic that the staggering claims came at a time when funding for Mars projects was being sought, against a background of an American public which was looking increasingly unsure of whether their taxes were being wisely spent in space.
Today, the jury remains out on the issue. Interestingly, in 1999 organisms only 20 nanometers across were discovered in rock samples taken from several miles under the seabed off Western Australia. These were found to grow when subjected to more normal conditions and given some food. Despite this being a boost to Mars-life theorists, the fact remains that issues such as 'Life on Mars' are, today, far more about speculation than concrete evidence. It could be argued that the existence of life somewhere in the Universe other than our own Mother Earth is an inevitable truth, purely by extrapolating the theory of evolution to other planets. If, eventually, solid evidence is found, it is likely that we are in for another shake-up of our science, religious beliefs and maybe even our social structures. Darwin and Marx would be proud.
eConstructing: From Vitruvius to Zachman
2005-05-24
"Architecture is frozen music" - Goethe
"If you know the notes to sing, you can sing most anything" - Maria von Trapp
In the fourteenth and fifteenth centuries the Black Death swept across the European continent. As it abated it left behind a swelling population and an upsurge of demand for goods and services. The following period of regrowth and renewed prosperity very quickly came to be known as the Renaissance, or 'rebirth'. Whole populations were inspired to find a more civilised existence than the Medieval times they left behind, leading to a surge of interest in 'higher things' such as science, art and architecture. Many great ideas and inventions came out of the Renaissance, such as Gutenberg's printing press, which were to transform the way that business was conducted and, perhaps more importantly, that information was communicated. Indeed, where would Europe's revolutions have been in the eighteenth and nineteenth centuries, without the ability to produce and distribute pamphlets to a literate audience?
The rebirth of the arts led to a resurgence of interest in the 'great civilisations', namely the Roman and Greek periods. Italian architects such as Donatello and Brunelleschi used to pick around the remains of ruined Roman buildings, seeking inspiration for their neoclassical designs. Shortly before 1450 came a breakthrough with the rediscovery of the treatise De architectura libri decem, or "Ten books on architecture", by Marcus Vitruvius Pollio. As an artillery engineer, Vitruvius served under both Julius Caesar and his successor, Augustus. Some time around the birth of Christ, when Vitruvius was still involved in building works, he documented the principles of both Roman and Greek architecture so as "to deliver down to posterity, as a memorial, some account of these your [Caesar's] magnificent works." The result was a ten-volume manual covering in depth the architectural workings of all that he saw around him, both military and civilian.
Leon Battista Alberti, one of the unsung heroes of the Italian Renaissance, translated Vitruvius' treatise and added his own architectural ideas. Following its translation the book was one of the most influential works of its time, being widely used both in Italy and outside. Since the earliest times, architecture has been seen as a subject worth taking seriously.
Marillion Bass Museum 17 November 2000
2005-05-24
Seeking inspiration. Excuse me while I switch on the lava lamp. There.
This is a tale of giving colour to somebody who has only ever seen in black and white.
Hello. You don't know me, and well, there isn't much to know. Am I a Marillion fan? Hell, no. I wouldn't stick out my neck that far for anything or anyone, I'd only get labelled, heckled, discouraged. This fan stuff is for geeks, saddos without lives, nerdy types who inhabit chat rooms and conventions as a poor substitute for reality. Trekkies, City supporters, Tolkienites, computer gamers and band followers, they're all the same, right?
Wrong.
Me, then. I got into Marillion when a mate of mine (Hugh, if you're out there, give us a wave) came into school one Monday morning with a triumphant look in his eyes and a cassette tape in his hand. We were - what - sixteen at the time? Anyway, the tape. It was Tommy Vance, the Friday Rock Show on Radio One (which I regularly missed, my mind not being the sharpest in the pack), extolling the virtues of this new band. He played Market Square Heroes (which I quite liked), Three Boats Down From The Candy (which, sorry, was not the most impressive debut) and Forgotten Sons, which blew my already challenged mind. I taped the album when it came out (Bought it? You're joking, aren't you? What kind of a rich schoolboy do you think I was?), and the rest is eighteen years of consistently solid listening history interspersed with a couple of concert attendances.
There you have it.
Then, in 1999, I joined the Marillion fan club, The Web. Why? Had I finally got round to it/wanted (belatedly) to be on the cover of Marillion.com/been fed up too often with only hearing about tours after the event/felt an inner sense of yearning that could not be met by conventional means? Who can say. Maybe I just fancied something new. Having made the decision, however, I was determined that the passivity would be resigned to the past! Watch out, Web world, Here I come!
So I lurked.
I hid. I read the news, followed the mailing lists, listened in to the online chat. So much for the great arrival. This went on for a year or so, until, somehow, I managed to get tickets for an exclusive gig at the Bass museum. I remember that, at the time, it was very important for me to keep dialling, redialling, tri-dialling on two land lines and a mobile phone. Try that one, Anneka. All went quiet for a few weeks, and then the day came. I found myself curiously at peace as I drove up from Gloucestershire, listening to Made Again to get me into the "live" mood. I expected heavy traffic but found none, I was sure I would get lost but the museum was signposted from the outskirts of Burton. No petrol problems, no floods, all was serene as I found myself in the Bass Museum car park. The entrance beckoned so I wandered inside and asked the first person I saw whether I could get a drink. Another chap escorted me down alleyways and through a room one corner of which was packed full of keyboards, drum kits and other musical paraphernalia. Steve Hogarth was sitting in the middle of it all, picking out some notes as a couple of guys snapped pictures. I kept walking, passing Mark Kelly in the corridor. Then a door shut behind me and I was in the bar. I ordered a drink and slumped down in a chair, murmuring feeble comments like "hang on, wasn't that..."
An hour later I was queuing, meeting fans who had attended over a hundred events, who discussed each track on every album like they would talk about old friends. "How many concerts had I attended?" I was asked. "Well, three, or is that two..." I stuttered. Maybe I should have left then, while there was still time, I should have returned to my CD collection and stadium-effect stereo system, listening to it loud with the lights off, a cup of Horlicks and a comfy cushion to ease my back. But then the gate opened and we moved through. As I glanced back, I could make out a drizzle-soaked sign in the dark. "No Exit," it said. I could only go on.
We went in, we ate and we drank, we listened to music. Sure, occasional mistakes were made, intros were missed and timings misjudged but it all added to the reality of it all, the honesty of it all. Here were a bunch of chaps making music together as amicably and as easily as if it had been in one of their front rooms, or mine. The set list was old, new and borrowed songs and arrangements, a couple of jams and the requisite number of encores - only this seemed out of place as it reminded me that this was a concert, these were musicians and I was in the audience. It was over all too soon, eighteen songs then the lights went up and it could have been time to leave, but I didn't want to go. It was as if a curtain had been removed from my eyes as the music played, unpicked and unravelled. My drowse had gone, I was waking.
I was staggered by the simplicity of it all. Here were some people, making music. Here were some others, enjoying the music. The band members were being fed by the applause, grinning and delighting in providing its source. The audience delighted in the music, in the closeness, the reality, the honesty. There were no hidden devices, smoke or mirrors; this was people, interacting and having a ball.
I was sold. I walked into the bar with open eyes, ready for action. I bought a drink and turned to see Mark Kelly, looking much the same as he had looked not five minutes before. Well of course he did, he was Mark Kelly. I thrust a scrap of paper and a pen at him. Pete Trewavas was leaning against a table, chatting. Thrust. I started to ask a question, and then Ian Mosley appeared at the doorway. Sorry Pete - thrust. What was I doing - interrupting one idol to snatch two seconds of handwriting from another? I apologised, then out came Steve Rothery. Thrust. H. Thrust. As I did so I realised what I was doing. Just a bunch of guys, eh? People interacting, eh? Nah. This was idolatry, pure and simple. I stopped and asked H a question - advice for a budding singer? He answered. I answered. We talked about Christmas carols, and as we did things returned to how they should be. A queue of thrusters appeared around me so I backed away, gracefully, returning to Pete and determined to have a real conversation.
Which I did. After a while I went over to Mark and discussed life, British Home Stores and everything for what could have appeared an unseemly long time, but it wasn't. It was just a chat with a guy in a bar. I'd love to meet him again, and maybe one day I will.
The towels were on the pumps and the organisers were starting to look shifty so I figured it was time to leave. As I drove home I listened to Made Again for a while, only this time it was different - as I heard the keyboards, I thought, "that's Mark," and each guitar solo was played by a hand I had shaken not an hour before. Eventually I turned the music off and left myself to my memories of a remarkable evening.
I've missed out a few bits and bobs here and there. There was the trip to the States which clashed with the museum gig, but which was cancelled at the last minute, there was the excellent company of Dave, Max and the many others, there was the speed camera flashing on the way out of Burton-on-Trent, but I have bored you enough. At the Bass Museum I was reminded that music was not about CDs and covers, merchandise and marketing, corporations and egos. Even with a band as renowned and as popular as Marillion it could still be a deeply personal experience for all parties. Maybe it was all put on, but I have heard enough opinions of the band and its following to convince me otherwise. I sensed a feeling of community, something which (it is claimed) is so sadly lacking "in this day and age." Am I a Marillion fan? Who cares. Will I be back the next time? You bet.
Cryptonomicon
2005-05-24
Neal Stephenson really is a jolly good writer, isn't he? He must be - he can churn out more words than aphids on a bed of roses (poor, but I'm working on it). He's not so strong on names and places, which may be why most of his recent work has involved the same set of families through history, but we'll give him the benefit of the doubt for that.
The first book of his that was brought to my attention was Cryptonomicon (thanks Fraser). Once I had got into the rhythm of the most exceeding levels of detail, both in context and in conversation, I discovered it was a very good wheeze altogether. I won't spoil it but I will heartily recommend it.
Particularly for anyone with more than a passing interest in security. It recently occurred to me that the book was about security, at every level - there were the global security issues of a world war, requiring technical security mechanisms such as cryptography, alongside more physical measures such as whopping great warships. Many were the examples (consider poor Goto) of how one can move from one type of insecurity to another, out of the frying pan and through a series of unexpected fires, before one finally realises that security is an ideal and not a reality. Compare these to the present-day tales of financial security and computer crime, the characters constructing ever more complex architectures to protect both their data and themselves. Finally, the characters themselves were either blissfully inside their own comfort zone, or otherwise.
Security, what a multilayered beast you are. Ignore my ramblings and read the book.
The Art of Heroic Failure
2005-05-24
"I know you all, and will awhile uphold the unyoked humour of your idleness: yet herein will I imitate the sun."
"So, when this loose behaviour I throw off and pay the debt I never promised, by how much better than my word I am, by so much shall I falsify men's hopes; and like bright metal on a sullen ground, my reformation, glittering o'er my fault, shall show more goodly and attract more eyes than that which hath no foil to set it off."
"I'll so offend, to make offence a skill; redeeming time when men think least I will." Prince Hal in Henry IV, Act 1, Scene 2

The man who, as King Henry V, was later to lead his country to an unexpected victory at Agincourt was (according to Shakespeare) the kind of kid that puts parents to shame. Young prince Hal is ridiculed as a maverick and a layabout, the "sword-and-buckler Prince of Wales," who would be better "poison'd with a pot of ale" than become King of all England. In truth and by his own admission, Hal wined, dined and womanised; he fraternised with vagabonds and cavorted with thieves.
Ultimately, it is the latter group that suffered. Hal knew that, when the time came for him to adopt the kingly mantle, he would indeed throw off his loose behaviour "and pay the debt I never promised," namely to lead his country in peacetime and in war. All this, Hal conducted with barely a backward glance at the life he left behind him, much to the chagrin and eventual doom of his erstwhile compatriots. Sir John Falstaff, it is said, was never the same following the public disgrace he received at the hands of the newly crowned king. "I banish thee," said King Henry, "as I have done the rest of my misleaders." The comfort he found in the bottle was ill designed to prevent the morosity which eventually, it is said, led to his death of a broken heart. "Falstaff he is dead," said his mourners, for "the fuel is gone that maintained that fire."
Hal showed himself to be smarter than either his friends, or his enemies, gave him credit for. "Herein I will imitate the sun," he announced. He played while playtime was an option, but once his calling came he put such frivolity behind him. One day he was a boy, the next he was king. Would not all parents allow some flexibility in their dealings with adolescent offspring, could they only be sure that, when the time came, all trappings of youth would be firmly placed in the past! Come on, it is only a play, after all.
eConstructing: Of Evolution and Revolution
2005-05-24
"An elderly lady attended a public lecture given by an astrophysicist on how the Earth goes around the Sun and how the Sun circles about with countless other stars in the Milky Way. During the question and answer session, the woman stood up and told the distinguished scientist that his lecture was nonsense, that the Earth is a flat disk supported on the back of an enormous tortoise. The scientist tried to outwit the lady by asking, 'Well, my dear, what supports the tortoise?' To which she replied, 'You're a very clever young man, but not clever enough. It's turtles all the way down!'" "A Brief History of Time", Stephen W. Hawking
As far as Darwin was concerned The Origin of Species was never meant to be the end of the story. In the introduction Darwin describes how he had been urged by his publishers to produce an Abstract, "as my health is far from strong." Darwin needn't have worried about his health, as he lived for a good twenty years after the publication date. However the completed sketch, in which Darwin planned to demonstrate fully how he arrived at his conclusions, was never to materialise. The reader is assured, however, that the Origin was based on at least twenty years' scholarship and field work.
Another publication of the same period was also driven by the desires of its publishers. This time, the author was a thirty-year-old political journalist and social philosopher, required to produce a document in time to coincide with one of the many uprisings of the period. The publication, a pamphlet whose first edition numbered only 23 pages, was rushed out in less than six weeks with little time for reflection or review. The year was 1848, the pamphlet was entitled The Communist Manifesto and the author was Karl Marx.
Although the initial print run of the Manifesto went largely unnoticed by those involved in the revolution in Germany in 1848, it has gone on to become one of the most widely read texts ever published. As Darwinism challenged religious beliefs in its day, so Marxism has gone on to become to all intents and purposes a religion with the Manifesto as its holy book. According to historian AJP Taylor, Karl Marx himself claimed that Darwin had done for biology what he had done for social sciences.
Despite the certainty with which both Darwin and Marx held their beliefs, neither ever claimed to have invented something entirely new. The main achievement of both can be explained in the timelessness of each of the texts. Each remains as accessible today as it was on the day of publication.
The revolution that never was
2005-05-24
"I refuse to prove that I exist," says God, "for proof denies faith, and without faith, I am nothing." "Oh," says man, "but the Babel Fish is a dead give-away, isn't it? It proves You exist, and so therefore You don't. Q.E.D." "Oh, I hadn't thought of that," says God, who promptly vanishes in a puff of logic. Douglas Adams, The Hitchhiker's Guide to the Galaxy
Modern religion and ancient history make uncomfortable bedfellows. Like a senile married couple, they shuffle alongside each other with only the vaguest of recollections about why they remain together. On the one hand, history is an unavoidable and beneficial part of religion, serving to set the context as well as to provide a factual basis for beliefs. On the other, some modern religions seem to prefer to leave things to faith alone, as though historical fact might somehow damage the assumptions upon which they are founded.
In late 1947, just east of Jerusalem in an unpopulated area of what is now Israel, a Bedouin goat-herd made a most remarkable discovery. Whilst searching for a lost goat, the boy, known as Muhammad adh-Dhib ("the Wolf"), came across a cave which later proved to contain no less than forty earthenware jars, each stuffed with ancient scrolls in Hebrew and Aramaic. Thus began a tale of intrigue of which the ending has still, over fifty years later, to be played out.
The Dead Sea Scrolls, as they are commonly called, have been stolen, traded on the black market and advertised in the small ads of The Wall Street Journal. They have witnessed, and been affected by the many wars which have involved Israel since 1948. Most interestingly, the scrolls appear to have been subject to a bizarre cover-up by the institution that is the Catholic Church. From the earliest days following the discovery, when a team of Catholic priests took control of the scrolls, there are repeated examples of how the team sought to reveal a minimum about their content and to play down their importance, much to the chagrin of other scholars and the wider public.
It was not until the early nineties - up to fifty years following their discovery - that copies of some of the more controversial scrolls were released to a wider audience. Why the Church wanted to keep a cap on the bubbling volcano that is the Dead Sea Scrolls remains unknown, largely because a number of scrolls in their possession remain unpublished. It is thought that they contain material about a Jewish sect known as the Essenes, whose beliefs seem to echo those of Jesus of Nazareth: however the Essene writings may predate Jesus' time on earth by up to one hundred years. If it were true, this would cast doubt over the fundamental belief of Christianity, that the Christian teaching espoused in the New Testament originated with Jesus. Heady stuff - it is no wonder, then, that the religious institutions were (and probably remain) worried. Such facts could rock the faith of the whole Christian Church and cause a revolution in its ranks. As yet, however, it would appear that the attempts to play down and delay the evidence have paid off. The controversial ideas that could have been spawned by the two-thousand year old texts have not resulted in anything other than frustrated scholars and a still-ignorant public.
Incidentally, technology has not been able to resist joining the fray. There are over 13,000 mentions of the Dead Sea Scrolls on the Internet, referencing specialist sites, exhibitions, translations, academic publications and discussion groups. To be sure there will also be a fair handful of cranks and off-the-wall sites, but they are part of the rich tapestry of online humanity. The Internet is bringing the scrolls to the widest possible audience and this can only be a good thing.
Overall, the story behind the scrolls could be subtitled "the revolution that never was." Unlike the writings of Darwin and Marx, and despite the fact that their content ranks just as highly in terms of the impact they might have had, the publication of the scrolls is now unlikely to have any profound effect on humanity. The institutional conservatism of organised religion, this time, has quashed any potential there may have been for a revolution in either thinking or teaching.
Prochain Pipeline and the Theory of Constraints
2005-05-24
This article originally appeared in Project Management Today, in February 2003. (2009 update - the magazine site seems to be defunct so I've changed the link).
Summary:
"Jon Collins reviews the ProChain Pipeline product, an add-on to Microsoft Project that supports the principles of the Theory of Constraints. He describes the tool and explains how it can be used in practice."
For the PDF, click here.
eConstructing: Darwin and Punk Eek
2005-05-24
It is now over one hundred and fifty years since Charles Darwin published "The Origin of Species." Despite his own gripe that he wasn't very capable as a writer, the work remains one of the outstanding achievements of science writing - lucid and accessible to scientist and general reader alike. Of course the book was far more than just a good read. It transformed biological science and evolutionary thinking, and it caused a furore in religious circles. However it was the accessibility of the volume that ensured it found its way onto the shelves and reading lists, and from there into dinner-table and drawing-room conversations. Darwin did not invent evolutionary theories (these could be ascribed to the Greeks), however he most certainly substantiated, elucidated and popularised them in a fashion hitherto unseen.
Darwin was fundamentally committed to the central pillar of "The Origin," namely that life forms mutate over extended periods of time, for example in response to environmental changes. The reason he gave was "that the greatest amount of life can be supported by great diversification of structure": evolution is driven by the will of all life forms to succeed and grow, despite the best efforts of both the physical world and indeed other life forms to deny them the opportunity. Despite this Darwin was the first to acknowledge that much remained obscure. "I am convinced," he wrote, "that Natural Selection has been the main but not exclusive means of modification."
In its time, Darwin's principles were the subject of much debate between the establishment and the arriviste, the theologian and the scientist. The concept of plants improving over millions of years was generally perceived as acceptable; however "the monkey theory," that man evolved from - ugh! - animals, proved difficult for many. Not so today, which sees even this previously unthinkable tenet being accepted as central to the street wisdom of the agnostic West. Without demanding an understanding of the arguments that support it, Natural Selection has been accepted on face value by the general population, discounting of course the religious groups that have stuck to their guns since the theory was first espoused.
In 1972, Niles Eldredge and Stephen Jay Gould were graduate students at the American Museum of Natural History in New York, and they were having problems. The trouble was that their chosen species - trilobites and land snails - were showing precious little evidence of any evolution at all, even through the thousands to millions of years' worth of strata where the species were found. In attempting to resolve this issue, they developed a theory of their own. The pair decided that speciation theory, which had for a while been the subject of discussion in the biological community, was also appropriate for understanding fossil records. Speciation theory is based on the principle that new species are much more likely to develop within an isolated group than the main population, as any mutations have a much better chance of dominating a small population than a larger one. Eldredge and Gould applied this principle to their analysis of the fossil records and came up with "the theory of punctuated equilibrium." The central principle is that evolution occurs, in the main, very slowly and progressively, with the occasional evolutionary leap caused by small groups of life forms which have somehow become separated from the main population. This theory conveniently fitted the issues that they had raised with regard to the near-standstill slowness of 'normal' evolution, not to mention the absence of fossil evidence concerning 'transitional' life forms - neither one species nor another. After all, if the subpopulation is so small, the statistical likelihood of discovering any fossils would be close to zero.
Exactly what the circumstances were that caused the sudden jumps in the fossil record, Gould and Eldredge did not know. It could be down to gradual, though relatively fast in geological terms, changes in landscape or weather. Alternatively the occasional cataclysmic event, such as the comet which is reputed to have wiped out the dinosaurs, could have led to the rapid evolution of all life forms, with survival permitted only to those which managed to equip themselves for the aftermath. We lay people only have to look at the reputations of various closed communities for interbreeding, and their consequences, to see the potential for truth in the theory of punctuated equilibrium.
Eldredge and Gould published their findings in the book "Models in Paleobiology" and, in doing so, knocked the evolutionary establishment on its back. The theory was, to the pair, as much about the equilibrium ("Stasis is data," wrote Gould) as it was about its punctuation. Despite this it didn't go down at all well with the older folk of institutionalised Darwinism. "Evolution by Jerks," or "Punk Eek" were terms characterising not only the concepts but also the hostility felt against the authors.
Interestingly, the progress of evolutionary theory has itself followed the patterns put forward by Gould and Eldredge. Both Darwin and the young turks of New York count as small groups working outside the confines of the mainstream. Both put forward theories that fundamentally changed the understanding of the time. Darwin, in particular, created the right to question what had gone before; however, conservatism was quick to set in once again. Gould and Eldredge found themselves going up against the accepted beliefs and practices of the palaeontological, Darwinist establishment. It seems that the right to question accepted beliefs has often to be fought for, with the battles won and lost on grounds of human nature and not of science or fact.
All About: Wireless Networking
2005-05-25
This is where radio-based communications enable computers and other devices to connect without the need for wires. The technologies themselves are based on wireless transmitters and receivers of digital data, and can be used to replace wires in corporate and home-based environments.
Here we talk about what wireless networking technologies exist - the current leading technologies are Wireless Ethernet and Bluetooth. We look at the benefits of wireless networking such as flexibility and manageability, and we cover how a wireless infrastructure extends rather than replaces a wired LAN. Looking into the future, we consider HiperLAN 2 and the predicted convergence of wireless networking with the Mobile Internet.
What is Wireless Networking?
In the simplest terms, a traditional, wired computer network (or Local Area Network, LAN) can be seen as a number of boxes joined together by wires. Often the positions of those boxes are dictated by the constraints imposed by the wires. When most people talk about wireless networks in the corporate environment, they usually mean wireless Ethernet. This does what it says on the tin: it takes the ubiquitous LAN protocol, Ethernet, and extends it to wire-free environments. Wireless Ethernet supports data speeds of 2 megabits per second (Mbps) and 11 Mbps. To put this into perspective, traditional Ethernet could handle 10Mbps and many networks being deployed today support 100Mbps.
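As a back-of-envelope illustration of what these headline rates mean in practice, the following sketch compares idealised transfer times for a 100-megabyte file. It deliberately ignores protocol overhead and contention, so real transfers would be slower:

```python
# Idealised time to move a 100 MB file at the nominal rates quoted
# above; protocol overhead and shared-medium contention are ignored.

BITS_PER_MB = 8 * 1024 * 1024  # bits in one megabyte

def transfer_seconds(size_mb: float, rate_mbps: float) -> float:
    """Payload bits divided by the nominal link rate (1 Mbps = 1,000,000 bit/s)."""
    return (size_mb * BITS_PER_MB) / (rate_mbps * 1_000_000)

for name, rate_mbps in [("Wireless Ethernet (2 Mbps)", 2),
                        ("Wireless Ethernet (11 Mbps)", 11),
                        ("Traditional Ethernet (10 Mbps)", 10),
                        ("Fast Ethernet (100 Mbps)", 100)]:
    print(f"{name}: ~{transfer_seconds(100, rate_mbps):.0f} seconds")
```

Even on this optimistic arithmetic, the gap between 2 Mbps wireless and 100 Mbps wired is a factor of fifty.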
The second high-profile wireless protocol is Bluetooth, designed to replace the wires between devices and peripherals (say, a computer and a printer, or a laptop and a mobile phone). Bluetooth is often compared to the InfraRed standard IrDA, which it is probably going to replace. Unless you have very specific requirements, these two protocols should cater for your needs; hence we shall concentrate on them here.
The Business Benefits of Wireless Networks

The key benefit of wireless networking is flexibility: in the box-and-wire scenario above, if you take away the wires, you can be more flexible about how you locate the boxes. Hence:
* Hot-desking environments (in which desk space is allocated on a first-come-first-served basis) can be built, configured and modified cheaply and simply.
* Office facilities can be allocated and moved as user needs dictate, for example to cater for department reorganisations.
* Users can access applications from wherever they are on the site, and even on the move, opening up possibilities in environments such as warehouses.
It is difficult to find a report that concentrates on Bluetooth from a corporate perspective. This may be for a couple of reasons: the first is that no one is really sure what the tangible benefits will be, so it is the intangibles that are focused on. The second is that the business benefits are irrelevant to the manufacturers, which are equipping devices with Bluetooth whether end-users want it or not. Time will tell.
Deploying Wireless Networks in the Corporate Environment

In most circumstances, wireless extensions will be added to an existing, wired LAN.
Issues with Wireless Networks

The main issues with current wireless networking technologies are as follows:
* Wireless Ethernet, which is touted to deliver either 2Mbps or 11Mbps over a 30-metre indoor range, in reality delivers less than the maximum, depending on the number of simultaneous users, the distance from the transmitter and any obstacles that may be in the way.
* Wireless Ethernet devices have interoperability problems; that is, devices from different manufacturers do not always work properly together, hence the need for the Wireless Ethernet Compatibility Alliance.
* One of the prime protocols, Bluetooth, interferes with wireless Ethernet and with itself.
* There are also question marks over the security of wireless networks, with both eavesdropping and hacking being major concerns.
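The first of these issues can be illustrated with a crude model. Wireless bandwidth is a shared medium, so the nominal rate is divided among active users; the 50% efficiency figure below is an assumption for illustration, not a measured or specified value:

```python
# Crude shared-medium model (assumptions for illustration only):
# nominal 11 Mbps, roughly half of which survives protocol overhead,
# divided among the users active on one access point.

NOMINAL_MBPS = 11.0
PROTOCOL_EFFICIENCY = 0.5  # assumed overhead factor, not a measured figure

def per_user_mbps(active_users: int) -> float:
    """Approximate throughput each user sees on a shared access point."""
    return (NOMINAL_MBPS * PROTOCOL_EFFICIENCY) / active_users

for n in (1, 5, 10):
    print(f"{n} users: ~{per_user_mbps(n):.2f} Mbps each")
```

Even this optimistic sketch shows per-user throughput collapsing well below the headline rate as an access point fills up, before distance and obstacles are even considered.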
The Future of Wireless Networking

HiperLAN 2 is a standard defined for next-generation wireless LAN environments, and is capable of supporting theoretical data speeds of 54Mbps. Much industry support is rallying behind HiperLAN 2. Note that HiperLAN 2 also promises interoperability with the 3G protocol UMTS, which we discuss in our Mobile Internet section. It may be that, in the future, this differentiation disappears as the technologies converge with each other (and, potentially, with wired technologies). For the moment they remain poles apart.
As for Bluetooth, as chipsets for this protocol are currently being incorporated into many devices, it looks set to carve itself a niche in the future.
All About: Web Services
2005-05-25
Distributed applications are nothing new: parts of software applications, each running in the most appropriate place and on the most appropriate hardware. In general, distributed applications separate application functionality (say, the database engine, the security functionality, the transaction engine and the number crunching) onto different servers. Now, imagine if these application elements employed the Internet as their communications mechanism: they could be situated anywhere on the globe. Imagine still further if different third parties managed such elements: we could access them as services over the Web. There we have it: Web services.

Here we consider the essential elements of Web services, particularly the platform technologies UDDI, SOAP and XML, and we look at the efforts of software vendors such as Sun and Microsoft to provide suitable frameworks for Web services. We consider the potential benefits of Web services and examine whether such applications are deployable today. We look at the issues facing Web services, particularly the fact that much is still vendor hype, and look at how Web services will evolve in the future.

What are Web Services?

Defining Web services is about as tricky as defining terms like "application" and "infrastructure": we all use them, nobody knows exactly what they mean, but somehow we muddle along. Consider the following. Point one: a software application can be thought of as a set of software elements, each dealing with one part of the functionality. Point two: these software elements can communicate using defined protocols and mechanisms. Point three: the elements do not need to reside on the same machine. Point four: in fact, they can be situated anywhere in the world and they can be managed by any third party. Whoa! Hang on there. Things were going OK until the last point, right?
But why not separate components across the Internet? It is a good enough basis for human communications, so why not use it for programs (or even parts of programs) to communicate? The more that this is thought about, the more two points become clear:

* There needs to be an Internet-friendly set of mechanisms to enable the communications to take place.
* This is an appropriate basis for some, but not all, applications.

On the basis of these points (which we shall come back to), there exists the potential to construct applications from piece parts that are accessed over the Internet from different providers. These piece parts are called Web services.

Clearly, mechanisms to support Web services need to have been accepted as a standard by the major players in the industry, in this case Sun, Microsoft and IBM. Successfully agreed, accepted and adopted are:

* The eXtensible Markup Language (XML), a text-based formatting language appropriate for defining and transmitting data between application elements across the Internet. XML has been accepted and adopted by all the major industry players - well, all the ones that matter.
* The Simple Object Access Protocol (SOAP), a standard for sending messages between objects: just think that any application element can be considered as an object and you won't go far wrong.
* Universal Description, Discovery and Integration (UDDI), which provides a globally accessible registry of service providers and the services they provide.

It is with application support frameworks that things get tricky. A number of major IT companies are positioning themselves as framework providers; indeed, this is where they see that there is money to be made. In particular, Sun, Microsoft, HP and IBM have frameworks of their own, namely Sun ONE, Microsoft .Net, HP NetAction and the IBM framework for e-business. The final piece in the Web services puzzle is the provision of a communications mechanism, which enables application elements to communicate via the Internet.
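To make the XML and SOAP mechanisms above a little more concrete, here is a minimal sketch in Python's standard library. The service namespace, method name and parameter are invented for illustration; a real service publishes its own interface:

```python
# Build a minimal SOAP 1.1 request envelope for a hypothetical
# "GetQuote" operation. The "urn:example:quotes" namespace, the method
# and the "symbol" parameter are made up for illustration.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(method: str, params: dict, ns: str) -> bytes:
    """Return a SOAP Envelope/Body wrapping one method call as XML bytes."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="utf-8")

# Hypothetical stock-quote call; in a real deployment this XML would be
# POSTed over HTTP to an endpoint discovered via a UDDI registry.
xml_bytes = build_soap_request("GetQuote", {"symbol": "SUNW"},
                               "urn:example:quotes")
print(xml_bytes.decode())
```

The point of the exercise is that the whole message is plain, self-describing XML: any platform that can parse XML and speak HTTP can take part, which is precisely why the vendors agreed on it.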
Here we have open versus proprietary, with the UN- (and Sun-) sponsored ebXML (XML for electronic business) in competition with Microsoft's BizTalk. The other mechanism worthy of mention is RosettaNet.

The Business Benefits of Web Services

In the past, applications (distributed and otherwise) have been used by single organisations and accessed over the corporate network. The arrival of the Internet has enabled companies' systems and their applications to talk to each other directly, for example in the form of a supply chain or a marketplace. Business benefits are summarised by the various vendors of Web services; Sun has a good overview of the current and future benefits here (it's marketing but so, for the moment, are Web services). We think that the benefits are a combination of the benefits of ASPs (of which Web services are a logical evolution) and good development practice. Maybe, with Web services, the Shangri-la of software reuse will finally be achieved!

Deploying Web Services

For developing applications, there is quite a lot of material out there on vendors' specific Web services offerings (particularly Microsoft's); however, there is notably less cross-vendor information. As with other areas of application development, patterns provide a means to familiarise yourself with how to solve specific problems.

Issues with Web Services

Web services are still new and hence the issues are still to be fleshed out. For the moment they can be summarised as:

* Complexity: building Web services applications is not easier than building traditional applications, for two reasons. First, it relies on an understanding of how to build component-based applications. This is the masterclass of object-oriented development: while many can write in Java or VB, few understand the key principles of good design for component-based applications. Secondly, building an application on Web services could be likened to building a house on shifting sands.
There will be many out there that claim to be delivering Web services, and few that eventually turn out to be reliable.

* Security: it's that old applications-on-the-Internet thing. The Internet is an insecure environment and, at the moment, there is a lack of will to implement the necessary mechanisms (such as PKIs) to build a layer of security. One security issue is with publishing service interfaces, in that they by nature expose doorways into the systems that provide the services concerned.

The Future of Web Services

Er, hang on: Web services are the future, right? Certainly as far as computer companies are concerned. There are many articles in the press describing the gambles that the vendors are making to be part of this brave, new Web services world. Are they right? Well, we think so. It fits with the trends towards outsourcing, commoditisation and component-based development, to name a few.
Porcupine Tree Boston 22 June 2002
2005-05-25
"I know something you don't know, doo-dah, doo-dah..." I admit, I was in a bit of a silly mood as I walked up the aisle of the plane flying from Boston to Heathrow. The night before had been a treat, a feast of sight and sound, and there was unlikely to be a single person on the flight who would have a clue what I was listening to, or what they were missing. Should I try to explain, no doubt they would smile kindly back in the knowledge they were sitting next to one of those people who takes some things just a little too seriously... After all, it was only a band, surely?
Perhaps they would be right. Perhaps... No, scratch that thought. The Porcupine Tree gig in Boston the night before was worthy of the highest accolades, indeed it deserved to be shouted from the rooftops and church spires across the land. The final, lingering, memorable moment, of five musicians and artists ranged across the stage, their eyes alight with joy and embarrassment as they absorbed the collective delight of the audience during the second standing ovation of the evening, was testament to the whole event. Such an unlikely success - the Berklee Center was assigned seating only, balconied more like a cinema than a music venue, hardly conducive to building a rapport with the audience. There was no bar, no loosening of minds, bodies or spirits - not that dancing was an option. The joint billing with Opeth, who had had first bite of the cherry, and the ludicrously early curfew of 11 pm meant that the Tree's set was restricted to an hour and a quarter, leaving little room to manoeuvre. On the upside the sound was good, the stacks of speakers designed to fit with the acoustics of the theatre, a location which prided itself on its musical heritage.
And somehow, everything just came together. By the time eleven o'clock hit and the band attempted to leave the stage for the first time, it had already been a perfect night. Perhaps the desire to stand came as much from being forced to sit through more than an hour of music that was designed for anything but sedentary listening, but nobody would deny the rapturous applause was deserved. Something must have happened offstage, for though the clock on the wall facing the band read a minute past witching (or at least legal licensing) time, Steven Wilson led his players back onto the stage for an unexpected encore. "We've just got time for this," said Steve, his acoustic at the ready, leading into a version of Trains as a sea of death metal and psychedelic rock fans meekly sat back down again. Not for the first time that evening, I considered how Porcupine Tree was a guitar band, with Richard Barbieri's aural canvas, and Gavin Harrison's intricate rhythms serving as a complex click-track for the layers of interplay from the three guitarists standing before them. And the vocal moments, though essential, sometimes appeared as little more than a respite, a sorbet between the courses of guitar. Throughout the gig it had been the same, admittedly because it had concentrated on the heavier, more guitar-filled numbers such as, well, most of In Absentia.
There were a few older songs, including Last Chance to Evacuate Planet Earth and one I had never heard before, but even the so-called slower numbers had their fair share of "rock posturing", as Mr. Wilson would put it. And songs such as Strip The Soul were a take-no-prisoners, full-on barrage of axe-wielding from Steve and John Wesley, the on-tour guitarist who fits in with the other members of the band on stage like he has been playing with them for years (and I dearly hope he will continue to do so) - Steve and Wes were playing off each other so in sync that there appeared to be some kind of telepathic link between the two. It is no criticism of Wes that the best guitar moments came from Steve, as his hands whirled across the frets like dervishes. Being the front man, band founder, virtuoso and musical visionary has its perks, after all.
The sound was impeccable, at least to my untrained ears (Ian on the sound desk was as self-critical as ever). That's not to say it was like listening to a stereo that's not turned up too loud. I felt the bass ripping at the legs of my trousers as I sat; the drums and guitars waged a war of attrition against my senses, with only the occasional solo Barbierism or break of acoustic guitar to soothe my perforated eardrums and stress-fractured brain cells. That's not quite true, of course. There were the gaps between the songs, filled with amusing banter from Steven, who seemed as bemused as the audience by the seating arrangements - or indeed requirements. "You will have to seek alternative ways of expressing yourselves," he said once, using the same line to good effect later to silence a heckler.
From the very start of Porcupine Tree's set, opening as they did with the currently obligatory Blackest Eyes, they set themselves apart from the band that had preceded them. This is not to fault Opeth, who played a thoroughly convincing set of guitar rock. Having said this, Opeth ("most of you will know us more as a death metal band," said the singer) didn't seem entirely comfortable with their new material, it being a little more melodic and mellow than their usual fare. So I am told - I confess to being an Opeth newbie. It felt a little as if Marilyn Manson had been asked to play at a party for underprivileged children - good music, good songs, clear competence, but just a little unsure and stilted. In comparison, the band that followed was almost choreographed in the completeness of its sound, and was relaxed in a way that only people at the peak of their art can be. Heck, maybe the Opeth fans saw it the other way around.
As we left the venue, opinions were divided on whether the enforced seating had helped or hindered the enjoyment of the performance, but one thing was for sure - that it had been the best Porcupine Tree gig that any of the crowd I was with had seen, if not one of the best gigs ever. As it was only my third, I couldn't comment too deeply. Walking back towards the hotel, I remember wondering that if Porcupine Tree are this good now, how much further can they go? I know something you don't know, perhaps, but as they continue to tour and build a US following, I hope this is a situation that doesn't last for much longer. Shout it from the rooftops indeed.
All About: Broadband Communications
2005-05-25
Broadband is as much a state of mind as a technology, defined in terms of what it enables rather than what it is: the transmission of sufficient quantities of information to enable such applications as multimedia streaming (think of using a computer as an interactive TV) or video telephones. Broadband technologies have been around for a long time, but in the past they have been too expensive for the small business or the home user. This is currently changing with the arrival of Digital Subscriber Loop (DSL), which enables cheap, high-speed communications across the last mile of wire between the telephone exchange and the socket on the wall.
Here we look at the DSL family of protocols and how they fit together. Business benefits are tempered with the current difficulties in rolling out DSL in various countries. We look at these and other deployment issues and consider where broadband is heading in the future.
What is Broadband?
OK, we lied. We have seen several definitions of broadband and its poor cousin, narrowband:
- Broadband corresponds to multiple voice channels in a telecommunications circuit, whereas narrowband corresponds to only one.
- Broadband corresponds to a data rate of over 1Mbps.
- Broadband constitutes sufficient bandwidth to permit the transmission of broadband services, i.e. streamed multimedia, videoconferencing and the like.
The third definition may appear a little vacuous, but it is the one we favour because it concentrates on the end rather than the means. It also takes into account the use of the term broadband in other spheres such as the Mobile Internet (for example, the 3G broadband protocol UMTS, which has an initial maximum of 384kbps).
Broadband communications have existed for years, at least for telecommunications providers (telcos) and the large corporations that can afford the extortionate costs. What has changed more recently is the development of a range of protocols known as Digital Subscriber Loop (DSL). The xDSL range (x stands for whatever) enables transmission of very high data rates across the so-called last mile - the pairs of wires that run from local telephone exchanges to homes and offices. Given that most data traffic will be to or from the Internet, we have another definition of broadband:
Broadband constitutes affordable, accessible bandwidth for the transmission of Internet-based broadband services via xDSL, without needing major modifications to existing infrastructure.
xDSL is a range of protocols, each of which is more applicable to certain needs. Most smaller organisations and home users will find Asymmetric DSL (ADSL) the most appropriate. ADSL is asymmetric in that the up channel is smaller than the 512Kbps down channel, a model which fits the standard usage pattern for the Internet, in which more information is generally received than sent. A further strength of ADSL is that it is always on - there is no need to dial up to the Internet.
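The practical difference this makes is easily sketched with some back-of-an-envelope arithmetic. The figures below (a 5MB download, a 56Kbps dial-up line against the 512Kbps ADSL down channel) are purely illustrative:

```python
def transfer_time_seconds(size_mb, rate_kbps):
    """Time to move size_mb megabytes over a link running at rate_kbps kilobits/s."""
    bits = size_mb * 8 * 1000 * 1000  # decimal megabytes, as the telcos count them
    return bits / (rate_kbps * 1000)

# A 5MB download: 56Kbps dial-up vs the 512Kbps ADSL down channel
print(round(transfer_time_seconds(5, 56)))   # about 714 seconds (12 minutes)
print(round(transfer_time_seconds(5, 512)))  # about 78 seconds
```

Add the always-on connection (no dial-up delay at all) and the change in how the Internet gets used follows naturally.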
Business Benefits of Broadband
The technological benefit of xDSL-based broadband may be summarised as low-cost, high-speed, always-on access to the Internet. Given how much of a part today's Internet plays in the lives of most businesses, the positive impacts are linked to the business being able to do its Web-based dealings more cheaply and efficiently.
In addition, broadband access opens the door to a number of new ways to use the Internet for the business. For example:
- If it has the right skills in-house, it may be more appropriate for the business to host its own information rather than relying on third parties such as ISPs.
- Conversely, the increased bandwidth opens the door for the business to make better use of externally hosted applications.
It is also appropriate to mention home use of ADSL. A high-bandwidth connection from the house to the Internet eases the possibility of teleworking (working from home), as corporate systems can be accessed as if the home user were in the office.
Deploying Broadband in the Corporate Environment
So is broadband deployment as simple as making a call to a service provider, and asking them to come and fit a box on the wall? Well, largely, yes.
Issues with Broadband
There are plenty of things wrong with current broadband, not least its availability. Our definition is from the point of view of the end-user and not the telco, which must roll out ADSL equipment to all its local exchanges. Some telcos (for example, British Telecom in the UK) have a reputation for heel-dragging and for playing the system to prevent other providers from installing their own facilities.
ADSL (and cable, for that matter) also have a reputation for non-optimal performance. The down bandwidth is a maximum that is then reduced as more users access the facilities of the local exchange.
Last but not least is security. ADSL connections are always-on in two directions - if you can get out, others can get in. There is a real risk that your computer will be attacked, hacked or otherwise misused (for example, as a base to send spam e-mail).
The Future of Broadband
The first next step for broadband is the completion of its roll-out - this looks likely to take a good couple of years, particularly outside metropolitan areas. Broadband will be remembered not for what it is - after all, it is no more than a high-bandwidth socket in the wall to most - but for what it enables.
All About: Hosted Applications
2005-05-25
In the very old days, software used to be run on huge computers (mainframes) and accessed using 'dumb' terminals as clients - dumb because they did not process anything themselves. Then personal computers arrived and it became possible to run software on the clients. With the arrival of the Internet, it became generally agreed that the best place for large-scale applications was on servers, with clients best dumbed down to act as graphical displays. This oversimplification leads us to the question - what if the Internet is used to provide the connection between the server and the client? If this were the case, applications could be accessed from, and situated, anywhere, potentially managed by a third party. The concept of hosted applications was born.
What are Hosted Applications?
A good place to start with hosted applications is the phrase 'applications over the wire'. Rather than installing and running applications on your own server, you can outsource them to a third party and access them over the Internet. The third party gives you application access in the same way that an Internet Service Provider gives you Internet access. (Note - they don't have to be the same party.) As usual, from this simple definition things can get quite complicated - there are as many types of hosted application as there are ways of hosting and delivering an application over the Internet.
Business Benefits of the Hosted Model
The benefits of hosted applications are a combination of the benefits of outsourcing with the reduced costs of using the Internet as infrastructure. For the former, consider the following:
- time to market - another company can provide an application faster than you can build it
- reduced overheads - it is cheaper for one organisation to manage an application for multiple companies than for each to manage their own
- resource management - maybe you haven't got the facilities to deploy or manage the application anyway
Deploying the Hosted Model in the Corporate Environment
Some basics on deploying hosted applications:
- Be proactive - don't think that the handover of control to a hosted application provider is a simple process.
- Remember security - like other Internet-based applications, security is a primary concern.
- Service levels are essential - be sure you are given (legally) satisfactory guarantees about the service you will be given.
Issues with Hosted Applications
Key to the success of a hosted application is its ability to deliver managed service levels, and key to a company's use of a hosted application is the specification (and guarantee) of service levels appropriate for the business.
The Future of Hosted Applications
The future of hosted applications is intertwined with that of Web services and SOA. Hosted applications are seen by some to be a temporary situation. It is generally agreed that service providers in general are going through various evolutions, and that the worlds of communications, media and the Internet are merging. It looks likely (how do we know? OK, we guessed) that service providers will form into two categories:
- general service providers, who cover all the bases
- specialist service providers, covering such areas as security, application hosting, business services and the like.
For further information and an update on the Cloud Computing view, see here.
All About: eXtensible Markup Language, XML
2005-05-25
XML has its roots in formatting languages used for document layout, typesetting and production. It is said that these languages were too complex, and a simpler version was developed. XML's strength lies in its simplicity: it provides a basis to define all forms of digital data, from simple text and values to voice transmissions, multimedia and 3-dimensional graphics. XML provides a basis for systems to interact and to exchange data of any form. If the Internet is the global network, so XML is fast becoming the global language for information interchange.
Here we describe XML both from a technical and a managerial standpoint. We look at the ways a business can benefit from XML, and how XML-based facilities can be deployed. We consider the downsides of XML and look at how it is likely to evolve both as a language and as a movement.
What is XML?
The eXtensible Markup Language (XML) is a language for representing data. Quite simple, really. It looks a little like HTML, in that it uses tags (like <title>) to structure and format the data. It is simpler than HTML in that it does not define specific tags, but it is more complex in that it requires the tags to be defined. The simplicity of XML is its strength. It can be used for everything from legal documents to comic strips (articles have quite rightly coined XML "the Tupperware of the Internet").
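As a sketch of what this looks like in practice, here is a tiny, entirely made-up XML document being read with Python's standard xml.etree.ElementTree module. The tag names and values are ours, purely for illustration - the whole point of XML is that you define them yourself:

```python
import xml.etree.ElementTree as ET

# A hypothetical invoice fragment; the tags are defined by us, not by XML
doc = """
<invoice number="1234">
  <customer>Acme Ltd</customer>
  <item price="9.99">Widget</item>
  <item price="4.50">Sprocket</item>
</invoice>
"""

root = ET.fromstring(doc)
print(root.get("number"))          # attribute of the root tag: 1234
print(root.find("customer").text)  # text inside a child tag: Acme Ltd

# Because the structure is explicit, processing it is straightforward
total = sum(float(item.get("price")) for item in root.findall("item"))
print(round(total, 2))             # 14.49
```

The same document could as easily be read by a system written in Java or anything else - which is exactly the interchange property described above.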
The Business Benefits of XML
Clearly, as XML is just a language, it has many potential uses. Its strength lies in its standardisation: if more than one party agrees to use XML (and, what is more, agrees on the format of XML to be used), then the two parties can communicate. XML also overcomes many of the limitations of HTML.
Three application areas are exploiting this strength, each with its own benefits to business: these are content management, business-to-business (B2B) e-commerce and Web services.
- Content Management
- B2B E-Commerce
- Web Services
- Business re-engineering
Issues with XML
What could possibly be wrong with XML? After all, it is just a formatting language. The answer lies in Bananarama's first law of IT - "It's not what you do, it's the way that you do it": the risks with XML are not so much with the language itself, rather in how it is used. If nothing else, remember the adage "real programmers can write FORTRAN in any language".
There is an industry fear that simple XML will become SGML as new capabilities and features are added to the language to support more complex constructions and data types. However most of us will not care, as long as the message gets passed. XML is only the messenger.
All About: Tech Resources
2005-05-25
These posts are taken from the (now-defunct) all-about-it.org Web site I set up in 2000, which aimed to provide a resource about new developments in IT. I've uploaded the posts as is - they should be considered as a historical record!
* Wireless Networking
* Broadband Communications
* Web Services
* eXtensible Markup Language - XML
* Open Source
* Hosted Applications
* Content Management
All about: Content Management
2005-05-25
Content Management has been said to be the art of making information accessible. Like all application types, it has gone under several names, including Information Management and Document Management. In these Web-driven days it is concerned mostly with how information is presented to users of the Internet, either externally or internally to the corporation. The bottom line is, if you have document-based information (and there are few companies that do not), you need a strategy to deal with it and there may be applications that can help.
Content Management has hefty overlaps with portals. These are best defined by example: Yahoo! is a portal, as is FT.com. Portals collate and present information and services in a suitable form for their user base - they can be generic, industry-specific or focused on a particular topic. A particular form of portal is the Enterprise Portal - one which provides corporate employees with access to all the information and services that they need to get the job done.
What is Content Management?
Content Management owes its parentage to two converging technology areas, namely document management and the Web. The latter needs no introduction; as for document management, it is fair to say that it is a well-mined seam for those that know it, and a minefield for those that don't. To document management we owe one principle, namely:
Everything is a document
This principle is central to understanding content management. Put it this way: every form of data, from an email or a spreadsheet to an audio file or a banking transaction, can be considered as a document. This principle becomes even more important when we take into account something else inherited from document management, namely XML, which is the ideal packaging mechanism for all these so-called "documents".
More on this later, but for now let's consider what content management is for. Over the past five years, many millions of Web sites have been evolving from simple, text-and-graphics-based informational sites ("brochureware") to complex resources linking many forms of information and enabling a far richer "end-user experience" (if you will). Against this backdrop, the "content" - that is, the text, graphics, audio, video and other data - needs to be managed. It needs to be created, verified, delivered, maintained and bumped off when it has reached its sell-by date.
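That create-verify-deliver-retire lifecycle can be sketched as a simple state machine. The state names and transitions below are our own illustration, not those of any particular product:

```python
# Allowed transitions between lifecycle states for a piece of content
LIFECYCLE = {
    "draft":     {"review"},              # created, awaiting verification
    "review":    {"draft", "published"},  # verified and delivered, or sent back
    "published": {"expired"},             # live and maintained
    "expired":   set(),                   # bumped off; end of the line
}

def advance(state, target):
    """Move a piece of content to a new lifecycle state, if the move is allowed."""
    if target not in LIFECYCLE[state]:
        raise ValueError(f"cannot move {state} -> {target}")
    return target

state = "draft"
for step in ("review", "published", "expired"):
    state = advance(state, step)
print(state)  # expired
```

A content management application is, at heart, this table plus the workflow machinery (approvals, notifications, scheduling) wrapped around it.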
Content management is often linked to portals - which are no more than content farms, from an application perspective. Content management does tie into other application types. For a start, as it is Web-based, it needs to work with application servers, e-commerce engines and other paraphernalia of the Web. Linkage between content management and CRM is inevitable, in the drive to make that "experience" unique for each and every user - and, of course, to log every key-click they might make.
The Business Need for Content Management
The evolution towards richness of content has caused increased costs due to the complexity of the information. It is far easier to manage a few pages of text than a multi-layered, multimedia "experience". Even the simplest of sites has a tendency towards complexity over time. Content management enables content to be stored, managed and maintained appropriately. It also permits the process of content development to be controlled. The litmus test for a content management application is simple - can you use it to recreate your web site as it was on an exact day six months ago? If you want to know why you would need such a facility, just wait six months.
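Passing that litmus test comes down to versioning: keep every version of every page together with the date it went live, and the site as of any given day falls out naturally. A minimal sketch follows - our own illustration, not any particular product's API:

```python
from datetime import date

class ContentStore:
    """Keeps every published version of every page, keyed by go-live date."""

    def __init__(self):
        self._versions = {}  # page path -> sorted list of (date, content)

    def publish(self, path, content, on):
        self._versions.setdefault(path, []).append((on, content))
        self._versions[path].sort()

    def as_of(self, path, on):
        """Return the content of a page as it stood on a given date."""
        current = None
        for published, content in self._versions.get(path, []):
            if published <= on:
                current = content
        return current

store = ContentStore()
store.publish("/index.html", "Welcome v1", date(2005, 1, 1))
store.publish("/index.html", "Welcome v2", date(2005, 4, 1))
print(store.as_of("/index.html", date(2005, 2, 15)))  # Welcome v1
```

Real products add workflow, approval and rollback on top, but the recreate-the-site-as-it-was capability rests on exactly this kind of dated version history.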
Content management can be thought of as a springboard. It is not entirely necessary to manage content in a structured fashion, or to use tools to automate it. However, the successful use of content management facilities will enable organisations to do more with less, to manage more information and deliver it more reliably than otherwise. Enterprise-scale content management applications can be expensive, hence commitment is required from the top not only to cover the costs of the products, but also to implement the necessary processes to enable their benefits to be realised.
Deploying Content Management Applications
The deployment of a content management application should be considered as an integrated part of a company's strategy for using the Web. Content management is as much about process as product. Not only is there the content development and delivery process to think about, but also the other processes (aka workflows) of the organisation will be impacted, in particular the customer-facing processes such as marketing, sales and support.
Issues with Content Management
Content management is a relatively mature, and hence stable, market and suffers less from teething problems than a number of other application areas. Issues with content management tend to come more from the way it is implemented - implementing the wrong process, or failing to integrate with external systems and workflows, can cause more problems than it solves.
One issue concerns the distribution of content. Content distribution networks relieve the pressure on web sites by offering alternative locations for the content. This can solve difficulties of accessing content from specific geographies, not to mention easing the load on a company's "core" web site.
The Future of Content Management
Content management may have the basic model correct, but it must adapt to fit with new technologies and new business models as they come on stream. It will undoubtedly be impacted by the arrival of broadband technologies as these enable new forms of content (such as streamed audio and video) to be delivered. Web services and hosted applications will also impact on content management, not so much in its principles but in the way it is implemented.
All About: Open Source
2005-05-25
Open source software has been touted as a revolution in software development, striking fear into the major computer companies and glee into individuals who feel such companies have had their own way for far too long. Open source involves software being developed using a community model (for community, read potentially anybody with an Internet connection), and the resulting packages are released for free - that's "free as in free speech, not free beer", as one open source project developer has put it.
Here we look at what open source is, and the philosophy behind the development. We consider the business benefits of open source from a development and a deployment standpoint. We examine where to start for open source development, and how the most popular packages - Linux and Apache - can be deployed. We look at the downsides of open source, and how the packages and the philosophy might evolve in the future.
What is Open Source
Open source is plagued by two conflicting messages from the IT industry: evangelical zeal on one side, FUD (fear, uncertainty and doubt) on the other. Certain open source packages have stolen much of the limelight, in particular:
- Linux, the Unix-like operating system
- Apache, the Web server package
There are other packages that you may be using without realising it, such as sendmail for mail forwarding. But what is open source? Open source is free software, free to be taken, used and modified. Cynics argue that open source software is anything but free - on this, more later - but it remains true that it is available at no charge, for example freely downloadable from the Internet. You can buy open source - most computer shops stock shrink-wrapped distributions of Linux. However, you pay for packaging and the additional facilities that are bundled. Even this cost is borne only once, compared to a per-licence cost for commercial packages. Open source is kept free by licensing arrangements such as the GNU General Public License - not an interesting read; in summary, it protects against someone taking the source and using it commercially in a way that hinders its open status.
In addition, it is possible for anybody to get their hands on the source, that is the programming code for the software. Nothing can be hidden from prying eyes, with the result that its developers are more likely to pay attention to its quality. This is contentious, but would appear to be borne out by open source packages in common use.
Business Benefits of Open Source
Open source gives us software. This is not a glib remark. Open source software packages should be seen as part of the software catalogue available to companies large and small. Hence the business benefits should be taken as being the same as for any other package - that is, it depends on the package. As well as the absence of purchase cost, there are additional benefits, which also depend on the package concerned:
- Support - flagship packages Linux and Apache are well supported, in that companies like IBM will sell support contracts for them. Other packages often have a good following out there, with a community of engineers whose purpose it is to improve the quality of the package in question. Remember, it goes with the philosophy.
- Stability - as there is no vast corporation out there whose revenues depend on you buying upgrades, you can have more certainty that the package you run now will still be available in two years' time.
- Initial costs - may be lower for open source. Remember, however, that the purchase cost is only one element of the lifetime costs of any software package. You should factor in costs of deployment, integration and ongoing management for any software implementation, open or otherwise.
- Perceptions - are important, because some software suppliers hotly contest the benefits of open source. Remember open source packages can be as risky, or otherwise, as many other packages in the catalogue.
Developing Open Source
Fundamentally, open source is a collaborative model for software development. It involves companies, academic institutions and individuals working together on software projects. Indeed, development of a specific software package is open to just about anyone that wants to take part. This might sound like anarchy, but in fact it is proving to be a surprisingly successful model - surprising, at least, to the cynics who feel that the only way to develop software is through structured processes and corporate hierarchies of staff. (Note that some open source projects do follow structured development processes, and why not.)
Deploying Open Source in the Corporate Environment
Use of an open source package does not have to be a strategic decision; however, it may be necessary to convince members of the management that the risks are no different to any other deployment.
Deploying open source need not be so different to rolling out other software products. Here we focus on two specific open source products - Linux and Apache. Despite being ported to many hardware platforms large and small, Linux is attracting most attention as a server OS. There is a short article at PlanetIT entitled "What Does It Really Cost To Adopt Linux?" which provides a good overview of the issues. Linux is progressing up the processor chain - for example, it is now supported as a mainframe OS by IBM. Linux has also been ported to the desktop as well as embedded devices. And, of course, Apache runs on Linux. Finally, it is important to many companies to have a support contract in place. IBM offers services for Linux and Apache, as does LinuxCare.
Issues with Open Source
Dare we mention any issues, and face the wrath of the open source community? Heck, yes. Let's put it this way - if open source is a bazaar, would you trust every stallholder? Uh-huh. There are flaky open source packages out there, packages with minimal or no support, packages that end up costing more than they benefit.
The best way to counter this is to treat it like the bazaar it is. Talk to people in the open source community, post messages to the message boards, ask the questions. This will serve a number of purposes:
- It will get you in the habit.
- You will find that answers are forthcoming (and if not, that can be proof that the open source model is flawed).
- The answers will tell you what you need to know.
The Future of Open Source
According to some pundits, open source is a revolution, a world-shaking example of how online communities will one day rule the roost. Let's not beat about the bush - the hackers have Microsoft in their sights. Hmmm, we shall see. In the meantime the products themselves will keep rolling off the production lines, taking their lead from what people are asking for (such as gnutella) rather than what the vendors would like. If you want our opinion, that has to be a good thing.
IT Analysis In A Cold Climate
2005-05-25
This article was originally written for Brand Illumination in 2002.
Even in the best of times, IT analyst relations can be something of a black art. Analysts can prove inaccessible, uncompromising and fickle; attendance at events is never fully guaranteed; furthermore, it can be difficult to see the tangible results of briefings or other analyst relations exercises.
In these less than favorable market conditions, it becomes even more important to target the right analysts with the right information. To ensure that today's limited resources are used wisely, there are a number of considerations that should be taken into account.
Target and Nurture Analyst Relations
All analysts are different. Fact. For a start, an analyst firm will tend to focus on specific areas of competence, either horizontal (by technology type) or vertical (by skill set). Even the largest firms differentiate themselves, for example on depth and quality of research compared to hands-on implementation experience. Within analyst companies, too, the types of analyst will vary. Essentially there are two camps: market analysts, who consider the position and perception of your company and its products, and product analysts, who are more interested in ensuring you can deliver on your promises against real world needs.
Resources have never been infinite, either in time or monetary terms. In the current climate, it becomes even more important to ensure that you are targeting both the right companies and the right analysts within them. To ensure you have the correct mix, consider profiling analysts based not only on experience of a technology or market segment, but also on such criteria as perceived influence and skill set.
Make Briefings Worth It
At the best of times, briefings can be dull for analysts as much as for vendors. This is often down to the fact that briefing sessions are inappropriate, badly planned or conducted. Just as it is worth targeting the correct analysts, so a correctly planned and structured briefing can provide the best value to all involved. Consider the following:
Only use source information that is relevant to the briefing at hand. Don't waste anyone's time (analysts' or your own) in briefings by slogging through irrelevant presentations that have been picked off the shelf.
Customize your message before the briefing. Understand why an analyst wants to attend a given briefing, and make clear why the briefing is taking place. "Because the VP of product marketing is here" is less relevant than "because we have restructured our product line".
Ask the analyst what he or she wants to get out of the briefing at the beginning, or preferably before it.
Analysts Have Bills Too
Recognize that IT analyst firms are currently facing the same miserable market conditions as the rest of the industry. Of course, it is important that you achieve your agenda rather than theirs. However, if you are able to provide an analyst and his or her company with timely, tangible value, the analyst is more likely to have something to say about your company and your products.
In short, take the time to understand research programs and assignments of the analyst firms you have in your sights, and use this information to make the correct people available to support the analysts that you meet. For example, if a firm is conducting a market analysis study, there is little point in providing a technical expert at a briefing; similarly, a brand manager will be of only limited use if a product comparison is taking place. Either of these situations may lead to the worst-case scenario of not being mentioned at all.
Choose Bandwagons Wisely
In the glory days before the IT bubble burst, things seemed so simple. In the past, and based on their researches, the largest analyst houses have been able to give credible views (to an extent) on what is hot and what is not. Today, there continue to be several clearly hot topics - for example, storage virtualization, Web services or mobile technologies. What is less clear is when, or even whether, the market will bite on these juicy worms.
Currently, even the majors are as much in the dark as everyone else about which way the market will go: not just up or down, but into which specific area. The result is a pervading lack of confidence - today's predictions are couched in caveats that can only serve to undermine their credibility. It may well be that the current hot topics will already be cooling off by the time the industry starts to pick up momentum, so be sure that your marketing dollars are carefully allocated.
In conclusion, the entire IT industry is currently riding out a storm and we all have little choice but to sit tight. However, most companies do not have the luxury of doing nothing until the industry picks up. When forced to make limited marketing dollars stretch as far as possible, it becomes more important than ever to ensure that the right message is getting across, and that the correct people are listening.
All About: Component Provisioning
2005-05-25
Component Provisioning has its origins in software reuse, which was one of the original perceived benefits of Object Orientation (OO) and Component Based Development (CBD). Time has shown that reuse does not come for free: to be successful, reuse needs to be taken into account across the entire development process. Necessary mechanisms include:
- A comprehensive, scalable and adaptable application architecture to support component deployment
- Agreed definitions of the structure, interfaces and content of a component
- Team structures that recognize the difference between component development and solution development
- Appropriate management and reuse processes, including component supply, management and consumption, librarianship and certification mechanisms
The implementation of reuse facilities introduces the possibility not only of making existing components available to other projects, but also of buying in components from outside. Reasons of economics, not to mention the use of component-ready application servers and the evolution of Web Services, are driving the trend towards component provisioning. Gartner Group estimates that component purchases will increase by 40% per year, to $2.7 billion by 2004. Whether this estimate survives the downturn in IT spending is questionable; however, there is undoubted interest in component provisioning from all quarters of IT.
The golden rules of reuse become even more important, in particular the need for standardized, defined component interfaces. One such standard is the Reusable Component Specification from Component Source. This covers XML definitions for:
- Technology: the platforms to which the component conforms
- Functionality: the interfaces and their constraints
- Distribution: how the component is packaged, distributed and deployed
- Commerce: the licensing, contractual information and product reviews
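As a sketch of how such a descriptor might be consumed, consider the snippet below. The element and attribute names are invented for illustration - this is not the actual Reusable Component Specification schema - but it shows the idea of an XML record covering the four categories, which a catalogue could filter on:

```python
# Hypothetical component descriptor covering the four categories above.
# Element and attribute names are illustrative, not the real schema.
import xml.etree.ElementTree as ET

descriptor = """
<component name="CreditCheck" version="1.2">
  <technology platform="J2EE"/>
  <functionality>
    <interface name="ICreditCheck" operation="score"/>
  </functionality>
  <distribution package="jar" deployment="app-server"/>
  <commerce license="pay-per-use"/>
</component>
"""

root = ET.fromstring(descriptor)

# A consumer could filter a component catalogue on platform and terms:
platform = root.find("technology").get("platform")
license_model = root.find("commerce").get("license")
print(platform, license_model)
```

The point is not the particular tags, but that a machine-readable descriptor lets buyers match components against their architecture and licensing constraints before ever downloading any code.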
An additional prerequisite is the existence of a solid, industry-standard infrastructure, both for processing and data transfers. According to the CBDi Forum, such facilities have only become available this year, with the arrival of Windows 2000 for COM and J2EE for Java-based components. XML is becoming the de facto standard for data interchange, further greasing the wheels of component provisioning.
According to the CBDi Forum, components available on the open market should be:
- Widely available
- Adaptable to many situations
- Easy to integrate
- Supporting de facto standards
- Environment-neutral
- Process-independent
- Autonomous
- Secure
- Of appropriate granularity
- Easy to manage
- Cost-effective
Component vendors include component resellers (such as Component Source), package vendors (SAP, Siebel), ISVs and other software companies (Microsoft, IBM San Francisco). Package vendors are already componentising their offerings, and it is only a matter of time before they start to sell individual components. Component provisioning benefits package vendors in two ways: not only can vendors sell individual components, but they can also incorporate other components to augment their own applications. This results in best-of-breed applications that are better able to meet requirements and that offer more flexibility for the future.
As the move towards more off-the-shelf components continues, the need for standardized architectures grows, such that the selection of a component has minimal or no impact on the architecture. It is not just a case of ensuring application elements can communicate at a technical level; they must also work together at a business level. One such architecture is the Business Object Component Architecture, BOCA, an OMG initiative. The Web services architecture, in which applications are distributed across the Internet, is a new paradigm that brings architectural constraints of its own. These are still being determined, but are likely to involve components that are deployed as Web services, such that they can be discovered, licensed and accessed from across the Web.
Finally, provision of components requires new types of procurement and delivery channels. A variety of licensing models are being investigated, including once-only payments, pay-per-use, monthly subscriptions and risk/benefit-sharing.
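The choice between these models is largely a break-even calculation. As a back-of-envelope sketch (all prices invented for illustration), the cheapest model depends on how long, and how heavily, a component will be used:

```python
# Illustrative cost curves for three of the licensing models mentioned
# above. All prices are invented; the point is the break-even behaviour.

def once_only(months, price=500.0):
    # One up-front payment, however long the component is used.
    return price

def monthly(months, fee=25.0):
    # A flat subscription fee per month.
    return fee * months

def pay_per_use(months, calls_per_month=1000, per_call=0.01):
    # Pay per invocation: cost scales with actual usage.
    return calls_per_month * months * per_call

for months in (6, 24, 60):
    costs = {
        "once-only": once_only(months),
        "monthly": monthly(months),
        "pay-per-use": pay_per_use(months),
    }
    cheapest = min(costs, key=costs.get)
    print(months, "months:", cheapest)
```

With these particular numbers, pay-per-use wins for short or light usage and the once-only purchase wins over the long haul - which is exactly why vendors are experimenting with several models at once.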
Where is Component Provisioning to go in the future? Developments would indicate that once component provisioning has broadened outwards, it will move upwards into the domains of business process modelling. Already, B2B information flows are being defined and ratified under the banners of XML.org and BizTalk.org, and the producers of such facilities have found it difficult to supply B2B without including a capacity for recognising and deploying business processes. Component provisioning may well end up facing the same challenges.
Thought for the day
2005-05-26
"Never put down to malice what can be ascribed to stupidity."
I think it was Mark Twain who was first quoted as saying words to this effect, and if he didn't he should have done. I heard an alternative way of expressing the same on the radio this morning; a commentator said something like, "some people ascribe to conspiracy theories, I ascribe more to cockup theories". These two principles relate to a third, expressed to me by Phil Tee (who recently founded Njini, having set up Micromuse and Riversoft, but I digress): "the world is rarely as complicated as you think it is."
For some reason it is a human trait to expect the worst of others, and to react with suspicion or look for hidden agendas. Most agendas aren't that well hidden, truth be told, so we could all save a lot of time getting on with solving our own problems rather than inventing new ones. If we apply the law of Tee as well, we might all achieve our objectives a little faster.
Wireless in Toronto
2005-05-26
I was in Toronto back in November. A phenomenon that I have never fully been able to understand is the prolific nature of wireless networking in Toronto, coupled with the relative absence of security. Wherever I would go in the city, there would be, it seems, some kind soul that had left their home wireless network open, to enable me to access the wider Internet. I don't feel too guilty about this - after all, it was Microsoft wireless networking that would stumble across the open port - but once I realised that this was happening, I found it both useful and intriguing. Looking more deeply, one thing I noticed was that many of the services were blocked to outbound SMTP access. This was a consistent finding. Even from the 22nd floor vantage point of my hotel room (the Days Inn on Carlton Street), there seemed to be no shortage of helpful souls in the apartment block opposite that were only too willing to spare a little bandwidth for my humble needs. And it seems that there is some kind of cooperative wireless understanding developing among the wireless illuminati of Toronto. If this is the case, then I am all for it.
Toronto was cold, gloomy, and rainy. My usual determination not to use public transport was beaten down by the force of the rain, so I waved for a taxi. To my heaven-sent surprise, the taxi driver was local celebrity Mr Geography. He opened with the question, "Answer me this, and I will give you your taxi ride for free - what is the country with the smallest land area in Africa?" I answered Lesotho but he said, "No, it is the Gambia." He proceeded to tell me exactly how different the land mass of Lesotho was, compared to the land mass of the Gambia. He gave me another try, and then another, before introducing himself. It turns out he's been on TV, and he was even in the most recent edition of Maclean's magazine, ostensibly the Time magazine of Canada. What a way to brighten up an otherwise dismal day.
Making IT Matter
2005-05-26
I keep having the same conversations. At trade shows, on-site with technology customers and integrators, in workshops and at analyst briefings, the conversations would conclude that:
- Traditional use of Information Technology (IT) has fundamental flaws
- In the future, IT is moving towards a service-based model
- Businesses need to take increasing control of IT
- The problem is more political than technological
- Nobody really knows or understands how to move things forward
This has made for some very interesting, informative and interactive conversations, which have often worked best with a beer in hand. I understand that the same conversations are happening elsewhere, and wherever I listen, I hear similar stories. However, it remains all talk.
Meanwhile, a couple of years ago the IT market crashed. Oops. It may be coming out of the downturn now, but there is every danger that the new IT industry that emerges is the same as the old, a market which has failed to learn from its own mistakes. If there is one good thing that should come out of the crash, it is the opportunity to spend time to understand not what went wrong, but how things could be done better the next time around.
Unfortunately, in the IT industry the precedents do not offer much hope for the future. Never in history have people and organisations failed so obviously and so often to learn from the lessons of the immediate past, preferring to push forward to the next potential solution rather than dwell on current failures. This approach remains the norm in many IT vendor organisations, whose business models are still rooted in the past. The concept of a silver bullet was first exposed as a fraud back in the seventies, and despite the increasingly cynical veneer that many like to portray, as an industry we are still hoping for a cure-all. From the vendor perspective, such a salve is referred to as a killer app: something everyone wants to buy. In other words, success will be judged by sales, and not by problems solved.
Hope and help is at hand; indeed, many companies are already taking the initiative. There is a great deal of will at the highest levels in the largest vendor companies, and there are increasingly strong demands from end-user organisations. Will is one thing, but a way is quite another: there is a lack of understanding of how to get there, and there remain huge hurdles to be overcome. There are customers that get it; however, there are plenty of customers and plenty of people inside the IT organisations that do not. This report is designed to address this issue and provide a blueprint that organisations can work to.
This blueprint boils down to the delivery of IT as a service to the business. This is not a technological goal, more a pragmatic one; it is also not an easy goal. However, as we shall see in this report, it is the goal: it is what IT is working towards, and it is what business is demanding. By focusing on this goal and working back from the answer, this report gives you the tools and approach to help you get there.
This report contains the following sections:
IT is Dead. This introductory section describes how we have got things wrong to date, particularly given that we are trying to achieve whole-order-of-magnitude changes, for example the agile business. This chapter is kept short; we don't want to rub our own noses in it, do we?
Long Live IT as a Service. We can't do without it, but what do we actually want? Let's work this out, then we can work back from the answer, which is a service-based IT architecture with all the trimmings.
The Barriers to IT as a Service. If we're all agreed on what we want, there must be reasons why we haven't been able to get there in the past. This is where things get interesting by understanding the constraints and pitfalls, we stand a chance of overcoming them.
Preparing for IT as a Service. To overcome the barriers, there are some measures we must implement before we can start considering how to manage or deploy service-based IT. Not least the business has to understand itself, and the IT department needs to fundamentally restructure.
Managing IT as a Service. The IT manager is given a kingly role in this book, as the representative of the business and the customer of technology. This chapter explains why, by considering the roles and responsibilities and the supporting tools and frameworks. It then asks the question: how well are you doing?
Delivering IT as a Service. IT can be delivered as a service on a project-by-project basis. If everything else is in place, each project can move through relatively traditional steps while ensuring that the basic service criteria are included in the mix.
Selling IT as a Service. IT vendors need all the help they can get, not least to match up to the new breed of service-savvy customers. No-nonsense guidelines explain the service solution value chain and how vendors can make their own evolution towards commoditisation.
Hope for the Future. This concludes by summarising the requirements of any service-based organisation and ranking them into maturity levels. There is no right answer, other than progress.
Annex Scorecards. These are provided to help the business, the IT department, the integrator and the IT vendor gauge where they are along the evolutionary path. The scorecards also serve to rank IT suppliers, so that IT customers can make more informed choices about who they should be depending on.
Note that this report is focused on making things work better today. It is not a collection of nice-to-have theories or unmappable best practices of one organisation. Throughout, this report incorporates a wide variety of case studies and examples, covering both where things have worked and where they have not, so we can draw conclusions from the successes and failures of the past.
It is worth mentioning what this does not cover. It does not delve into the potential new business models advocated by current best-practice business thinking, such as the agile enterprise so loved by the Harvard Business Review. Rather, it presents an approach to delivering a concrete framework of technology that can work for any business, agile or otherwise. A castle built on sand cannot stand: this report aims to resolve the issue of the sand, rather than focusing too strongly on the castle.
Note that this report refers to technology and IT to describe a combination of things, which some may disagree fall under the same roof. For example, telecommunications and IT are different routes up the same mountain, solving the same problem, as illustrated by metropolitan Ethernet and VoIP.
I shall say this only once
2005-05-26
Well. Having crawled my directories for articles, imported old blog lines, linked the links, chosen the template, uploaded the PDFs, tested the pages, and otherwise got things up and running, I find that I have a web site. All feedback welcome.
770’s, OQO’s, batteries and voice
2005-05-27
Everyone else seems to have been writing about the Nokia 770, so I thought I would join in. Looks like a sexy device; granted, battery life may be an issue, but that's true for any portable computing platform with a decent colour screen. I remember two years ago I said I would never want to replace my PalmPilot with a colour PDA, as I would never remember to plug it in. Well, here I am with my Dell Axim, my Archos jukebox and of course my trusty Vaio, all of which would think five hours of battery life was a good run. Whatever happened to those mats that you could just put a device on (suitably modified, of course) and it would charge without cables? One day, every table should have one.
Of course there have been some quite significant advances in battery technology, but thus far they have eluded the mainstream. A battery was recently announced that could charge in a matter of minutes, which immediately made me think of the possibility of charging stations in coffee bars and on high streets. See you at the charging station at 3 - well, perhaps not. The other very interesting technology is fuel cells, which (as far as I can tell) convert naturally available products into carbon dioxide and water, generating a trickle of electrical current as a by-product. A fascinating array of companies are aligning to manufacture and deliver the fuel cell supply chain: all the usual suspects, plus companies like Bic, the Biro and razor company, which also has a sizeable chunk of the global market share in disposable lighters and lighter fuel canisters.
The fear is that, to power one of these newfangled, high-resolution colour devices, we'll all have to walk around with huge cylinders on our backs. You'd only have to add a virtual reality headset to get that frogman effect. Perhaps some smart company will bring out flipper-like shoe attachments, which generate additional electricity by capturing the sound of all that slapping.
As for those screens, I heard on the radio yesterday that the eggheads can now grow nanotubes as an array, apply a connectivity layer and a phosphorescent layer, et voilà, you have a low-power, very thin screen. Prototypes are currently running at a few inches across; clearly production is a way away, but it's looking good.
So while battery capacity may not be improving very fast, charge cycle times are set to improve drastically, and fuel cell technologies open up a wealth of new options (I would not be against carrying a can of methanol in my backpack, though it might cause some disputes at the airline check-in desk). Perhaps somebody could invent a fuel cell that runs on vodka. Recognising that alcohol is essentially a natural product, people might start wanting to create their own fuel. Indeed, it would be awfully green; however, it would also be totally illegal in many countries. The situation might arise where, in tumbledown barns at the ends of rutted farm tracks, secret stills would be producing a village's supply of fuel cell fuel. But we digress :-)
Meanwhile, spare a thought for the OQO. This can claim to be the first product to market as a fully fledged computer with a jacket-pocket form factor. It is shipped with Windows XP, but uses commodity hardware, so there would be nothing to stop it running Linux. One application of these devices is voice recognition, which I firmly believe has still to have its day in the spotlight. There are several reasons for this, not least the number of people I know who are developing symptoms of RSI, or back problems, due to spending too many years typing at a computer. The technology is now here, in that the latest version of DragonDictate is perfectly adequate for transcribing one's voice into words. I am shocked and stunned to see that neither the OQO nor the 770 has a microphone socket, thus preventing either from being a perfectly serviceable and totally appropriate voice recognition device. Incidentally, there is a version of IBM's ViaVoice for Linux, but this needs a bit of work in more ways than one.
Incidentally, this entry is being voice-dictated as I cruise down the M4. My computer is on the passenger seat, and I glance at it no more than I look at the speedometer or the clock. Fortunately in this sun, the Vaio has one of the best screens there is; equally fortunately, I have a 12-volt adaptor so my battery life is protected. Phew.
Terse...
2005-05-28
... is a fine word and should be used more often. As is flange.
I wonder whether, statistically, there is a tendency for people who speak in short, clipped sentences to favour communication by Blackberry and other Smartphone types.
(Sorry for the short message, this was sent by PDA :-) )
Let the puns begin…
2005-05-28
Our good friends at The Register mention the launch of FLOSS - standing for Free/Libre/Open Source Software. FLOSS is an EU initiative to promote understanding of Open Source. Laudable perhaps, but the same cannot be said for the acronym. Quote: "FLOSS is a global phenomenon, particularly relevant in developing countries, and thus more knowledge on FLOSS outside Europe is needed." Absolutely - but for those with less knowledge of IT it could be an ill-scoped initiative for dental care in the third world.
All the same, it's good to see they're cottoning on, and that they have the bit between their teeth.
Sudoku
2005-05-30
OK. It's addictive. So there. More soon.
Community Services
2005-05-30
I'm still quite a newcomer to this sort of thing so I'm commenting on the blog of a usual suspect - Jonathan Schwartz at Sun. I intend to grow my horizons as I get the hang of things.
"As will become more obvious by the day, you can compete against a product, but it's close to impossible to compete against a community," says the man here - largely in reference to Netbeans vs. Eclipse. This may be correct if "you" is an individual, but in large part most of the "you"'s will be communities themselves, variously supported by vendors and other bodies. Competition occurs between communities and within communities, so I wouldn't be rushing to prepare for the death of the individual competitor just yet, as it might not exist.
Also, and just an aside, if Jonathan's so into the new thinking wrt communities, blogs, "the conversation" and the like, why doesn't he allow comments on his blogs? Looks a bit singular to me.
Voicing a concern
2005-05-31
What a missed opportunity. I'm so disappointed I could scream... silently of course. What am I talking about - voice recognition.
Oh, that one, they say. You're into that, they say. True enough - but on and off, truth be told. When I wrote the portability article at The Reg a few months ago, I was struck by the revelation that voice rec was just waiting for the right form factor of computer. Now, examples of said form factor are starting to roll off the production lines, and very sexy they all look too, with one HUGE proviso - they mostly lack a microphone socket. Perhaps I could hope that connectivity such as Bluetooth builds in the idea that wireless mikes are the way to go. However, I'm assured by people in the know that the sound quality over Bluetooth is insufficient for voice rec. Wired mikes are the way forward, in the short term at least, and only a minority of the devices support this - notably the Antelope (available now) and the Tiqit (later this year). The latter's processor is a bit poor, but an older version of Dragon Dictate should work just fine*. As for the others - the Oqo, Vaio and Flipstart - just don't go there.
Ahm tellin' ya, voice rec's the way. The time savings possible from being able to dictate any time, any place etc, together with the improved QOL (quality of life, lummox) from not being slumped in front of a glowing screen, will more than make up for the still-dodgy battery life on these devices. It'll probably transform blogging as well - watch this space!
* Obtaining an older version is not as easy as it sounds, of course. Having purchased the latest version and found it didn't work on my PictureBook PCG-C1X (now retired to the cupboard, incidentally), I resorted to eDonkey to download an older version. To this day I'm still not sure I did anything illegal.
This post was - doh! - typed.
I feel slightly sick...
2005-05-31
I know that much of life is a facade, created by cold-hearted businessmen with an eye only on shareholder value. All the same, a small part of me still wanted to believe there existed such a thing as "music of rebellion", even if the bands delivering it up were signed to the major corporates.
So, when I stumbled across the web site of Grabow (tagline: "putting showbiz into your biz") and found that many so-called rock heroes were available for office parties (albeit big ones) and corporate events, I was just a little bit uncomfortable with the whole thing. Nothing wrong with the model, I'm sure the Grabows lay on a great show - but with just about everybody represented I just feel slightly sad. It does conjure up a raft of images however - fancy Slipknot playing at the shower curtain ring sales summit (John Candy RIP), or Eminem at the oil seed suppliers convention? Perhaps if I started saving now, I could get AC/DC to play my retirement bash in 20 years' time. Maybe we could share the occasion.
I shouldn't be so surprised - a few months ago I had the dubious honour of seeing Carlos Santana play at a Cisco event. Frankly he didn't look like he was enjoying it much - a bit like if his kids had disturbed him in the middle of his favourite TV show and asked if he would play a few songs. Might not be too far from the truth.
June 2005
This section contains posts from June 2005.
Video module for Skype
2005-06-01
Dialcom has announced a video extension to Skype, which I've just run up and it looks pretty good - full screen mode and everything. Webcams do have the issue that unless you're TV trained, it always looks like you're not paying attention. All the same it smacks of a killer app to me.
If you haven’t heard of Google Maps yet…
2005-06-01
Check the O'Reilly blog for an exposé of the potential of Google Maps when linked to Web services. We haven't even scratched the surface.
Silicon: DRM in business
2005-06-02
My latest Silicon.com article is up, asking whether businesses can benefit from DRM. In short, the answer is - yes.
Alphadoku
2005-06-08
I've created a puzzle I'm calling Alphadoku - ambitiously, I've also registered the domain. I know this is based on the Sudoku principle; if anyone's seen anything similar I'd appreciate knowing, but I haven't seen anything like it myself.
See here for more information.
First Phish
2005-06-08
I've just received my first phishing email to my work account - a pain as it indicates my name's now on some spamtankerous list. I've pasted it below: it gives us one more reason why people should learn to read and write properly, as anyone could then tell there is no way it would come with a corporate seal of approval. "thank you for co-operation" indeed.
_Dear HSBC Bank member ,
Technical services of the HSBC bank are carrying out a planned software upgrade. We earnestly ask you to visit the following link to start the procedure customers data confirmation.
https://www.ebank.us.hsbc.com/servlet/com.hsbc.us.ibank.signon.Logon
We present our apologies and thank you for co-operation.
Please do not answer to this email - follow the instructions given above.
This instruction has been sent to all bank customers and is obligatory to follow._
Paris - Presentations and Pavements
2005-06-08
Just on the Eurostar on the way back from presenting at an IBM event, on Project Portfolio Management. It would be very dangerous for anyone to pretend that portfolio management, programme management or other such topics are somehow new; rather, what we are seeing is that the tools are starting to support the concepts in a more comprehensive way than they have in the past. The hard bit, as Lawrence Webb would say, remains in the human layer.
Today, I've mostly spent coming home, making my way to the Gare du Nord via various cafés, stopping for espresso and email before moving on. I walked through the Marais and to my favourite corner of Paris, the Place des Vosges, which somehow maintains its sun-dappled serenity despite the noise of scooters in the background. The meaning of mobility to me is to be able to keep up with the job (white-collar in my case) from a Parisian street as easily as from an office desk. We're still a long way from that, but perhaps that's a good thing - imagine the chaos if everybody started taking their work to the public parks.
Reality check: three hours on the Eurostar and I'm just about caught up with my email. One can have too much accessibility.
On production
2005-06-08
Last week I was in Monaco at a Dell Product Showcase. I was particularly interested to see the multimedia applications and the games; we don't get as much exposure to this side of things as I'd like, and not just for gadget value. Every time I talk to people in the production industries (gaming, film, music, TV and so on), I realise that these people see the creation and distribution of entertainment and information as a universe in itself, like a sphere with a number of exit points. When we talk to the content distributors, not least the service providers, it is clear that they see themselves as the universe, as a sphere with a number of entry points and a limited number of conduits towards the consumer. Not that either is right or wrong, just that the best perspective has to be to see the structure that can be created by both. Some people get how this should look, but many do not, and so they remain inside their spheres.
In the evening we somehow managed to get into the local club, a place called Jimmyz. In the knowledge of the early start the next morning, and having been up since 5, I ducked out about midnight. The next day I discovered that Bono was in the same bar, apparently surrounded by numerous girls and a smaller number of hefty men. The body mass probably just about balanced out.
Linked Out
2005-06-10
When I put the case for Plaxo and LinkedIn on the Register a few months ago, I made the point that the tools that come with these services are worth signing up for even if you never connect. Well, they're getting better. LinkedIn now has a downloadable toolbar for Microsoft Outlook that offers a "grab" button, so you can select a chunk of text from an email which contains name and address information, and it will automatically create an address book entry for you. This should be built in to Outlook, but it's not, so this is a good start.
The LinkedIn upload from Outlook still doesn't work with Firefox by the way. Firefox is a great browser but if it wants to be ubiquitous it'll need to cover all uses - some sites I go to can't support it either, at the moment.
Indebted
2005-06-13
I can do no more than mention this as I don't claim to understand the full picture. $40 billion of debts have been written off with immediate effect, saving an estimated $1.5 billion a year in interest payments. Clearly there are a raft of conditions (which is the bit I don't fully grasp) but the write-off has to be seen as a good thing. What is now required is that us decadent westerners get to grips with what caused the situation in the first place and our role in that, and we also apply some conditions on ourselves to ensure it cannot get to the same state again.
Shop of the present
2005-06-16
A funny thing happened on the way to the Metro RFID shop of the future yesterday. For a number of perfectly valid reasons, as I arrived in Düsseldorf I took a taxi to the only fragment of address I had - namely the above, which I knew was situated in a suburb called Neuss. Before long we pulled up at what appeared to be a hypermarket. "Shop of the future," I thought. "I wonder what's inside." So I paid off the cab and waved him on his way. After a few minutes of walking around, the truth dawned. This *was* a hypermarket, absolutely of the present. All rather surreal - or rather, real, when I was expecting pretend. It wasn't a problem - very kindly, the store manager (I sat in his office like a kid who's lost his mum) sorted me another taxi and I was on my way.
RFID turns out to be a lot more simple, interesting and complex than I previously understood. Simple - it is no more or less than a standardised code that can be attached to any object and thus linked to a piece of data, somewhere. Interesting - from a philosophical standpoint, we have seen a major evolution of our understanding of data with the arrival of XML - again a simple construct but which enables data to know something about itself. RFID extends this concept into the physical world, enabling a wealth of innovation to be built on it. Complex - nobody knows exactly where this takes us, and there will be a number of technological, practical, and even socio-political challenges to be faced along the way. For the moment there is time to consider all of these things as the technology is not quite yet mainstream. RFID tags still cost about 30 cents each, which is prohibitive for many applications, and the scanners and other kit items are a long way from being commodity items.
Retail is currently leading the way, but my guess is that the first opportunities lie in other domains - tagging of tapes and optical disks for more efficient archiving, for example, tagging of fine arts and museum items to simplify inventory taking, maintenance of asset registers by medium and large organisations (as anyone who's had to crawl around under a delivery of 50 tables, label them up and log their asset numbers will understand). Easy-to-access information about poisons, solvents and so on. Ski passes. Recorded delivery and restitution of lost parcels. Military procurement. Drug labelling. The possibilities are endless. The challenges are endless too - not just integration and data cleaning but security, civil rights, the power of the major corporations, fraud, misuse and so on. The important thing at this stage is to be informed.
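At the risk of over-simplifying, the "simple" part of RFID can be sketched in a few lines: the tag carries nothing but a standardised identifier, and everything else lives in a registry keyed by that identifier. The EPC-style codes and records below are entirely invented for illustration:

```python
# A minimal sketch of the RFID idea: the tag itself carries only a
# standardised code; the data lives elsewhere, keyed by that code.
# The identifiers and records here are invented for illustration.

registry = {
    "urn:epc:id:sgtin:0614141.112345.400": {
        "item": "archive tape, batch 7",
        "location": "vault B, shelf 12",
    },
}

def read_tag(epc: str) -> dict:
    """Resolve a scanned tag code to its data record, if known."""
    return registry.get(epc, {"item": "unknown", "location": "unregistered"})

record = read_tag("urn:epc:id:sgtin:0614141.112345.400")
print(record["location"])  # vault B, shelf 12
```

All the "interesting and complex" parts - who runs the registry, who may query it, how stale or wrong data is handled - sit outside this little loop, which is precisely where the technological and socio-political challenges come in.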
The mail must get through
2005-06-23
What perfect days these have been for working from home - perfect, that is, had my ADSL connection been working. Eleven o'clock yesterday morning was the time the deity of all things connected chose to break the link between my home network and the wider world. I have been stumbling along for the past 36 hours in the nether world of GPRS, trying to cope with 9MB emails and voice calls all coming down the same line. I could resort to a good old modem connection, but there is no wall socket in my office, so I'd have to perch in the kitchen - not ideal.
It is quite fascinating that this should happen just as we (at Quocirca) are putting together a report on the importance of email. The psychological state one finds oneself in when one's email is no longer available is akin to losing one's door keys or address book: email has become a prop upon which other functions rest, and without which we start to flounder. Or at least, I do.
Frisson
2005-06-23
Now, there's an excellent word.
September 2005
This section contains posts from September 2005.
Today I shall mostly be...
2005-09-19
Flying to Vancouver. At least, that's how it feels at the moment - a nine-hour flight gives the impression that a whole day is passing; as we near the end of the journey, however, we change our watches and find the whole day is only just starting. It's a sensory illusion - the day, that is, not the flight - entirely constructed in the imagination, bounded by sleep.
In other words, before I waffle on too much, a long-haul flight offers a wonderful opportunity to reflect on the most banal. Day dreaming is positively encouraged, given that there's nothing else to do (the three films have finished, the cards are filled, the email inbox is up to date, etc). Soon, a slice of pizza will arrive and offer a way out of the doldrums, itself a little surreal given that it is only - now - 9 in the morning. Pizza for breakfast, whatever next.
So, it's been a while since my last entry to this blog. Three months, if I'm not mistaken, and with good reason, as all my spare writing energies were engaged in finishing the book. This is now done, and is (I understand) due out in a matter of a few weeks. I'm still in too deep - a bit like decorating a room, I can still see every brush stroke. Already, however, the details (and the painfully long hours) are starting to fade.
Which is good.
Seven Lessons from London
2005-09-24
Several funny things happened on the way back from Vancouver, some better, some worse, so I thought I'd share them as cautionary tales. Ignore them at your peril.
1. Never leave your hotel details on the plane
It was all so simple. Having spent three days in Canada, I would touch down at London Heathrow at 1.30 in the afternoon. At 4PM I was due at the Institute of Directors to give a presentation, so there was enough time to get to a hotel, change, shower and maybe even squeeze in ten minutes of rest. To my delight and surprise, I did sleep on the plane; this was even more surprising given that there were babies everywhere - Air Canada must have given a discount for multiple babies or something, as they were next to me, behind me and across the aisle - but they all slept (that womb-like rumble probably helped) and so did I (the Melatonin probably helped as well). My relief was so great that I failed to register I was no longer in possession of the piece of paper containing my hotel details - a fact I didn't notice until I was on the Heathrow Express to Paddington.
That's OK, I thought to myself, the hotel's just round the corner from Paddington, isn't it?
2. Never trust essential details to memory
I walked round the corner to Lancaster Gate, as I recalled a map which showed the hotel just on the edge of Hyde Park. As I trundled along, my black wheelie flight case meandering behind me like a labrador puppy, I checked my watch: 2.15 PM. Loads of time. And there was the hotel - the Hyde Park Thistle. Perfect.
3. Always call to check the reservation
As I spoke to the lady at the Hyde Park Thistle, I realised I'd made a mistake. There was no record of my booking: no worries, she told me, there was a Jon Collins checked in at the Kensington Park Thistle, just a 15-minute walk across the park. Could she please check with the hotel, I asked, as I didn't want to make the same mistake twice. She tried to call the Kensington Park, to no avail - leaving a call to her own desk ringing, she explained to me the hotel chain's three-ring policy. Never mind: she was confident my hotel was the Kensington Park, and nothing could go wrong.
4. Never trust computers (hotel systems or otherwise)
At the Kensington Park Thistle at 2.45, disaster struck. When the hotel had no trace of me (how inevitable, in hindsight) on their computer, I decided to check my original online booking, so I booted up my computer. Or didn't. The message "Operating system not found" appeared, and a clicking, whirring sound could just be heard, like a tiny man trying to start a motorbike inside my hard drive. This was not exactly what I needed. After twenty minutes or so, the desk clerk had got through to Thistle Central Reservations and had found my booking - replete with booking number - which he scribbled on a piece of paper. Jon Collins - Thistle Euston. With 55 minutes left, there was no way I could get to the hotel before going to the IoD. All my best-laid plans had failed.
5. The tube is faster than the taxi
After heading towards Kensington High Street tube (direct to Green Park, 5 minutes from the IoD) I decided instead to hail a taxi - not least to compose myself and change my shirt, avoiding eye contact with passers-by as I did so. The traffic at Hyde Park Corner was typically heavy for mid-afternoon, and we went nowhere for a goodly while, eventually arriving at Pall Mall just after 4PM. I had called ahead to explain the situation, so everything was fine.
6. Always take a backup with you
It is a reasonably standard experience to arrive at an event and be told that one's presentation has already been installed on the speaker's laptop - to the extent that the last of my worries was whether I would need to bring an electronic copy of my pitch. However, the falling faces as I explained the plight of my laptop told me something different. Now, despite the fact that I tell others on a regular basis to do good backups, I have been known to be less than reliable in my own backup routines. For once in my life, however, this time I had a USB stick in my pocket, and I had synchronised it with my hard drive only a week or so before. Through this double quirk of fate, I was able to put a copy of the presentation onto the other speaker's laptop.
And thus, victory was snatched from the jaws of defeat. Until...
7. Keep a spare shirt
The presentation went well, and a couple of people told me afterwards how relaxed I seemed; I explained how I had believed I was already dead - nothing could possibly get any worse. I had lost a laptop and I felt like I'd been wearing the same clothes for two days - which indeed I had. I was delighted to be informed that the IoD had a shower in the basement, and I freshened up in time for dinner (as an aside, the IoD comes highly recommended - lovely people, excellent service and great value). It all went well until the puddings: we were seated twelve to a table, at round tables with large, square, white tablecloths that hung to the floor. Indeed, they came over the shoes of one poor soul who - and I hasten to add, he was perfectly in control of his faculties - managed to entwine his foot in the tablecloth and fall backwards, pulling an entire load of empty plates and not-so-empty glasses towards him. Fortunately everything stayed on the table; not so fortunately, the glasses of red toppled over and landed in such a way as to send a flurry of merlot droplets towards yours truly. I was not so much drenched as well-spotted, and the shirt I still required for the next day's meetings was now ruined.
Still, it could have been worse. As we left I was asked if I planned to take a taxi.
"No, I think I'll walk," I said.
October 2005
This section contains posts from October 2005.
Rush - Chemistry available for pre-order
2005-10-03
Here's the message that went out on the wires last night:
Just to let you know, the Rush book I have been putting together over the past two years is now finished, all bar the shouting. According to my sources, "Rush - Chemistry" will be going to print Monday week, ready for release at the end of October. It is available for pre-order here.
You can also pre-order it from Amazon.co.uk or Amazon.com - if you do, feel free to use one of the links, every little helps!
Audioscrobbler
2005-10-11
This looks like a really great service - I've only just got round to trying it out, but the idea is to let others know what you are listening to. The charts thus generated are very different from the ones radio would have you believe, not least because they reflect the staying power of songs and artists, not just the ability to get promoted. My Audioscrobbler page is here, by the way.
Comments issues
2005-10-11
A couple of people have tried to put comments on posts; for some reason they can't at the moment, and I can't see why. There is an upgrade to WordPress available, which I shall be implementing shortly.
Update: I believe I've resolved this now.
New version of N E O now out
2005-10-18
There are few products I will endorse unreservedly, but Nelson Email Organizer has been a bit of a life changer for me. The quantity of email I receive seems to follow an exponential curve all of its own - this last quarter I received as much email as I did in the whole of last year, so goodness only knows what it's going to look like in twelve months' time!
Anyway, NEO was recommended to me by a colleague, and I wonder now what on earth I would do without it. It indexes all of my 40,000 emails and lets me search by sender, attachment type, category and so on, as well as by search string. I'm currently upgrading from 3.0 of NEO Pro to 3.1, which prompted me to write these few thoughts while I wait for the thing to finish.
There's a free version of NEO available from the web site; I can't remember what prompted me to upgrade to Pro (apart from the fact that it's so damn good), but I would do it again. The only weakness I can think of is that it doesn't search attachments - still, you can't have everything!
November 2005
This section contains posts from November 2005.
Chemistry is out
2005-11-03
My publisher has just told me that copies of the book have now been received from the printers, a few days ahead of schedule. Hurrah! I shall be sending pre-order copies out as soon as I receive them.
For further information or pre-order see the Chemistry page. Also, here's a banner ad produced by my son - they do grow up fast :-)
Also, it was great to see the book as "perfect partner" to the R30 DVD on Amazon.com. Not sure how these things are worked out, but I'm not complaining!
What is the music industry, anyway?
2005-11-05
The recent, utter faux pas by Sony BMG in hiding copy protection software on the computers of unsuspecting individuals raises a number of interesting questions about what kind of service music companies are trying to provide to their customers. This debate is raging on, and yet another opinion on the subject won't particularly help at this stage, so I thought I'd take another tack and consider exactly what the so-called "music industry" is for in the first place. Through my various roles I have met a number of people inside the biz whose opinions are directly opposed to the general perception of what the industry stands for - so are they wrong, or not being heard, or is it that the industry itself is far more complex than we allow?
Following various discussions I've been led to believe the music industry occupies a number of roles. Firstly, it is a loan agency. Musicians and artists are considered in terms of market potential and business risk, and are loaned money to make recordings and pay for their management, marketing and distribution. This latter point is important - there are plenty of artists who didn't realise it was their own money being used to these ends when they signed the initial contract. The "advance" is exactly that - an advance payment against expected royalties, a.k.a. a proportion of the money made from each sale. Some bands - Marillion is an obvious example, but there are now plenty of others - have twigged that they can get this money from elsewhere, on more reasonable terms. It is surprising that other financial institutions have not got wise to the potential of funding the arts, but in all probability they just do not understand how to underwrite the risk.
Second, the music industry offers a recording and packaging service. Many music companies own their own studios, as I understand it, and there are independent studios (each with their own specialist skills and reputations) that can be booked. At the end of the day, however (as illustrated by David Gray's White Ladder), you don't always need an expensive studio to produce a good album. Once the music has been mixed (final arrangements selected) and mastered (made to sound on the finished disc as it sounded in the studio), there is a manufacturing process involved - again, each stage can use in-house facilities or can be outsourced, the same as with any business.
Then there is a global distribution network. There's no magic here - the creation of a global organisation is a painstaking task, requiring the creation of local offices, relationships with suppliers, channel partners and the media in each geography, an understanding of legislation and market dynamics, and so on. This is probably the real battleground for music companies, as it is the only area where they can really differentiate themselves (apart from the artists, but they are subordinate to this in importance - the best artists in the world will not make as much difference to profits as having the better distribution network).
There is a sales, marketing and promotion service. Again this is global - once a major label has decided to put its marketing muscle behind an artist, they will appear on every street corner and in every magazine, whether they are any good or not. This is a hugely valuable service, but it is not essential that it is carried out by the label itself - indeed, this role too is often outsourced.
Finally, there are other areas to the biz - for example the publishing business, which protects the rights of the artists in its stable. Anyone can own the rights to an artist's music - Michael Jackson owns much of the Beatles catalogue, for example, and David Bowie has sold the rights to his own back catalogue as a going concern.
While this may be an oversimplification, these five areas give some indication of how complicated things can get. While the recording elements may just want to get the music heard, the distribution elements seemingly work in cahoots with the publishing elements to protect their existing models. Enough for now - the sad thing is that while certain parts of the biz may be demonstrating their intransigence, it is the industry as a whole that suffers. There is a huge market for music, and it may be even bigger than the existing market suggests; however, the proportion of the market that depends on traditional models will inevitably shrink.
The bottom line is this: give value to the people - make the benefits outweigh the costs, and they will pay. There are too many examples to state here, from Simon Cowell to Steve Jobs, from Marillion to George Michael. There is every indication that people want to spend a good proportion of their money on music in all its forms; ill-conceived attempts to implement rights management technologies are only doing damage to what is already an industry in trouble.
Qualcomm and SCO
2005-11-07
There's been a recent spate of releases from Qualcomm about anticompetitive behaviour - not least this one, which is a responds-to-allegations kind of release. Today we saw the release "QUALCOMM Files GSM Patent Infringement Suit against Nokia" (not yet available on the Web). While patent litigation between mobile device companies is not news, the release has to take credit for one of the best lines seen in recent memory - "Patents that are essential to a standard are those that must necessarily be infringed to comply with the requirements of the standard." It took me three goes before I could even understand what it meant...
But anyway, none of the device companies are having a particularly good time of it at the moment, but do they really have to descend to quite so much bickering? While there may be valid short-term commercial reasons to do so, it's surely not in anyone's long-term interests, least of all those of the end users of technology.
It reminded me to go and check out the current SCO situation. While SCO made quite a hullabaloo about patents a year or so ago, it was rather more quiet about its near de-listing from the NASDAQ due to accounting errors. While any litigation may be rumbling on beneath the surface, SCO is now getting on with its core business - promoting its products - which has to be a healthier way to spend its time. I'm not saying any of these things were connected, just that the majority of people don't really want to know; to them it's all so much dirty laundry.
Meanwhile, in the operating system world, we are starting to hear of companies using Linux more as a lever than as a strategy. More than once I've heard companies saying Linux was the best thing to happen to Solaris, and no doubt it has impacted SCO in the same way - meanwhile of course, Microsoft is having to modify its own strategies to ensure it is competitive with Linux. End result: everyone is better off, no vendor is complacent, companies are forced to innovate ever harder to keep their customers sweet.
Which is just as it should be.
Self-important, exhibitionist geeks
2005-11-08
In which the writer castigates the anti-bloggers, then the bloggers, and then everybody else, just to be evenhanded
My wife said to me the other day, "aren't blogs just the online diaries of self-important, exhibitionist geeks?" Interesting perspective, I thought as I was tucking into my breakfast. That day I chose scrambled eggs, and I was wearing... not really. What I really thought was, "better write a blog about that." Besides, I don't eat breakfast.
There's plenty of debate going on about the strengths and weaknesses of blogs, and I frankly don't get it. On one side there is a camp that says "blogs are going to change the world" - a view which to me is both flawed and dangerous. On the other side there is a camp which says "blogs are an irrelevance", or worse, speaking with palpable indignation that blogging should warrant any attention whatsoever. As one who clearly has a blog, I may be biased, but equally I find myself in neither camp, which is confusing.
So, what is all the fuss about, and why is it causing such a polarisation of views? At its heart, a blog is no more nor less than a very simple Web publishing mechanism. Were I writing this in FrontPage and then running Web Publishing Wizard, would I write anything different? I don't believe so - having uploaded plenty of text-based content to the Web over the years, the only difference to yours truly is that I don't have to muck around with other editors. I type, I hit "publish", and I'm done.
Ultimately however, the end result is online information, in the same way as magazines, books and newspapers hold hard copy information. There are plenty of publications that are shoddily written, poorly edited or in bad taste, but nobody would ever say "all magazines are bad", for example. So, why should we say the same about blogs? Similarly, there are Web sites a-plenty that give us chapter and verse of the goings on of some obscure family in Maine; there are plenty of shabbily produced, poorly formatted and otherwise dubious Web sites. These are publicly accessible, and often unavoidable as they somehow get presented by search engines as "most relevant" despite all attempts to the contrary.
Somehow, however, we ignore such Web sites quite easily, yet apparently it is abhorrent that similarly low levels of quality might exist in blogs. To the anti-bloggers, this is proof, if any were necessary, that blogging is a crime against humanity and should be stopped. Now.
Meanwhile, the lowering of the bar to more simple Web publishing has also opened a number of opportunities, which leads to the other side of the coin. The IT industry has a tendency to hype the latest trend: this is possibly due to the fact that much technology is disappointingly dull and deserves a bit of a lift, but more likely for commercially minded types it is a way of maximising chances of commercial success. Finally, and whether we like it or not, this is a trait among tech types: we really do get genuinely excited about things technological and their potential to change the world. And so we have the overhyping of blogs - how many thousands of bloggers are really going to turn their waxings into hard currency, for example? On this, the Greek chorus may have a point - there is a lot less to blogging than meets the eyes of some zealots. Meanwhile, perfectly good columnists are calling themselves bloggers for fear of becoming irrelevant, a bit like parents trying to join in the conversations of teenagers.
Perhaps both sides as presented above are missing the point - I can already hear little voices questioning my definition of "blogging". So far in this piece, the interactive nature of blogs has been missed, a clear discrepancy. It could be (and no doubt has been) argued that a blog is no more than a single-user newsgroup, a bulletin board channel in which the moderator creates the content and other contributors are reduced to mere commentators on the main story. This would be true were blogging in some way hierarchical, but instead blogs form a meritocratic network of networks. The blog itself is of limited value; a network of blogs, where participants exchange views and develop ideas, is really where the action is. Blogs obey Metcalfe's law - the value of the network grows with the square of the number of participants, driven by the links between them.
Let me repeat that, then: a single blog is of limited value, and this is where I fully concur with the anti-bloggers. In the signal-to-noise ratio of the blogosphere, this is the noise - feel free to ignore it. While there may be plenty of such blogs, they make no sound if you just close the window. I don't want to downgrade the value of using blogging tools for more conventional web sites - I've seen charity sites and other news feeds operate with a blog mechanism, and why not, it's just a tool. Even if blogging were useful only to the people writing their own blogs and commenting on the blogs of others, it would already be of enormous benefit to them; as it happens, many blogs are generating quite a substantial readership, which suggests others can benefit from the debate whether or not they choose to contribute.
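For the numerically inclined, the network effect behind Metcalfe's law comes down to simple counting: among n blogs there are n(n-1)/2 possible pairwise links, so the number of potential connections - and, the argument goes, the value - grows with the square of the number of participants. A quick back-of-the-envelope sketch:

```python
def potential_links(n):
    """Distinct pairwise connections possible among n blogs: n*(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the number of blogs roughly quadruples the possible links -
# the value is in the network, not in any single blog.
for n in (10, 20, 40):
    print(n, potential_links(n))
```

Ten blogs can form 45 links between them; forty blogs, 780 - which is why a lone blog is noise, but a conversing network of them is something else.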
Finally, then, there is the strength of syndication. Blogs feed information and content, and it would be very difficult to separate the wheat from the chaff without some filtering mechanism. So, we have tools like Bloglines and Pluck, which aggregate blogs into a single window, suggest blogs based on current settings, and enable the easy addition and deletion of feeds. To look at single blogs in isolation would be a bit sad; it is the power of the many that matters - and the better blogs will undoubtedly filter to the top.
So, there we have it. Should we expect today's bloggers to be the media moguls, business leaders and presidential candidates of tomorrow? Don't be silly. There may be one or two characters who ride the wave better than others, and come out on top - and good luck to them. Blogs are already used in a multitude of ways - from major companies market testing ideas, to hobbyists sharing information, and indeed no doubt to self-indulgent diarists. For myself, I really do not see the relevance of what someone had for breakfast, nor am I interested in the golfing progress of an executive who has clearly been prompted to add more of himself into a corporate communication. In the meantime, as a mechanism to share information, test ideas and build relationships, blogs deserve at least a place at the table. What we are witnessing is the democratisation of technology, a lowering of the bar, and that is always to be applauded.
For me, a blog serves as a channel for ideas, and a placeholder for news: the blogging mechanism has enabled me to create a three-column, dynamically changing Web site with minimal trouble. Also, as I am supposed to be a commentator on all things technological, I believe I should at least try these things out, just as I play (and sometimes struggle) with the latest gadgets. No more, no less - and if just one person has read this far, then it's been worthwhile.
Next time I'll try to answer my 12-year-old son's question, asked out of the blue the other day: "What exactly is the point of Linux?"
Duping Google
2005-11-21
There was a very interesting chap on this morning's "Start the Week" on Radio 4. He pointed out that, according to the US Patriot Act, government agencies can access information on third-party servers (e.g. Google, in his pitch) without either requiring the consent of the information source, or even having to disclose that such access is taking place. This is fascinating and curious (to say the least) given the amount of previously private traffic that therefore becomes agency-accessible. For example, standard voice calls need a warrant; by this definition, voice over IP (Skype, etc) does not.
OK, Skype is peer-to-peer, and I don't know whether the definition of "third party server" covers caches and other hardware mechanisms for getting information around the Web, so there should (and no doubt will) be a debate on this; meanwhile, I was wondering when people would start to consider duping the system. For example, if some agency were curious about my particular search requests (unlikely, admittedly) and if I were in the slightest worried about this (even less likely), I would start searching for random items just to confuse the system. Given the availability of APIs to Google and the like, I might even write a script to search for random sequences of dictionary entries, and run it as a screensaver - my own search entries would be lost in the noise.
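The decoy-generation half of such a script is trivial to sketch. What follows is purely illustrative - the word list is a stand-in for a real dictionary, the query format is made up, and no search engine is actually contacted:

```python
import random

# A sketch of the decoy idea, not a working search client: the word list
# and query shape are placeholders, and no search API is actually called.
WORDS = ["aardvark", "bicycle", "chutney", "dirigible", "eiderdown",
         "flummox", "gazebo", "haddock", "isotope", "jamboree"]

def random_query(rng, n_words=3):
    """Assemble one decoy query from randomly chosen dictionary words."""
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

rng = random.Random()
decoys = [random_query(rng) for _ in range(5)]
for query in decoys:
    print(query)  # a real script would submit each one between genuine searches
```

Run continuously, the genuine searches disappear into a stream of "gazebo haddock chutney" - which is precisely the problem for anyone trying to mine the data.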
Of course, if everyone started doing this, it would be a disaster for Google and other search providers, who are far more interested in the commercial potential of their search data. There are a billion searches a day, apparently, offering huge opportunities for the companies collecting that data - and no doubt for the agencies that plug into them. In protecting against the latter, great damage would be done to the former.
At the end of the day however, would the end users care? Would they care enough to protect themselves, and would they care about any damage caused by such protection? Probably not - to either. This example of what might happen is just one of many potential future end user behaviours, the majority of which won't happen. The point instead is that all the highest aspirations of either Google or the US government are ultimately hostage to the vagaries of the technology consumer.
2006
This section contains posts from 2006.
January 2006
This section contains posts from January 2006.
A new dawn, a new day...
2006-01-11
... and a very happy New Year!
I hate to scare anybody but, believe it or not, it's already the 12th of January. That's more than a third of the way through the first of twelve months - is it any wonder that Christmas comes around so fast? A bit like builders who have unexpected complications when they're halfway through a job (i.e. all of them), we're all a bit in denial when it comes to the passage of time.
All the better reason to pack it all in - that's the plan, anyway. And so it is that I have left that kindly bunch of fellows (male and female) at Quocirca, and I am taking a month or so off to write another book. I expect to be back on the analyst circuit before anyone notices I was gone - I'm already missing the company! For more information on what an industry analyst is, watch this space.
Meanwhile, the Rush book is doing well. It appears to have been generally quite well received - 8/10 in the latest (February) edition of Classic Rock, for example, and there are a couple of good reviews on Amazon UK. We managed to catch the next print run to fix any typos and clangers, so thanks to anyone who submitted them. Having been held up by customs, the book still doesn't seem to be available in the US, but the distributor has assured the publisher (who has assured me) that it should be released any day now. I was also told that most copies have been snapped up by Amazon, so it's unlikely to appear in any retail outlet any time soon! Meanwhile, as you know, you can always order from me - please specify if you want it signed. I decided not to put the postage prices up in the end; I'd rather take the hit than make it prohibitive.
There's plenty of other things going on, all of which will unfold in good time. For the moment, I can say "... and I'm feeling good" - no doubt this year will have its ups and downs, but that's only to be expected! Good luck in all your endeavours, and may our paths cross frequently.
Jon
The VaioPad - it works!
2006-01-18
Warning - this may get a little nerdy. Eighteen months ago I wrote an article for The Register about voice recognition, in which I questioned why there wasn't such a thing as a handheld voice recognition computer. It wouldn't need new technology, I argued, just the clever use of existing components. I even speculated that my trusty old Vaio PictureBook, despite losing several keys and being virtually unusable as a portable computer, might suffice as a voicepad.
Ever since then I've wanted to find out if it was possible. Over the New Year I finally decided to give it a go, taking apart said PictureBook and re-assembling it. And guess what - it works! There's no reason why it shouldn't, of course, but it's still good to test these things out. That's what I'm sticking to, anyway.
There are some pictures here. I remain slightly bemused as to why the oh-so-innovative computer industry can't create something like this - I'm sure the market is there, and let's face it, there's little to do other than package things up. I had high hopes for computer companies like Oqo selling a voicepad, but they don't seem to be interested either - now the Oqo ships with Tablet PC, all they have to do is ship a mike! Oh well, if and when it comes, you heard it here first.
The VaioPad - Pictures
2006-01-18
Here are some pictures of the VaioPad. Note I didn't have the recognition software running at the time - the priority was to get the page done. First, a front view in its case - note it is upside down, to facilitate being viewed when carried. Or something.
Here's a picture of the edge, showing the duct-taped hole that reveals the ports.
The computer out of its case, again upside down for no apparent reason.
Here's the computer, demonstrating how the screen is indeed detached from the base. I think it might have voided the guarantee, but I'm not sure ;-)
Finally a shot to show how the screen re-attaches to the base of the laptop.
The Bells
2006-01-26
Here's a snippet from Mike Oldfield's Web site:
"Hello you ! It has surely been a time of big changes and realisations these last few months, not least of which has been the writing of my Autobiography. This is nearing completion with the help of my co writer Jon Collins and hopefully will be published this year if not early next. During the process of bringing this book to life I have realised a great many things about myself and my position in the scheme of things.( Oh for the gift of hindsight ! ) This will surely influence my future personally and creatively. I have contractually three and a half more years to work on a new piece of music so this will give it a chance to evolve and mature nicely. Meanwhile look out for some promotion for the Platinum Collection , the book , and who knows what else the fates may send . With all my best wishes : )"
It's a cracking story, and I'm delighted to be involved.
Alphadoku Solution
2006-01-28
I've had a few requests for the solution to the 25x25 Alphadoku puzzle I posted last year, so here it is.
February 2006
This section contains posts from February 2006.
Hooking up with MWD
2006-02-01
As from today, I've thrown my IT analyst hat in with Macehiter Ward-Dutton Advisors (MWD), a UK-based analyst house whose focus is IT/Business alignment. In the end it was quite an easy decision - my IT focus has always been the same, though I have couched it in various ways according to the terminology of the time. Initially I shall be concentrating on service management - my first job is to define it so watch this space! I must stop saying that.
You can check the press release on the MWD site, or download it from here.
Neil Macehiter and Neil Ward-Dutton are great guys, and I'm really looking forward to the coming months. Can I also take this opportunity to wish them happy birthday, as MWD was officially launched exactly one year ago today.
Nobody Likes Feedback
2006-02-06
It's Monday morning and the start of a new week. First thing to do - check email, have a browse, read some blogs and check Amazon for the chart position of the Rush book (OK, it's in the four-thousands now, but why not, it's all good procrastination). The eye inevitably strays onto the reviews... and ouch! There's another three-star baby in there. "Not well written, badly formatted, poor pictures." Hum.
Following an initial "What the heck" denial, the reaction is to slip into an entirely self-serving justification mode: "If only he knew what effort went in," etc., etc. Of course this is a huge mistake. Next comes the "Maybe he's right"; then the recognition that, of course, he is right, both to have his opinions and in his justification of them.
All of this within the space of about half an hour, a microcosmic version of the four stages of grief (denial, anger, bargaining, acceptance) or the four stages of team dynamics (forming, storming, norming, performing). Yet again I discover that my feelings are not my own, but purely a sequence of chemical reactions causing various synaptic connections to form and reform in my grey matter.
Hum.
Speaking of connections, I notice that Garr Reynolds put a handy summary of the Cluetrain Manifesto up on his blog over the weekend. If I may summarise his summary, the manifesto boils down to three words - "Markets are conversations." About five years ago it appeared to many as absolutely the right thing to be said, just when the internet was fouling up corporations' efforts to put some kind of sheen on their doings. It was also, arriving at totally the wrong time, a victim of the bubble bursting, all of its sage ideas left to rot as bricks overtook clicks in the fashion industry we call IT. So - it's good to see the Manifesto re-emerging, or at least being given a bit of credit.
Meanwhile, here's the link. I fully applaud Amazon's ability to print comments; indeed, I use them all the time when deciding what to buy. However, the interaction is all one way - there can be no Cluetrain-style conversation initiated here, either between reviewers, or between reviewer and author. This opens up a whole set of questions, not least: should the author actually be able to comment on the feedback of others? In the case above I would have loved to do so, but I'm not sure it is always such a good idea; not least because it might stymie further critique, but also because the author's own comments might not necessarily be that valid. I've seen the former when I've participated on mailing lists - there's no better way to halt a perfectly good tirade than for the author or artist to wade in with their own opinions. It's one of the reasons why I've stopped posting things to the Rush lists, for example, though I confess I do have a quick peep every now and then. As for validity, one could probably determine what stage I was at (denial, anger, etc.) based on whatever comments I made.
Second (and this goes to the heart of Cluetrain), are all markets really conversations? For a conversation to exist there has to be both a channel of communication open to all sides, and a common language available to them. I believe that Cluetrain is saying companies should open their ears and start listening to those customers that are already in conversation. Perfectly valid, but many customers are considerably quieter, and it would be a big mistake to prioritise conversations with the more vocal customers over serving the needs of the less vocal. Conversations are valid up to a point, but then, whether it's a book being written, or the latest brand of soap powder, or a new gizmo being released, there has to be scope for leaving people to get on with it. Equally then, on the other side of the fence, producers have to accept there are consumer-oriented conversations that should take place without the producers being present - kind of, "would you mind leaving the room now please, we want to talk about you in private."
As always, the keyword is going to be 'balance'. There can be no absolutes here - if anything, with blogs, discussion boards, conversations, feedback and the like, it all serves to illustrate how far we have to go with these interactive technologies. It's one, big, indeed global experiment, one in which I am very happy to participate.
Feedback welcome ;-)
P.S. Hands up anyone who thought this post was to be a comment on the Rush EP...
Avecho - what did I miss?
2006-02-08
Go away for a month or so, and all sorts of things get missed. Like Avecho, the antivirus company that refused to tell anybody how its technology worked. I did a brief bit of consulting for them a few years ago, which was all a bit tricky as they wouldn't tell me either. "It's so easy, if we tell people, they'll all do it," they said - so I was left none the wiser.
Of course, it left my mind racing. I ended up with my own theories as to how it worked, which now have largely been superseded by the availability of "sandbox" protection mechanisms.
Anyway, the company went into administration in December, and was almost immediately bought by another company, Stylish Ltd, which I'd never heard of. Even more secrecy, all very interesting. I had a quick browse round the Web and found the following post by a disgruntled employee, but once again, I find myself none the wiser.
Still, at the time it was nice to see Essex.
Rush Addendum updated
2006-02-08
(edit Feb 9)
For Rush readers, I have put together a printable PDF of the addendum entries found to date. There are 22 of them; we shall know in time if there are any more to be found. Of these:
- Ten are dating or timing errors
- Seven are typos and editing errors
- Five are factual inaccuracies
Truth be told, I was feeling more than slightly uncomfortable about the number of errors found in this book. As I was flicking through to put together the above addendum, I started remembering exactly what a mountain of information (some of it inaccurate) I waded through to compile the book itself. It didn't make me any less regretful of the errors, in particular the inexcusable typos (Getty! Argh!), but I do still believe that we did everything we could to get it as accurate as possible at the time. In a word, sorry.
Oh well, onward and upward. Thanks to all those people who have pointed out the mistakes above. As indicated, the next reprint (due imminently) will fix the first ten issues in the text, leaving twelve which will probably now have to wait for the paperback.
On Googleseep and Netabuse
2006-02-08
So, Google's blocked BMW from appearing on its search engine, because it stuck in a whole bunch of spurious pages to get its ranking higher. Hurrah for free speech and democracy!
Do I really mean that? No. For a number of reasons.
First, when did any Web site worth its salt not do things to increase its ranking on Google and other search engines? Plenty of Web sites incorporate keywords to make themselves more attractive to indexers. There is a whole spectrum of quite acceptable keyword use possible before this becomes abuse of the mechanism, and there are plenty of other techniques, including crosslinking between sites and so on. While Google may have picked on an obvious example here, there are no generally agreed criteria that I know of to determine what is use and what is abuse.
Second, as Google is only a company (controversial view, I know) at the end of the day, it still has its own corporate responsibility to think about. Unilateral action to block certain Web sites that contravene its rules is fine in principle, but while there is no clear process for doing so, Google could put itself in a difficult position. Should there be a warning, for example, a 30-day resolution period, room for appeal? Google isn't obliged to do any of these things, but the sudden finality of the current approach doesn't seem particularly evenhanded. Either Google applies its rules consistently to all of the Web sites it indexes, or it may find itself open to the criticism that it is abusing its power.
Third and finally, this smacks of easy targeting. That is, "BMW is big and obvious, therefore bag it." For a while now I have noticed it becoming almost impossible to search for certain types of information on Google. The majority of links are to sites to purchase products, and not to provide information - the result is wading through reams of results to get to useful information. Somehow purchasing sites are prioritised over informational sites - this will not be by accident. Some companies have multiple sites, each of which is selling the same catalogue of products so that it can appear in multiple places in the search results. Bloggers employ their own techniques - "please link to me", they tell their friends, "that way my ranking will increase." Such actions are totally un-police-able, and yet they've reached the point that if you want to appear at all, the only possible action is to join in. And so we have "Googleseep" - if Google is intent on slamming the big doors to make a good-sized noise, it will of course find its indexees seeking every possible "acceptable" way of getting around any regulations and mechanisms it imposes.
Clearly BMW overstepped the mark and needs to do something about it - last I heard, the company had fixed up its site and was waiting to be re-instated. All the same, even if Google's actions were well intended and not just a thinly veiled attempt to generate some publicity (as some might suggest, and as if they're not getting enough already), I think the company's on a very sticky wicket indeed. Either it's indexing the Web as it stands, or it's not. If it wants to make the world better, it could adapt some of its own Chinese filtering technology (great article by Martin Brampton, by the way) to give all of us a way to see through the clag that litters most searches. If democracy is about freedom of choice, personal filtering of search results would be a good start. Perhaps the second point above could be somehow resolved with a Googletest utility to verify the acceptability of a site - would it be too much to suggest that untested sites should rank below tested ones?
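The "personal filtering" idea can be sketched in a few lines. This is purely illustrative - all the domains and results below are made up, and no real search API is involved; it just shows a local re-ranking step that demotes domains the user has flagged as pure shopfronts.

```python
# Toy sketch of client-side "personal filtering" of search results.
# All domains below are hypothetical examples.
def personal_filter(results, blocked_domains):
    """Re-rank results locally: flagged shopfront domains sink to the bottom."""
    def is_blocked(url):
        return any(domain in url for domain in blocked_domains)
    # sorted() is stable, so unblocked results keep their original order
    return sorted(results, key=is_blocked)

results = [
    "cheapcars.example.com/buy-now",       # shopfront
    "bmw-history.example.org/articles",    # informational
    "enthusiasts.example.net/forum",       # informational
]
filtered = personal_filter(results, ["cheapcars.example.com"])
# informational sites now come first; the shopfront drops to last place
```

The point of doing this on the user's side is that no central arbiter of "use versus abuse" is needed - each of us keeps our own blocklist.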
As a final note, 'BMW', 'motor car', 'red', 'convertible', and please link to me, that way my ranking will increase...
View from the chair
2006-02-10
Is this the first blog entry written from the dentist's chair? I started while the drilling was going on; I'm now having all sorts of moulds squeezed into my mouth. To distract myself I am trying to think about interpersonal networking and relationships, but the only relationship I can think about right now is between my teeth and the array of equipment beside me.
(this post was later edited for clarity)
The people's front of WOMM
2006-02-10
I've been doing a lot of thinking recently about relationships. That's business relationships, not personal ones - though admittedly the line does blur, as it probably would in any community. Community - we'll come back to that, but there are a number of strands of thinking I want to bring together. Bear with me, and (in the words of Morpheus), let's see just how far the rabbit hole goes.
First, back to relationships: it is a well-known fact among salespeople that good relationships drive good sales. This is not just about quantity, but also about quality - there is the oft-told anecdote of the feed salesman who spent years building a good rapport with a farmer, and eventually won the business. Insert your own favourite anecdote here.
Secondly, as well as quality there is also the question of quantity. I know of several 'networking' organisations that are essentially about lead sharing and pre-qualification between small businesses. BNI is one for example - the people that use such organisations swear by them, as they are a way of spreading the word, and the load.
The Internet (as a third point) has enabled relationship-based organisations to go into a kind of overdrive. A while back I wrote about Plaxo and Linkedin, two very different mechanisms for sharing contact information, keeping in touch and generating leads. There are plenty of others, Ecademy for example, whose goal (I paraphrase) is to turn its subscribers into power networkers - people who can maximise the potential of their global network. Make lots of money, that means.
Fourth we have blogs. This may seem kind of irrelevant at this point, but power bloggers do operate as a kind of network, as illustrated by James's post here. The blog is both a communications mechanism and a marketing tool, and so it fits neatly into the toolkit for internet-based networkers. It's also host to a wildly diverse set of conversations.
Finally, put it all together and what do you have but Word of Mouth Marketing. I've been hunting around and there appear to be several definitions for this, based largely on the starting point (choose from 1-4 above) of the person doing the WOMM. There's plenty of other stuff that we can throw into the WOMM bucket - tipping point theory for example, and no doubt the wisdom of crowds (I can only guess because I haven't read the book. I know, I know, I will...).
All make sense so far?
Good.
Then perhaps you can tell me why I have a problem with these things.
It's not a very big problem, admittedly, but it's still a problem. (As a digression, I still remember listening to Paul Hawken's "Growing a Business" tapes all those years ago, when I first started my own company. "There are two kinds of problems, that's good problems and bad problems," he said. "Don't get me wrong, they're both problems!" Very good tapes by the way, highly recommended.)
The issue I have with all of these things is that they totally ignore the concept of community. “Community” is the raison d'être of any network, and yet it is somehow assumed that the network itself is the engine of success, and not the community. Not true, I say. Bloggers also have a tendency to organise themselves into communities, and yet the assumption is still that the medium - in this case the blog - is somehow more important than the message. Not true again. A cursory inspection of any part of the blogosphere will reveal that bloggers are clustering around certain memes, and certain individuals will find themselves closer to the meme-core than others.
It's the same thing in music. There's only a limited number of notes and so many ways of expressing them, but many music fans will express a preference for a certain subset of bands, or a single band. Go onto Audioscrobbler and follow some of the chart links around, and look how people tend to listen to the same set of bands. These bands may sound very similar to other bands outside their own "listening community" - so they may get frustrated that they do not get exposure outside their own ecosystem, their musical tide pool (obligatory Rush quote here: "each microcosmic planet a complete society"). So be it - that's communities for you.
Certain individuals manage to transcend their own communities: Robert Scoble and Seth Godin in the blogosphere, and Thomas Power in the networking sphere, are like Green Day in music. You'll see them participating in multiple communities, or indeed, multiple communities participating in them.
Interpersonal communities are a phenomenon as natural as coral reefs and shoals. There are very big ones and very small ones; each has similar characteristics, with its own spiritual leaders and mavericks, administrators and regulators. It's the same everywhere, in sport, politics and (dare I say it) religion. I'm not knocking them - I love communities, I have my own that I love to participate in, both at home and at work. One unassailable fact of any community is that it is about active participation - passengers do not reap the same rewards as active members, and are not generally well considered (though the internet does make room for read-only participants). While priorities can change over time, I wouldn't want to be an absolute passenger in any community, and therefore, to join one, I would have to say goodbye to others.
All I’m knocking is the fact that many networking organisations and blog-fests try to give the impression that they are not communities at all. I think this is a perfectly natural thing to do as well - affiliations come with their own baggage, and people don't always like to emphasise them. Equally, it is perfectly natural for communities to wish to grow their membership. Communities have many benefits, naturally these are appreciated by their members, and members wish to grow their own communities for a number of reasons - to enable others to share the benefits, to grow their own pool of influence (or indeed, revenue), or for a multitude of other reasons. The phrase “Join us”, while usually innocuously meant, can sound a little sinister, so recruitment often takes place in a more indirect way.
There are downsides to communities as well. What works in one community is totally irrelevant to another. Communities use different terminologies, which are often opaque to outsiders. Inside the communities, individuals find it difficult to understand why others don't join them in debate, when the truth is that often there's no doorway in, let alone a welcome mat. I know of a number of blogs that I find difficult to understand - "they don't speak to me, but that's fine," as one musician once said about the music of another. Indeed, this post may be gobbledegook to some people who come to this site, but make perfect sense to others (I hope).
That is indeed fine - that's communities, and everybody is free to choose their own.
Still with me?
Blimey.
So - it's not the fact that these communities exist that is a problem, but the fact that their existence is not acknowledged.
Which brings me to word of mouth marketing - WOMM.
I already made the comment that there are different kinds of WOMM (horrible acronym - but I can't be bothered to write it out every time, even though it's taken me longer to say I can't be bothered!). The different WOMMs are based on the communities that the WOMMers come from: the business network WOMM makes little mention of blogs, whereas the blogger's WOMM sees them as an essential prerequisite. Bloggers see the conversation as an end in itself, whereas power networkers see WOMM as a way of growing the network, and blogging as one tool to help that growth.
As we can see, therefore, it's about communities - yet surprisingly, the community factor is given little mention.
One thing that WOMM suggests is that to be a successful WOMMer, you need to "infiltrate" or at least participate in specific communities that will give you the biggest WOMM bang per buck. I've experienced it myself: with the Marillion book I was already in discussion with the online Marillion community when I came up with the idea of writing a book, and when Marillion decided to authorise it, they were able to advertise it to their own subscriber community. With the Rush book, it was perfectly natural to me to sign up to Rush lists so I could glean tidbits of information and inform the community of what I was doing. It seemed perfectly natural, indeed it would have been folly to do otherwise.
That's all fine (at least to me) so far. What is less fine to me is the suggestion that there is a single discipline that works for every group. If this is more about the community than the network, then what works for one community will not necessarily work for another. For example, a lead generation network may be very comfortable exchanging leads and growing markets that way, but that approach won't cut much ice in other communities that are used to following different rules. Indeed, it is highly likely that each community already conducts some kind of WOMM activity, so successful WOMM would need to tap into that, rather than trying to impose some external structure.
Unsurprisingly, given the principle of community, WOMMers are forming communities of their own. There is already a WOMM association, the WOMMA; there are WOMM companies (with their own blogs); and no doubt other organisations will follow, each coming up with its own way of WOMMing that works for its members and their immediate associations. No doubt also, there will be arguments between different organisations - this is community rivalry in action, and neither side will be wrong, as such. "People's front of WOMM? Nah mate, we're the WOMM popular front!"
It's all about community. It always was and it always will be. For myself, should I be invited to a networking organisation, as I have been (to many over the years), or to join a club or association, I may well say no. Not because there is anything wrong with the community itself, but because it is a community. Perhaps I don't believe I would fit, or perhaps I wouldn't make it a high enough priority to make it worth my while. In the words of Groucho Marx, "I'd never join a club that would have me as a member." Not strictly true in my case, but I do recognise it is about community, or joining a club, which requires more than a tick in the box to benefit. You get out what you put in.
Equally, while (I think) I get the principles of word of mouth marketing, I think we all need to recognise that there will be different kinds for different communities, and that many communities were doing it for themselves, long before the term was invented. Blogs or no blogs.
The IT brain drain?
2006-02-10
I had a very interesting conversation today with an IT consultant who was previously a network manager and who has been around the block a few times, so to speak. He saw one of the biggest problems in today's IT environments as being that, with so many IT staff having been outsourced, many of the companies in question now lack the capability to make decisions about new technologies. He'd been talking about this problem to outsourcing firms, who were now suffering from the fact that companies had less and less of the wherewithal to involve them in new technology deployments.
It sort of stands to reason - a bit like the game of Monopoly, where you need a threshold of cash. Without enough of it, you spend more than you make and very quickly find yourself out of the game. If this is a trend, then IT managers will find it increasingly hard to keep their technology understanding current, and to keep up with what is once again an accelerating field of innovation.
Unlike Monopoly of course, large corporations are extremely complex and the reduction of IT skills in the management layer is harder to identify. For now, we can only mention that this could be an issue, and hope we are wrong.
What does Hormel think about spam?
2006-02-13
In one of those Monday morning moments, I got to wondering what the company that originally registered the term "SPAM" (the meat-like substance) thought about the term "spam" (the email scourge). A few minutes on Google and I found the following page on one of Hormel's web sites. It boils down to the following:
"Let's face it. Today's teens and young adults are more computer savvy than ever, and the next generations will be even more so. Children will be exposed to the slang term "spam" to describe UCE well before being exposed to our famous product SPAM. Ultimately, we are trying to avoid the day when the consuming public asks, "Why would Hormel Foods name its product after junk e-mail?" "
That, with its reference to "mickey mouse" unsophistication and so on, I thought was a reasonably sanguine assessment.
Then I wondered, was the company ever able to send out any product-related email at all? After all, what would you expect your filters to do with email sent from spam.com?
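For illustration only - no real filter works exactly like this, and production filters are statistical rather than rule-based - a deliberately naive scorer shows why anything sent from spam.com starts life at a disadvantage:

```python
# A deliberately naive, purely illustrative spam scorer: real filters use
# statistical (e.g. Bayesian) techniques, not hard-coded rules like these.
def naive_spam_score(sender, subject):
    score = 0
    if "spam" in sender.lower():          # spam.com trips this on sight
        score += 5
    for word in ("free", "offer", "winner"):
        if word in subject.lower():
            score += 2
    return score

naive_spam_score("news@spam.com", "New SPAM recipe ideas")   # flagged on the domain alone
naive_spam_score("friend@example.net", "Lunch on Tuesday?")  # passes clean
```

Hormel's perfectly innocent product newsletter would score against the domain before a single word of the body was read.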
Any Eastenders fans out there?
2006-02-19
I first started talking to the multi-talented Peter Polycarpou a goodly while back, when I was singing a few numbers from Miss Saigon (how's that for declarative living), and I asked him through his Web site whether he had any tips. We've not had much reason to converse since, so our email addresses have lurked, unnoticed, in each other's address books. Until recently, that is, when Peter got in touch to say he'd be appearing in this week's episodes of Eastenders on the BBC, (I believe) as Yiannis Pappas.
Now I'm not much of an Eastenders follower, but I do know that Peter's a lovely chap and a great performer. I also think this is a great example of connectedness - it's not just about the links in the chain, but what messages get passed and what actions they trigger. For me, it means I'm actually planning on watching Eastenders for the first time in years! His site has also linked me through to the blog of Caroline Lucas from the WTO summit in Hong Kong, another unexpected demonstration of connectedness, as well as being a fascinating account of what went on in December. The truth is out there - if only there wasn't so much of it!
If this post appears…
2006-02-19
... it means I can post from anywhere, by email. No more excuses, just queue 'em up and let 'em go.
Meanwhile, I'm building a list of open questions that I'd like answered. I'd like to suggest one possible future - Douglas Adams was right.
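For the curious, the mail-to-post mechanics are simple enough to sketch. Everything here is hypothetical - the secret address and SMTP host are placeholders - but the shape is the same for most blog engines that accept posts by email: the subject becomes the title, the body becomes the post.

```python
from email.message import EmailMessage

# Hypothetical mail-to-post sketch: addresses and hosts are placeholders.
def make_post_email(secret_address, title, body, sender="author@example.com"):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = secret_address      # the blog's secret mail-to-post address
    msg["Subject"] = title          # becomes the post title
    msg.set_content(body)           # becomes the post body
    return msg

post = make_post_email("post-s3cr3t@blog.example.com",
                       "If this post appears…",
                       "... it means I can post from anywhere, by email.")

# Actually sending it would be one more step, e.g.:
# import smtplib
# with smtplib.SMTP("smtp.example.com") as server:
#     server.send_message(post)
```

The only real security in this scheme is the secrecy of the posting address, which is worth bearing in mind before relying on it.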
Nothing to declare
2006-02-19
"If you are fully in control, then you aren't going fast enough" - Mario Andretti
It's time for a change to this Web log. There are a number of reasons for this, not least that, while my days have indeed been packed, often they've not contained things I've been able to talk about - "just trying to close a deal with XXX," for example, or "talking to the management of YYY." I suspect that neither XXX nor YYY particularly want me to blather on about our ongoing dialogues. Second, it's been nigh-on impossible to marry my two worlds (music and IT) in blog format, and finally, writing about my day isn't really what rocks my boat. The real issue that I think would be worth documenting is coping with everything that's going on, navigating the complexities of the global village we find ourselves in. Technology is more and more about people, and people are more and more about technology, to the extent that the boundary between the two is shifting day by day. Grannies are texting their grandkids, and Sri Lankan tea co-operatives are communicating directly with Surbiton mums. The Internet is both a symptom and a cause, while the cost of an international call has dropped from pounds to pennies. There are syndicated newsfeeds, blogs and podcasts, wikis and word of mouth marketing, mashups and Web 2.0, all thrown into the mix with gay abandon.
There's so much going on, so what does it all mean? To be honest, I have absolutely no idea. I do know that I would never have set out to be a writer if I hadn't been in direct email contact with other authors, and discovered they were individuals just like me. I understand the blessing and the curse of communication, when bands like Marillion can regain their mainstream status, and others can appear from nowhere, without the backing of a monopolistic corporation. I do know that major companies are between a rock and a hard place, trying to retain their positions while implementing new strategies, all the while watching new companies steal their markets from under their noses. I do know that there are huge, geopolitical effects of all this change, some of which are real, and some of which are imagined to push some agenda or another. There are psychological and spiritual aspects, as people discover like minds and communities, but suffer the consequences of having too much of a good thing. There are upsides and downsides, hidden agendas and ill-conceived plans, power struggles and opportunities for genuine, honest progress. Technology might be some, if not all, of the cause, but equally, so much of technology is primitive, incomplete, non-inclusive. We stumble from one oasis of technological goodness to another, be it with mobile coverage or getting to our email. Ubiquity remains a distant dream, while integration and accessibility issues mar all but the simplest of interactions.
These are fascinating, turbulent times. We are right in the middle of something quite unique, a show with no sign of ending. We're emerging from a period of technological recession, conferences are buzzing again and the teddy bears are back on the stands. It's exciting again, largely because nobody has a clue where we're going to end up - a roller coaster ride for sure, rickety at times, but one which doesn't just come round to the starting point and heave a sigh before raising the barriers. On this trip, there's no getting off, whether we want to or not.
All that to say, that's what I'll be writing about. I'm glad that's clear.
Got me foxed
2006-02-19
To open with a caveat: it might have been something I did. However, to my knowledge the only recent change to my Internet Explorer settings was when I downloaded the recent series of security patches. It still works - after a fashion - but tends to lock up when too many windows are open. It also now fails to get into the admin pages of my blog - sometimes.
There is a fascinating pseudo-philosophical subtext here, about what security is for in the first place. To my mind it's about preventing things that could slow one's productivity, while giving a certain level of protection to one's data. These principles are true for individuals as much as for enterprises. Trouble is, what happens when the very effort to implement security results in a loss of productivity, or in data compromise? That just blows everything.
Sadly, it's a fact that malicious attacks are far less likely to cause any anguish than issues caused by personal incompetence or flaky software. I know this is true from my own experience and from those around me, and while I was at Quocirca we conducted some research that proved it reasonably conclusively.
It's all very interesting. Right now however, what's most interesting is how I can actually do what I sat down to do. There are a number of options, at the top of the scale I could reinstall everything on my computer, but I've already opted for the more mundane - to run up Firefox. And it works.
This could be a cue for another riff, on the subjects of software construction, overcomplexity, incumbent responsibility and so on, but I don't have time for all of that. The switch has been made, and now I'll need reasons to switch back. End of story.
SC101 Network Storage Device
2006-02-20
I gave this device a mention on Silicon in December, and suffice to say I wasn't very happy with it. There's been a firmware upgrade since then which has improved things somewhat, but still no cigar - it's losing the connection with one of my computers, while being OK with the others.
Doing no evil
2006-02-20
Following my post about duping Google's search records , I noticed that Google currently has no intention to give up such data to the US government apart from when required to by law. This sounds like a dubious distinction given that the government's asked a judge to rule on the matter, but at least Google is showing some signs of resolve, following its capitulation to Chinese interests. Enough has been written about the latter, so I won't go on.
Horses for Courses - Myspace and Music
2006-02-20
All networking sites are created equal, but some are more equal than others. Myspace, for example, seems to be rapidly becoming the first port of call for musicians and bands. I know of a few bands that are up already (I'll dig them out and list them), Simon Apple (I think) and John Wesley for example, and a music producer recently said to me "you can find them on Myspace" in the same way that you might talk about the best parties. Fish has just announced a Myspace presence as well.
Sort of repeats the theme that it's all about community, methinks.
More on briefing analysts
2006-02-21
Following James's tips on briefing IT industry analysts, here are some of my own from the archives - a bit rusty, but applicable.
It's all going to go horribly wrong
2006-02-21
Wordpress has moved from the version I'm on (1.something) to 2.something - it was 2.0 but there were so many bugs in the initial release that a new release was issued some days later. Eager to experience the benefits of the upgraded version and in the name of research and growing my understanding of what's out there, I'm going to take the plunge over the next few weeks and upgrade. I expect one of several outcomes:
- it all goes horribly wrong, and I shall be left with a gaping hole where my Web site used to be
- it all happens as smoothly as manure sliding off a well-oiled shovel
- something in between
I will of course be backing everything up, but even that offers no guarantees in my experience. Oh well, all part of the fun.
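For what it's worth, "backing everything up" for a WordPress install of that vintage boils down to two things: a dump of the MySQL database and an archive of the document root (themes, plugins, uploads, config). As a minimal sketch - the database name, user and path in the usage example are invented for illustration, not taken from any real install - here's a helper that builds the two commands:

```python
import time

def backup_commands(db_name, db_user, docroot, stamp=None):
    """Return the two commands for a minimal pre-upgrade backup:
    a mysqldump of the blog database, and a gzipped tar of the document root."""
    stamp = stamp or time.strftime("%Y%m%d-%H%M%S")
    # mysqldump writes the whole database as SQL; restoring is just replaying it
    dump = ["mysqldump", "-u", db_user, "-p", db_name,
            f"--result-file=wp-{stamp}.sql"]
    # the tarball picks up themes, plugins, uploads and wp-config in one go
    archive = ["tar", "czf", f"wp-{stamp}.tar.gz", docroot]
    return dump, archive

dump, archive = backup_commands("blog", "jon", "/var/www/blog", stamp="20060221")
```

Running them is then just a matter of passing each list to `subprocess.run(..., check=True)`; restoring is replaying the SQL and untarring the archive.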
As I was looking at the Wordpress site the other day, I noticed that Wordpress was now hosting accounts - for free, I think. Strange - on the basis that I would have paid, I would have charged. There's a business model (if anyone asks how to make money out of free online software, hosting has to be the first answer - but not if everyone gives it away). As a side note, it's "powered by Automattic", a spin-off company from Wordpress that claims on its front page, "Blogging is too hard." Insightful... but anyway. There were a couple of Wordpress hosts at the time I set up this site, but I didn't know either, and doing it myself seemed to be the best option. Now I'm not so sure - after all, if it's good enough for Scoble it's good enough for me (how much influence does this guy have?)
There are still some things in blogging that currently pass me by - tagging, for example, trackbacks and so on. I hope that Wordpress 2.something will offer a suitable base for my continued education. I also notice - this is as much a bookmark as anything - that Yahoo seems to be rapidly hoovering up some of the better blog-related companies, Flickr and the like, and has forged partnerships with both Wordpress and Movable Type. To inner-circle bloggers this will be nothing new, but to mere initiates like me it's very interesting, particularly given that Google still hasn't really got its own blogging act together.
In the meantime, anyone know of any better RSS readers than Pluck for IE (which seems to have wheedled its way back onto one of my machines) and Sage for Firefox? I've tried a few and they're all either too primitive or too buggy for my liking.
Getting engaged - to a blog
2006-02-22
Everyone, as Tom Cruise once said about his now ex-wife Nicole Kidman, "is absolutely right." Apparently, saying these words is the best way to prevent an argument, or at least resolve one. And while it doesn't seem to have worked for our Tom (though perhaps it worked better for Nicole, whose own thought was, "finally I can start wearing high heels again,") I have to say that when it comes to what's going on in the collaborative technology space right now, he has a point.
What exactly is going on? Nobody's sure, but one thing is certain - there are a lot of people talking about it. "Markets are about conversations," announced the Cluetrain manifesto, and several other pundits before then (not least my old boss, Robin Bloor, when he wrote about the electronic silk route in 1998 and before). With the advent of the blog, of course, there appear to be more people talking about these things than ever. In May last year, Jonathan Schwartz wrote about what he termed the Participation Age on his own blog, and there can be no greater demonstration of this concept than the fact that it's all being talked about on the blogs of others. The blog is a cultural phenomenon, for sure. But is it the end, or just the beginning?
There are still plenty of people who have never even heard of blogs. "You should have a blog," I said to my ex-colleague and business dynamics guru Roger Davies. "A blog?" he said, "What's that?"
Now, I'm not dragging my good friend Roger through the slurry of the blogosphere in order to show him up as some kind of luddite; exactly the opposite. The fact is, there's a massive number of supposedly highly connected people out there who have never heard of blogs. Another old colleague, Clive Longbottom, makes the following point in a recent column for Silicon.com:
"Blogging is on the increase - at least the number of people who write blogs is growing. Our research shows that blog readership is still miniscule, and is moving more towards community-of-interest style usage."
If we consider Tom's first law - that everyone is right - we arrive at a conundrum. The bloggers are right, that there is something very exciting going on, and yet the non-blog-reading community is also right. Jonathan Schwartz and Clive Longbottom are equally correct to say we are in a new age of participation, and yet the majority are not participating.
To square this circle, we need remember only that the blog is a symptom, not a cause. The fundamental principle behind participation is the act of engagement, of joining in. Blogging for like-minded types is no different to SMS text messaging for teenagers - each went, or is going, through a similar growth curve, and blogging will no doubt find its level. People can argue about signal-to-noise ratios and claim to be the first to notice the clutter, but what they fail to remember is that much conversation serves no purpose whatsoever - we Brits will quite happily talk about the weather for hours, for example. Markets are indeed about conversations, but conversations are primarily about relationships and how these can be nurtured, sometimes over a period of years.
This principle of joining in is of primary importance. Blogging, like SMS, provided a new mechanism that was appropriate for a certain type of conversation. The barriers to joining were lowered to the point where a large enough number of people could engage, and - lo and behold - they did, and are. Call it an application of Metcalfe's Law. In other words, blogging isn't an answer, it's a mechanism. So are all these other "declarative living" tools that are springing up, for sharing preferences, photos, books and so on. They're mechanisms, each appropriate to its audience.
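As a back-of-the-envelope aside on the Metcalfe's Law point (my illustration, not the post's): the number of possible pairwise links among n participants is n(n-1)/2, so the potential value of joining in grows roughly with the square of the number of people who do - which is why lowered barriers matter so much.

```python
def potential_links(n: int) -> int:
    """Number of distinct pairwise connections in a network of n members."""
    return n * (n - 1) // 2

# Each tenfold growth in membership gives roughly a hundredfold
# growth in possible conversations.
for members in (10, 100, 1000):
    print(members, potential_links(members))
```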
Joining in is nothing by itself, however - unless all we want to do is talk about the weather. People join conversations for a purpose, in business as in leisure. Sometimes that purpose (one suspects, for the vast majority of teenage texts) may be to support the growth of the relationships concerned, be they one-to-one or within the group. Ultimately there has to be a higher purpose than participation itself. I suspect we shall see a continued evolution of blogs, from the individual, observational type of blog to community-oriented, news-based entities. However, blogs will never cut it in their current form for anything other than providing an online mouthpiece; for multi-user, project-oriented interaction to take place on the same scale as blogging, other mechanisms will be required, and these are still to be developed.
How do I know this? Simple - because it's happened before, and because of the same premise that people use when talking about technologies that pre-date blogs by decades. "Blogs are nothing new," they say, citing newsgroups for example, or even uucp-based forums, both of which provided an appropriate transport for the capability we have with blogging today. In the same way that HyperCard pre-dated the Web but didn't quite reach global phenomenon status, cyberspace is littered with failed projects for collaborative working. "Failure" is a harsh term - all are successful in their own way, but none has achieved that elusive "de facto" status.
I believe that both the blog and SMS are like prophets of old, forecasting greater things to come. We are still waiting for the real hero of the piece - the globally agreed standard for collaborative working. When we have this, then can the age of participation truly begin.
Are there any contenders? Undoubtedly - but therein lies another post.
Thought for the day...
2006-02-24
Never trust anything to humans.
Ego Sum
2006-02-24
Great little post on the MWD blog, about egosystem vs ecosystem :-)
I'm still getting the hang of what should go here and what should go there - basically, if it has a personal spin it goes here, and corporate stuff goes there. I would imagine James, Stephen and Cote would argue that blogs should be individual, while Dale and Helen would say there's more of a role for the multi-user blog - I think there should be a place for both. Tom's first law, of course.
I really need to sort out my categories.
Thought for the day
2006-02-25
It's not the critic who counts...
Mobility = Ubiquity
2006-02-26
Here's a concept that came up in conversation with Cisco a year or so ago, and which popped back into my consciousness due to the closing remarks of this week's always-a-winner Round-Up:
"And finally - a challenge for you. Dashing about in central London, trying to find somewhere from which to file this newsletter, the Round-Up managed to walk for a full 15 minutes without finding a Starbucks. Can you beat that?"
Well, in all honesty, yes I can - finding WiFi in central London has been a nightmare, that is, until I found that I could sit next to the window in the Oxford Street Borders cafe and hook into a (legal) free signal. Even when you can find a kosher link, there's no guarantee you'll be able to connect; if you get disconnected, it'll probably think you're still logged in. Etc, etc.
But to the point. At the moment let's face it, we feel hooked into the cyberstream only when we're in front of a computer. When we're out and about, we're reduced to looking at the Web and our emails through a cloudy porthole, with voice access to the few people we've actually remembered to load onto our SIMs. A level of mobility we have, perhaps - in that we can be found wherever we are, and some form of communication can be made. True mobility however, the kind that service providers and app vendors (think: mobile video, or mobile CRM) are trying to push, will not really be possible until there is some critical level of access, for a critical number of people, in a critical number of ways.
Until there is ubiquity of connections, connectedness can only ever be sporadic. Connectedness - the feeling of joining in, the ability to link to thousands of applications and services in the way that we want - is still a distant dream for the mobile user (though realistically, they're probably dreaming of other things right now). Laptop users can experience a level of mobility as they bounce from coffee shop to coffee shop, but even this is a long way from what we'd need - you have to stop, sit down, switch on and connect before anything can happen. It's hardly seamless.
Now, this may come across as a rant, but it isn't meant to be. The point is that when ubiquity is cracked, then we really can get on with mobility. The City of London experiment is one to watch, as is Google's San Francisco plan. I particularly like the Google plan, with its two-tier service (free at lower speeds) - it fits with the "Wireless Pavement" idea I've been advocating for a while. The wireless pavement (sidewalk, guys) is the idea that it costs to lay a pavement, but it is delivered at a municipal level, for free, because we all recognise the spin-off benefits. Nor do you pay to go into a mall, though these are enormously expensive to run. Of course you do pay, but not directly - you pay for a shirt, or a coffee, or a lampstand, and a cut from that goes towards the cost of the building, pavement, whatever! I must blog all that - perhaps I just did!
We digress. The lack of ubiquitous access is currently a bottleneck on progress, caused I suspect largely by incumbent service providers not wanting to release their traditional grip on the cost of access. They're like the toll keepers of old, forcing everyone to travel down their own, pot-holed routes. The network is being forced wide open as we speak, and I fully expect this to lead to the nirvana of ubiquitous, seamless access, which in turn will lead to a whole raft of new innovation, new ways of connecting, new ways of doing things. I believe this is where connectedness will get really interesting, as the online and offline worlds, the voice and data networks merge.
Why should I get all excited about this? Because, for a start, it is unexplored territory. I don't know what the apps and services that rock people's boats will be, or in what combination - all that chatter about location-sensitive services was largely driven as a revenue opportunity for the SPs. There may be something in it, but I'm not convinced it's the "killer app", or indeed whether there will be a killer app at all (though it would have been useful to know where a petrol station was, on more than one occasion!) I see "microwave oven" innovation - unique combinations of functionality that give whole new ways of doing things. The mashup applied to the PDA, perhaps. The opportunities are global - James has ranted on more than one occasion about the lack of innovation in Europe, but let's remember that both the mobile and the open source revolutions started in Scandinavia.
Here's one thought on how things might go, applied to the lowly address book. Add Plaxo-like updating, RSS-like feeds, Google Maps, GPS and a reasonably powered PDA, and the address book becomes a dynamic hub, changing in real time. The possibilities are endless - suddenly you can find out about all the parties you haven't been invited to, for example... perhaps we should leave that one there!
What’s an IT analyst, anyway?
2006-02-27
I'm conscious of a couple of things as I set out to write this, what I hope will be a short entry. First - there is an ongoing debate about the nature of the analyst business, and the credibility of the analysts themselves. Second - many of the people who come to this site have no clue what I do as a day job. I hope to hit both birds with this single stone.
It all starts with buying and selling of Information Technology (IT). There are three kinds of product in IT, which are:
- commodity items, where a product has been simplified (I use the term guardedly - I don't mean it is now simple) and packaged to such an extent that it can be sold off the shelf. Examples include PDAs, office software packages, hard disks and so on.
- solution components, where a product can still be packaged, but it doesn't serve much purpose on its own. Examples are databases, blade servers, storage arrays and so on.
- IT solutions, where often-complex combinations of products are packaged and configured to meet often-complex demands. Examples include some of the enterprise applications, storage area networks, enterprise management software and so on.
End-user organisations of all sizes frequently need help, both in deciding what to buy, and what impact their procurement choices may have on their business. That's where IT analysts fit in. Their primary role is to keep tabs on what technologies are available, what they can be used for and what constraints they impose. Based on this understanding, IT analysts can also offer advice to vendors - the IBMs and Microsofts of this world, to help determine what technologies are more appropriate for which audiences, and indeed where they should be investing their research, development and marketing dollars.
Within this blanket description, there can be a number of sub-types of analysts. End user companies do not require analyst reports on commodity products, for example. For commodity products, analysis is more vendor focused and tends to be oriented more around market research - which countries or regions are buying which products, from whom. Such information can be used to drive advertising and sales campaigns, but the "understanding" work is already done.
For solution components, analysis is done proportionately for both end users and for vendors. Very few organisations need to know what a database is, for example, but a guide to who sells what kinds of databases, and what features are present, is useful to both sides. End users can use such information to produce a shortlist of vendors, and vendors can use it to keep an eye on the competition - or indeed to promote their product as "best in class".
For IT solutions, however, analysis (whether paid for by vendor or end user) tends to be more focused on end-user businesses. White papers can explain the ramifications of technology types, or help organisations understand the business problems to be solved - this kind of report is often supported by focused market research, for example to help understand which product features give the most benefit, or which issues are the most important to solve. Longer reports can be produced, explaining the solution area and providing information about which vendors provide products to support the solution concerned.
There are a number of other categories. Some analyst firms choose to focus on specific areas, such as security or data management; others consider only their local regions. Some companies focus on service providers, or consultancy firms, or particular industries. Some are actually end-user consultancies that perform some analysis, and advise vendors almost as a spin-off benefit of their end-user knowledge. Still others are market research organisations that specialise in IT, and so on.
When done right, IT analysis can be enormously beneficial to both sides. However, there are some potentially major hurdles to be overcome. Not least, the analyst can play a highly influential role in some major buying decisions - either in terms of quantity (at the commodity end of the scale) or deal size (at the IT solution end of the scale). Influence can be direct, indirect, or obscure and untraceable - if I were a strong advocate of hosted applications, for example, I might well attract the attention of hosted application vendors to promote the advantages of their products over in-house applications.
Quite rightly, then, analysts should be subjected to considerable scrutiny. Not least, there is a requirement for transparency - any formal analyst recommendation should only be served with equal helpings of context ("who does this apply to, under what circumstances") and justification. This goes for the simplest of quotes, up to the most detailed of reports - even if the context and justification are not included for space reasons, they must be in some way available.
A second issue comes from the tendency of analysts to be "market makers". The analyst's job is to articulate the strengths and weaknesses of what are sometimes very new technology areas. One way to do this is to define the area concerned, generally by naming it in some way: in doing so, advertently or otherwise, the analyst firm creates a market for that product. This was a reasonable approach in the past, when the majority of software applications were custom coded and most IT purchases were of solution components. These days, however, we are on a very different playing field - IT refuses to be pinned down in the same way. SOA, for example, offers a terribly important set of principles that any IT shop should apply, but it does not map onto any particular set of products.
Trouble is, IT marketing in some ways depends on this need to classify products, and some analyst firms fall into the same trap - areas such as Identity Management, for example, are impossible to characterise in this way. Some analyst firms still insist on trying to apply the old models to these new technology areas, but this just plays into the hands of vendor marketing departments, who can then jump on board the bandwagon. However, given that most products can only partially fit the definitions, the main result is confusion on all sides. Disparagingly, one could refer to this as bandwagoneering - only in most cases, the wheels fall off the wagon almost the moment it is set off down the hill.
Finally, the vendor's requirement for analysts is very different from that of the end users. Ultimately vendors are answerable to their shareholders, and they exist primarily to make money out of IT sales. End users require analysts to help them get the best value out of IT, to make their businesses run better. In the fast-moving game of IT, sometimes the end-user requirement for business value has been subordinated to the vendor requirement to make money. I'm not saying that analysts have been directly complicit in this, though of course this is what market research is for. What I would say is that sometimes, the analyst industry as a whole hasn't always been as vocal as it could have been to protect against it.
I believe it is the combination of these issues that has led to the credibility, relevance and independence of analysts being called into question. I agree that change is necessary - but I wouldn't point the finger at any particular firm. Instead I would argue that the IT industry as a whole needs to change its marketing tactics, that the analyst industry needs some new models to help itself help its end-user customers, and finally that these end-user customers need to be more demanding of their advisors. All of these things are happening, perhaps a little too slowly for my liking. I don't believe the final answer lies wholly in blogs, or in open source research, though these are highly valid options, and indeed may be the necessary catalysts for change. Blogging is rocking the foundations of analysis - in fact, in some cases there can be a fine line between the two.
Meanwhile, there is some very good work being done by analysts across the globe. Perhaps now is the time to get back to basics, to assess what services IT analysts should provide, to whom, and how. The jury's out and the only thing I know for certain is that it won't be me that makes the final decision. I watch, and analyse, with interest.
March 2006
This section contains posts from March 2006.
Thought for the day
2006-03-01
Are bloggers the new masons?
The laws of connectedness
2006-03-01
OK, here goes. To try to define what's going on in this brave, new, collaborative world, I've drawn up a series of laws. These will evolve - feedback welcome. That's the point - see law 16!
Laws of connection
1. Connectedness is about joining in
2. Joining in happens automatically when the barriers to joining are low enough
3. Connections form between individuals, not organisations
4. Connections link devices, services and people
5. Connections are two-way
6. The value of connections increases based on the number of touch points
7. Connection is a means to an end: the end is participation
Laws of participation
8. Communities form as a natural consequence of connectedness
9. Communities define their own mechanisms, language and etiquette
10. Individuals occupy roles within communities
11. Participation can be active or passive, hub or spoke
12. Declaration is a pre-requisite to active participation
13. Participation is a means to an end: the end is collaboration
Laws of collaboration
14. Collaboration is the achievement of goals by a connected community
15. Goals benefit individual participants, not the community
16. Active feedback is essential to achieving goals
17. Success is proportionate to the number of participants
18. Open collaboration is self-regulating
These laws are not specific to any technology or group. Example themes that have driven these laws are: text messaging, Make Poverty History, Blogging, Marillion, BNI, LinkedIn, Cluetrain, peer to peer, mashups, Warcraft, ecademy, eBay, street teams, open source, Skype, Flickr, wisdom of crowds, SOA, agile development, Usenet, Sharepoint.
Thought for the day
2006-03-02
Snow brings out the child in everyone.
Re: Innovation in the UK and Europe
2006-03-03
For some reason I can't post this comment on James's blog, so I've put it here. If anyone spots any offensive words let me know!
So, re: this post -
I had to read this more than once - I started disagreeing, then I realised I did agree. You're spot on that we seem unable to turn theory into cash.
Despite the absence of cash-earning, I think we sort of expect companies to form in the US, so we leave them to it - there's plenty of innovation in the European software industry. I pointed you to Exoftware, an Irish company - there's a great page of links on their Web site, and they are leading the Agile Alliance in Europe, worth a browse. Similarly I used to participate in an OO conference (OT) that was totally freeform - great stuff. Much was theorising, which was a downside, but on the positive side, some of software's best thinkers are European (Jacobsen, Fowler). You should also take a closer look at the patterns community - very active, very communicative, very SOA and very European!
Fascinatingly to me - and I'm trying to work out why - this very active community is not too heavily into blogging. I think it might be that the communities formed before blogging, and therefore they're sticking to older mechanisms for a variety of reasons. Blogging favours individuals; perhaps nobody's in favour of sticking their heads above the parapets (apart from Grady Booch of course, but that's an IBM thing!) There are country differences as well - I did a presentation in Germany yesterday, and when I asked the audience, they all said they read blogs. In the UK financial industry version today in London, nobody did. Very interesting. Perhaps, also, you're not reading the blogs in French, German and Spanish, and are therefore not hooked into what's going on at a local level?
Anyway, all good food for thought.
To Ecademy or not to Ecademy?
2006-03-03
That is the question - and I just cannot decide. What it boils down to is that, to join this illustrious band of networkers, I have to pay money. Now I'm very happy to cough up when necessary for a useful service, and I'm sure Ecademy is exactly that, but that's not the point. Right at the moment I can barely keep up with the number of "free" social networking facilities I'm using, and I believe these are only going to increase in both reach and functionality over the coming months and years. With this in mind, are there really any additional benefits I could get from a paid service, and - most importantly - would I be able to devote sufficient attention to get my money's worth? In all honesty I just don't know. When I was talking to Plaxo last year I did raise this question with them - how exactly are you going to make money? I wasn't convinced by what they said then, and neither am I absolutely sure about what Ecademy can offer me now.
I'll keep on thinking about it - but for the time being I'm afraid I'm going to have to go for the "when in doubt, do nowt" school of decision making.
My-name-is-J-o-n
2006-03-03
So, I understand that the MSN search engine is now "powering" Microsoft.com. This is not good news, for one simple reason. When I put in my own name, it comes up with "Were you looking for joan collins?" Well, no, I wasn't. I've had enough of that kind of victimisation and abuse in my lifetime. Of course I immediately put in a complaint.
After all, what would Uncle Phil say? :-)
On the plus side - at least I was the first "Jon Collins" - and the second, third, fourth and fifth. I'm not going to give in to their thinly veiled flattery, however.
Register for UK transplant
2006-03-06
I just got an email from my sister in law, which read:
> I always meant to, but never got round to properly registering on the UK register to donate my organs for transplantation. But I happened to see a poster with the website address on, and now I have done it - hurrah! It's so easy on-line and I would like to give you the address, in case you would like to do the same. The website makes for interesting reading.
>
> Now we're in Lent it seems to me to be a good moment to do something that in future may help the lives of 10-15 people.
She was right - it took no time at all.
Is this what radio's all about?
2006-03-07
Just a question, not a statement. The suggestion from this - the position of the UK's Radio One in the scheme of things musical - is that concentrating on playlists and the lowest common denominator gets results. While people might question the tactics, it is difficult to argue with the outcome.
Today I shall mostly be...
2006-03-07
... sitting on an aeroplane in the rain, waiting to get a slot on the runway, so I can go to Nice. Humpf. At least they've let us use our gadgets - I've run out of sudokus.
Video is its own killer app
2006-03-07
I've just installed the latest edition of Skype - which incorporates video. Despite spending (aka wasting) a few hours trying to work out why my video camera didn't work - before remembering I had to reset the laptop motherboard (thanks Sony) - the whole thing was remarkably seamless. I then set it up on one of the home desktops, and my daughter Sophie and I spent a happy, if random half hour testing the thing. Think - Dad wandering around house with laptop, Sophie zooming in on her nostrils, you get the picture.
But it works, it's simple and straightforward, and it will be an invaluable tool for those home calls when I'm out of the country. Already I felt more connected, in a nice way. Of course it's been possible for years - but with Skype's critical mass, USB and broadband, the barriers to entry have lowered. Expect it soon on a console near you.
Thought for the day...
2006-03-09
Denial is the root of all evil
Playing with words
2006-03-09
I was waiting for a call this afternoon, and thinking about all this analyst credibility stuff that's been flying around the Web. I thought I'd set out my stall and say what I believed were the key facets of IT analysis as I would like to see it done. I started with transparency and clear explanation of results, and it sort of grew from there - before I knew it I'd made a word or two. Here they are - you can consider it my analyst manifesto 1.0, and I shall do my best to stick to it.
FREEDOM of research
RELEVANCE of results
EFFECTIVENESS of advice
EDUCATION, not evangelism
TRANSPARENCY of methodology
RESPONSIVENESS to requests
ATTRIBUTION of sources
DECLARATION of interests
EXPLANATION of conclusions
When did blogs tip?
2006-03-09
I'm currently re-reading The Tipping Point by Malcolm Gladwell, largely because I could no longer remember what "Maven" meant and I thought I'd better check. Also, Mr Gladwell's started his own blog - I'll dig out the link when I have a moment.
Right now, I have a question - when did blogs tip exactly, and what caused it? I'd put it at about two years ago, but it's a guess. One thing I do notice is that the law of the few - the tipping point theory that explains how trends are transmitted - relates both to blogs and to bloggers, which are both Connectors and Mavens. It's an important point, and one which merits more thought. For now, consider this a bookmark!
Final note - Connectors link people to people; Mavens know lots about things, and want to tell others. Sound familiar?
P.S. Also just bought - the Wisdom of Crowds - that's next!
Hors l’anglophonie
2006-03-10
Following up on James's project to look for euro-innovation, I thought I'd start hooking into blogs that aren't written in English. Here's a first - that of Tristan Nitot, founder of Mozilla Europe. I'll start from here and see where it goes.
I've also got in touch with some old colleagues at Frmug... seems like aeons ago!
Update to Marillion book
2006-03-11
Update: I've now submitted the fixes - thanks for all your help!
We're reprinting some copies of Separated Out - and taking the opportunity to fix a few errors in the text. If anyone knows of anything they'd like to see fixed that's not already been listed here, please could they add a comment to this post. Thanks!
P.S. If anyone was having problems accessing the Separated Out microsite, these should now be fixed - just hit refresh on your browser. Thanks again!
Thought for the day...
2006-03-27
The blind can see things others can't.
Guys in the bar - New Yorkers and analysts
2006-03-30
It's always dangerous to do too much of the same thing at once. Such as, for example, reading the current 'must-read' book list of The Tipping Point, The Wisdom of Crowds, Freakonomics and now Blink, back to back. Doing so has led me to a number of potentially invalid conclusions, which are interesting enough to write down anyway. Not least, all the authors are based in New York. There's a 'so what' there, until you take into account that all the books are written (at least in part) by journalists or columnists, two of whom work for the New Yorker and one (the second Stephen) who has written for the New York Times. Now, good as these books are each in their own way, I can't help wondering. Malcolm Gladwell (who was first with his Tipping) offers a glowing endorsement for his stable mate's Wisdom, and another for the Freaks across the street. This latter is perhaps the most surprising, given that both The Tipping Point and Freakonomics write about exactly the same incident - the falling crime rate in - ahem - New York - but give totally different reasons for its causes. In Gladwell's case it is down to the broken window theory - fix the window and people will know that such things aren't tolerated, and therefore will be put off bigger crimes. Levitt and his partner, however, set this theory as part of the process, not the cause of the turnaround - which is put down to changes in abortion laws 15 years before. Despite these contradictory findings, the book is seen as thoroughly endorsable.
Don't get me wrong, I think all of these books are valid, but I can't help thinking of the foursome (or at least the three journalists) as guys in a bar, waxing lyrical and theorising about the events of the day. They sometimes agree, and sometimes disagree. One writes a book, and gets it published; so does his friend, and then so do the couple of guys that come in from time to time. Of course these hale chaps are not alone in writing books of this type - a cursory glance around the shelves in Borders reveals that there are plenty more where these came from - nor would I like to suggest that the authors acted in any way untoward. It does seem a bit weird, however, that the authors of the books currently at the top of the pile should all come from the same place. Cue a shrug. Perhaps it's down to a shared writing style (the books are similar in this) that is piquing the interest of the reading public, or an example of the tipping point phenomenon itself. Who knows - but it is the collective endorsement (as measured by sales) that validates each set of theories and their authors.
Something else has also struck me quite hard about these books - they seem to be written all about me, or at least me as an analyst. Now then, now then, this isn't a cue for some over-indulgent self-absorption by a narcissistic blogger. Or perhaps it is - but let me explain what I mean. The books actually all seem to be written about the IT analyst business, and how it serves the needs of the IT industry as a whole. For example: The Tipping Point describes the law of the few, and talks about connectors (people who know lots of people), mavens (saddo geek types that write articles such as this one, delving into the detail) and salespeople (those who present information so as to inspire others to act). These are all analyst traits. Freakonomics is about looking for the real drivers behind the trends, delving deep into the data to help the bigger understanding of what's going on. That's an analyst job if ever I heard one. Blink concerns pattern-matching skills, and the ability to reach conclusions with only a handful of facts: an essential trait for any public-facing analyst. Finally - and here's the humdinger - we have The Wisdom of Crowds. This is less about analysts as individuals, as it recognises that everyone can be, and often is, wrong. The clincher is when an IT vendor takes a room full of analysts, pays for a hotel room and a nice meal, pitches to them and then asks for feedback. Ah, there's the rub - the collective feedback is probably as good as a vendor is ever going to get in terms of advice, which is food for thought if anyone's worried about the analysts exploiting their position by getting the free meal. It's also a cautionary tale for vendors - particularly those who are less structured about how they gauge feedback! TANSTAAFL of course, but while the cost may be low for each individual analyst, the combined value is maximised for the vendor.
Indeed, when asking what's an analyst anyway, we can take some quite deep insights away from this. Are bloggers analysts? Perhaps. What about consultants, internal or external? Possibly. Marketeers? Maybe. From the analyst's perspective, the real trick comes from deciding to treat one's capabilities as a maven, connector and salesperson as a full-time job - I would argue that bloggers, marketeers and consultants can become analysts, if they choose to step up to the plate, to make themselves available for advice and to publish what they find. From then on, it's up to their customers to endorse their role, by accepting them as part of the crowd that delivers the combined wisdom. Without this external validation, a bit like Malcolm, James, Stephen and Steven, we are all just guys in the bar.
P.S. It's probably worth adding that this post was based on a discussion, over a beer, with one Larry Velez of Forrester. Hi Larry, QED!
April 2006
This section contains posts from April 2006.
Towards free wireless?
2006-04-04
Are we heading towards the wireless pavement, or are examples of free wireless just exceptions rather than the rule? Peter Cochrane would like to think the former, and I hope so too - but it probably comes down to how wireless is regulated. I shall be keeping an eye on services such as Fon.com and perhaps Myzones.com (which seems to have evolved since it was written about above).
Microsoft Vista and new horizons
2006-04-08
There's some funny goings-on inside computers and other devices at the moment, many of which have some kind of impact on the consumer space. Here's a selection: Intel is releasing advanced management technologies - I don't know the full story, but these enable a computer to be interrogated via the network without even booting up, for example to aid diagnostics or firmware modifications. Motherboards are coming out that can power down unused elements, saving electricity and reducing heat output. USB sticks can store and run applications, and even boot their own environments. Laptops are being released that can boot either as a computer or as a DVD player - I understand the latter boots with a stripped-down Linux kernel. All kinds of mobile devices are coming to market, some more useful than others (MP3 camera, anyone? Games console phone?), but they are becoming cheaper as fast as they are growing in complexity. We have set-top boxes and games consoles that are, in effect, computers, but they hide their true form behind a vastly simplified, appropriately customised interface. We have mobile phones and MP3 players pre-loaded with MySpace and XM/Napster services, each offering what is, to all intents and purposes, a browser interface.
Not so very long ago a computer was a computer: it had a processor in the middle of the motherboard and some memory on the side, it ran an operating system and supported a variety of applications. From this general-purpose model, things are becoming awfully specialised - these days, the device and the application are often bundled. The iPod, for example, is a computer with a hard drive, a processor and memory. I assume it runs its own chip-level OS and application, straight from, and on top of, the hardware.
In parallel with this, there is an evolution in how consumer computers are being used, particularly by kids. When my children log onto the computer they tend to use it for email, chat and web access. As we have discussed before, the kids of today see the Internet as a place - Myspace is a community, a joint (though they wouldn't use that word) to hang out in. Anyone with a number of kids will have experienced the fights to get on the computer - not to do anything 'productive' but to see which of their friends is on. We can see Mark Thompson's latest pronouncements about the future of the Beeb (think: teenagers, content, communities) as both corroboration and catalyst of this trend. What's interesting in all of this is that technology is becoming a lottery. Nobody's too bothered about who's responsible for a given device or application: what matters most is, is it cool, and do my friends have one? The Myspace device (coming soon to Europe) and the iPod are competitor products, but nobody really cares who was responsible for what happens inside the box.
What's Windows Vista got to do with all that? Perhaps nothing, and that's just the point. Microsoft's hold on its incumbent position has always been based on two premises - first, that the operating system is a necessary basis for general-purpose computing, and second, that people want to standardise on the same platform as everybody else. Now, however, the 'thing' is migrating to the application layer - for Myspace and XM it is the portal, for the iPod it is the interface, the device. If the goal is to give the masses what they want, what's to stop running a Myspace environment directly on top of the silicon? Could we envisage a device that offers integrated videoblogging and email, with nary a Windows logo in sight? It doesn't take a rocket scientist to work out that a Google-branded device (probably "powered by" Sun, you heard it here first) might do rather well, not yet but at some point in the future. In a sociological twist on Metcalfe's law, it's less and less about the technology, and more and more about the shared experience: the success of any such device will be down to whether a critical mass of like-minded types jump on board.
I don't know the final answer to this, but I do know that there are going to be even more choices in the future than there are now. As consumers form into communities, each community will choose the most appropriate mechanism for the time, and after a while it will move on. In such a lottery there will be many participants and few winners. Of course Microsoft has its own games console (the 360), which is doing rather well; however, it is unlikely that the company can retain its present level of penetration in the consumer space on games console sales alone. Equally, Microsoft has all kinds of digital home initiatives, but for device manufacturers there is little incentive to pay a stipend to Seattle. Windows Embedded means locking oneself in and reducing future flexibility, and indeed, companies like News International (owner of Myspace) are increasingly in competition with Microsoft/MSN. What possible incentive could there be to shackle themselves to the enemy?
Don't get me wrong, I don't believe Microsoft is resting on its laurels. Neither is it losing on all fronts - the messenger preference of local kids appears to be MSN, and for email, Hotmail. However, new fronts are opening all the time. And while Microsoft may have me-too offerings in the shape of Microsoft Search and Windows Live, it doesn't take a rocket scientist to work out that Microsoft's core faith lies in its operating system. Microsoft is Windows, and without it the company becomes no different from any other software and media company.
Over in the business world, there's plenty of mileage left in the OS (though this is fragmenting as well - think hypervisor). In the consumer computer market, however, the days of Microsoft's dominance may well be coming to an end. If, indeed, the market continues to exist at all.
Hat tip to Cote, without whose comment I might have got away without writing this!
The last yard
2006-04-11
There's been various writings about the Internet as a place - for better or for worse. Nothing new, yada yada yada. It may be true that the kids of today are growing up in a world where chat is preferable to phone, and MySpace discussion boards are a valid hang-out. As my son would say, "fine, whatever!"
To me however, there's a fatal flaw in this "place" thing. First, at the moment the dream of virtuality is encased in a couple of kilos of tin with a 17" screen. Of course, I could carry this around the house with me but that's not the point - I generally don't, and nor does anyone else. This is the last yard - between the screen and my eyeballs. If I want to "jack in", the last yard requires me to go perch at a desk somewhere, much as I'm doing right now. Indeed, let's work through my last ten minutes:
- in the kitchen, I am thinking over something discussed last Friday. I decide it would be worth writing about.
- I head upstairs to my computer and forget what it was I was going to do
- back downstairs, I make myself a coffee and it comes back to me
- I decide it's not worth the effort, so I sit and read the paper
- I return upstairs to check my email
- when at my computer, I think, "oh sod it, might as well post"
It's hardly communication at the speed of light, is it? My own foibles aside, I don't believe we are even one degree out of 360 towards really integrating the Internet with daily life. The whole process above is neither natural nor practical, nor particularly inspiring towards seeing computers, and even our darling Internet, as anything more than a distant haven that we can visit from time to time. This is the Internet as a place, but it's a place we have to go, rather than the place we are in - adequate for now perhaps, but not even scratching the surface of what should be possible.
Voice on PDA: close, but no cigar
2006-04-12
In the Wim Wenders film "Bis ans Ende der Welt" (Until the End of the World), a man talks into his handheld computer and it notes what he says, even as he goes off topic reacting to the events around him. It's something I've wanted to be able to do ever since the film came out in 1991 (that's do the voice thing, not go off topic - I'm quite capable of that already). With this in mind, I've just done my once-yearly trawl around the world wide morass to see whether anybody had yet cracked the speech-to-text idea on a "standard" PDA. A few years ago they claimed that processors were too slow, but as I now have a Dell Axim with a faster processor than my very own Sony Voicepad, I remain forever hopeful.
Perhaps it's still a little premature, however. There seem to be a number of vendors selling command-and-control packages, notably Microsoft, Speereo and HandHeldSpeech, but that's really not what I'm looking for. VoiceSignal comes closest, with a voice-to-SMS program called VoiceMode; I'm not sure it's out yet, or if it is, it's only on specific devices. One possibility I was toying with was to install Linux on the Axim, then ViaVoice for Linux, and see how that ran. It has been reported that IBM no longer supports ViaVoice, but it appears that it still does within its developer kit, so maybe that's a possibility. I downloaded the SDK but I didn't meet the pre-requisite requirements (to buy WebSphere!).
Perhaps I should wait for Oqo pricing to fall, or for Origami to hit the mainstream and become pocketable (have you seen the size of those things?). For now, however, mainstream use of handheld voice remains out of reach. Pity.
Update: should have mentioned Research Lab, who sell a speech recognition SDK for Windows CE and its derivatives.
What goes around... Freecycle
2006-04-13
My mum just sent me a link to Freecycle, which (once again) looks like a thriving global network I'd never heard of. You can sign up with your local chapter and offer things you don't want anymore to like-minded individuals. Cracking idea - already my eyes are scanning the office for junk!
Here's the blurb from my local group:
This Freecycle group matches people who have things they need to get rid of with people who can use them. Our goal is to keep usable items out of the landfill. By using what we already have on this earth, we reduce consumerism, manufacture fewer goods, and lessen the impact on the earth. Another benefit of using Freecycle is that it encourages us to get rid of junk that we no longer need and promote community involvement in the process. Free your inner pack rat!
Latest tool from last.fm
2006-04-13
Of course, all you media junkies will have known about it for ages, but I didn't, so there. There's a feature of last.fm that allows you to have your personal charts displayed on your web site. How cool is that? I do wonder whether there's a Heisenberg factor in last.fm, i.e. that listening preferences are influenced by their visibility. It's possible, but perhaps temporary.
Meanwhile, I've finally set myself up on Myspace. Can't help thinking we need some of Neil Macehiter's federated identity to tie all these virtual worlds together!
P.S. Speaking of virtual worlds, here's a virtual community that talks about them...
PR 2.0
2006-04-13
Once every few months I meet with James Cooper of AR/PR/MR firm Ascendant, just to chew the fat and see what's happening on the other side of the fence. Inevitably this time, the conversation turned to blogging, in particular "the press release is dead" writings of Tom Foremski. I disagree, for what it's worth - but I do understand people like Tom need to present one side of the story. After all, who's going to listen to someone that just says, "here's a new tool, I suppose it'll be useful for some things..."? Anyway, we covered word-of-mouth marketing, the influencer community in general, what makes a good European analyst in this context and so on. It was an interesting chat, thanks James, and thanks for your additional remarks by email - I copy them here, and no doubt we'll be looking back on them in a few months:
Hi Jon, I think I found the article you were talking about yesterday. A few initial thoughts:
- It is still a heavily US debate (doubtless it will morph here as well, but the market is smaller and more subjective)
- I think commercial interests will ensure blogging doesn't disrupt PR to a negative level but becomes just another medium as we discussed
- The ‘trust’ factor has to be a big issue here
- I think in Europe, for mainstream communications/business, it will just be an extension of the chatroom and useful; a few key blogs will emerge that simply become another CW or Silicon.com; the authors will be the equivalent of today’s journalists, or editorial input will be achieved by responding to messages directly, reactively or proactively
Having said all that I am still new to it :-) James
A printing service, not a book
2006-04-18
I stumbled across Kevin Kelly's Web site this morning, and found two full texts of books he had written. This may be old news to many, but it wasn't to me. Here's what he wrote about what he'd done:
"Out of Control was one of the first books to be available in its full text online. All 230,000 words are still available for free on the web here. (This fortuitous opportunity came about because my literary agent, John Brockman, was among the first to realize the value of online rights before the publishers did, so when he negotiated my book deal in 1990, we kept the online rights.) I mention this elsewhere, but its worth repeating: You are free to print out the whole book; if that will help you read it, please do. But I can save you the hassle, time, and paper spent printing it out. Click here to get a printed version, nicely bound between color covers and mailed to your desk, all for about $16. Think of this as a printing service, not a book."
I think this is fascinating, and Kevin's absolutely right - the same idea can perhaps be applied to the music industry, which can be (let's face it) very good at packaging things in a way we want to handle and listen to them. All good food for thought.
P.S. OK I changed the associate link :-)
Online gaming as a social experiment
2006-04-18
It is all too easy to abuse one's position as an industry analyst and technology commentator. I'm not talking about backhanders from IT vendors with the unspoken promise of a mention. Rather, it is possible to steep oneself in the overflowing and delightfully warm spring waters of geekdom, claiming it somehow fits in the category of "research". I make no such excuses, therefore, for my current "Level 60" status in World of Warcraft (that's as high as you can go, folks), nor for the fact I've recently picked up copies of the galactically large Eve Online and the beautifully rendered Guild Wars. All of these fit into the category of MMORPG, or massively multiplayer online role-playing games, the title itself sealing the nerdy nature of the whole affair.
All the same, it is difficult not to be impressed by the phenomenal rise in interest in such "games" - I understand that Blizzard Entertainment (incidentally, currently looking for a PR manager) was pretty much saved from the very real financial wolves by the success of its World of Warcraft title. I use the term "game" guardedly as, while these things are undoubtedly designed for leisure time, there are as many social aspects as there is slashing and burning. Eve, for example, is a great deal about trade - players are presented with a quite complex set of tools for buying and selling virtual commodities, from ore up to space ships, and can learn to hedge and profit from the imaginary markets. All of the titles encourage some form of collaboration - tasks get easier, puzzles are easier to solve and monsters are quicker to kill - and indeed the whole thing does become a great deal more sociable.
Perhaps most fascinating is where the line between "virtual" and "real" starts to become less well drawn. There was the famous case last year of a very real murder taking place over the "theft" of a not-so-real sword and its subsequent sale on an auction site - the authorities were powerless to do anything about the "theft" as, after all, the sword did not actually exist. eBay is serving as an international currency market for virtual gold: when I told a trading expert about the practice of "farming" virtual gold and selling it in this way, his first response was, "ah, money laundering." Very recently, an online funeral was held for someone who had died in real life; unfortunately the virtual event was trashed by an opposing faction, who "slaughtered" the attendees at the funeral and videoed the result. There's a fascinating exposé of this at the most excellent Virtual Worlds web site, which also discusses such issues as men taking on female forms and then being propositioned for cyber sex.
There are many potential links that remain to be exploited. For example, online guilds often have their own forums and discussion boards, and there are in-game chat channels - it would make sense to bring both of these in line, so online and offline discussion and chat could be integrated in some way. The social networking aspects of the games are thus far under-exploited, but all it would take would be an API between (say) Eve and Myspace, and the rest would take care of itself. Perhaps this would drive standardisation across games, with the potential that trading could take place, and even that characters could move, between virtual worlds. There's a very good, if tongue-in-cheek, article about the possibilities, here.
Such ideas would not come to fruition without causing other issues. For a start, vendors of online worlds are currently very keen to ensure they keep their subscribers, and are unlikely to open the door to churn. Neither do most people want to reveal their true identities outside the gaming world - the current debate on identity management is already extending to "digital social environments" that include MMORPGs. The number of different kinds of abuse, from the aforementioned laundering to good old-fashioned bullying, will be limited only by the imagination. Frankly I have no idea what is to come, but I do know that it stands to be very interesting indeed.
Freaky
2006-04-18
I love the Internet - having just read Freakonomics, I can now make comments on the authors' blog. Specifically on a post about something I was trying to dig out anyway - the link between nutrition and behaviour. You can find their post about it here.
Testing the Qumana blog editor
2006-04-19
David Terrar's blog put me on to Qumana, a blog post editor that works with Wordpress. It looks very nice and it didn't cost anything - the business model is (I believe) advertising based. This is a test post - if it works out I think I'll probably stick with it as it can work offline, has a spell checker and adds a number of blog features I'd never use otherwise! The message below can be removed, I believe.
Update: One click and Qumana posted straight to the site! Rocking :-)
Update: Just deleted a post - don't know what that does to the feed...
Update: Not obvious how to create a new post (right click in the system tray, dummy ;-) )
Powered By Qumana
I now have a Russian email address
2006-04-20
I certainly didn't mean to do that. It all started when I was following up on my MMORPG post, to see whether there was such a thing for the Pocket PC. Yes there was, I found: a game called Sphere from a Russian developer, Nikita. Starting from the FAQ in English, I followed a "where to buy" link and found myself on several web pages in Russian. It was reasonably obvious (through a translation web site, anyway) that it was asking me to register, which I duly did - goodness knows what I've set up to be my test question, but anyway, I got there.
To my surprise, I now find myself with a "@ya.ru" email, accessible via the Web. Unfortunately I still have no idea where to get the game. Oh well, it really is a shrinking world.
Update: Again through online-translator, a wonderful tool, I have been browsing the Russian version of the Web site and found the download page. It transpires that the game itself is only available in Russian. I think I'll leave it there!!
Spam in Russian
2006-04-20
Following up from my previous post, if I used my Russian email address, would I get Russian Spam? Do English-speaking spammers send English spam to Russian addresses? If not, wouldn't it just be the perfect Spam cure, to get a non-English speaking email address? If I got Russian Spam, I wouldn't understand it anyway!
What did I miss?
del.icio.us on Wordpress, anyone?
2006-04-22
Quick question - could anyone let me know the easiest way of integrating a daily/weekly feed of del.icio.us bookmarks into Wordpress? I can't find an obvious answer in any of the obvious places.
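For what it's worth, here's the low-tech fallback I'd try in the meantime: pull the bookmark feed myself and render it as an HTML list that a sidebar could include. This is just a sketch - the feed layout assumed here is generic RSS 2.0 (`<item>` with `<title>` and `<link>`), and you'd swap the stub below for the real del.icio.us feed URL for your username.

```python
import xml.etree.ElementTree as ET

def bookmarks_to_html(rss_text):
    """Turn an RSS 2.0 feed of bookmarks into a simple HTML list.

    Works on any feed whose <item> elements carry <title> and <link>,
    which is the shape a typical bookmark feed takes.
    """
    root = ET.fromstring(rss_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        items.append('<li><a href="%s">%s</a></li>' % (link, title))
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

# Stub feed standing in for the real thing (the actual feed would be
# fetched from your del.icio.us account's RSS URL):
SAMPLE = """<rss version="2.0"><channel>
<item><title>Tipping Point</title><link>http://example.com/tp</link></item>
<item><title>Freakonomics</title><link>http://example.com/fr</link></item>
</channel></rss>"""

print(bookmarks_to_html(SAMPLE))
```

Dropping the resulting markup into a Wordpress sidebar on a daily cron would give roughly the effect I'm after, though a proper plugin would obviously be tidier.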
Incidentally, as I get more into this blogging thing, I'm more and more convinced that the future of blogging is in tagging. You heard it here first - or maybe you didn't.
The latest in anti-navel gazing technology
2006-04-26
I was starting to be very impressed with Qumana. Then, on my way over to San Diego, I indulged myself in a "why do I blog" post. You know the sort of thing - I'm participating, how cool is that, it's a social revolution, etc. etc. It was a great post - lyrical, even poetic, with just a soupçon of hubris.
I saved it in Qumana to upload when I got to a connected space... or at least, I thought I did. When I got there, all I found was a file with a link to the one blog post I mentioned, from David Terrar. All my musings, rantings, cleverly counterpointed justifications and criticisms were gone.
I can only assume that Qumana, being the latest in social technology, has some kind of leading-edge, anti-navel-gazing filter (I can't remember if this was my idea or Cote's, but by that time we were just guys in a San Diego bar). In which case it really is an incredible piece of technology. Alternatively, perhaps it was just a bug - in more ways than one. I should probably get around to reporting it.
If it was that clever, perhaps I just did.
Today I am mostly playing with…
2006-04-27
- RSS Reader - neat, but requires .NET framework 1.1
- RSS Publisher - no clue how this works
- Toshiba Bluetooth stack - "fun" getting bluetooth headset working with Sony laptop
- iEx - who needs a Mac...
- Synergy - multiple computers on a single desktop. Neat.
- Second Life - could I/should I be the first virtual analyst?
This flight's been too long
2006-04-27
Is it just me that thinks of "Rock the Casbah" when he sees "Lock the Taskbar"?
Port forwarding
2006-04-27
Doesn't look like I'm going to get onto the San Diego airport network right now...
https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=https://login.airportwins.com/CN3000-boingo/?dlurl=https://cn3000.authdirect.com:8090/goform/HtmlLoginRequest&l=ans_san-005&original_url=
Get the picture?
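For the curious: the captive portal keeps appending its own address as the original_url query parameter on every redirect, so the query string snowballs. A toy illustration of gauging how deep the loop has gone - pure string counting on a shortened stand-in URL, nothing specific to this network:

```python
def redirect_depth(url):
    """Count how many times a captive portal has re-embedded its own
    login page in the original_url query parameter."""
    return url.count("original_url=")

# Shortened stand-in for the monster above (hosts and params elided):
looped = ("https://login.example.com/?dlurl=https://auth.example.com/"
          "&original_url=https://login.example.com/?dlurl=..."
          "&original_url=https://login.example.com/?dlurl=..."
          "&original_url=")
print(redirect_depth(looped))  # each pass through the portal adds one
```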
Marillion on the box
2006-04-28
Not that box, but Pandora - apparently, it's part of the "Music Genome Project." I'd not heard of it; maybe I need to stay in more. Believe it or not, I don't live, breathe and sleep Marillion, even if I did write a book about them. So, when I was prompted by Pandora to put in a band that I liked, Marillion wasn't the only potential option. Honest.
What came next intrigued me enough to make the post, and yes, it is about Marillion as much as it's about Pandora. It came up with the following text: "We're playing the following track because it features electronica influences, a subtle use of vocal harmony, mixed acoustic and electric instrumentation, a vocal-centric aesthetic and extensive vamping." Now, then. Apart from the fact that I'm not absolutely sure what "vamping" is, I bet my navel that this is pretty much why the majority of Marillion fans I know listen to the band. It's also pretty much the polar opposite of what non-fans think Marillion is about these days.
Having played me "You're Gone" (pretty much the polar opposite, etc, etc), it then played me "Can't Explain" by ISM, "Forever and a Day" by the Dissociatives and "Where's the Man" by Scott Weiland. What have they got in common? Something with Marillion, apparently. Also, the fact I'd never heard any of them. Three out of the four share an apostrophe, but I don't think we can count that. They all have similar descriptions, akin to the above. Maybe that's it. There are various things I can do from here - the first is "subscribe", and I think, why not. I can pick one of the songs so far and go down a different route, or I can just let it play - there's the occasional advert apparently; we'll see if it starts to bug.
Finally, and the reason I launched it in the first place, I can link it to my Last.fm account through this mashup by Gabe Kangas. Next up: "Buffalo Swan" by Black Mountain. Cool. Now, either Pandora is staffed by people that are trying to subliminally point the world towards certain bands ("AC/DC? Hmm, yes, try this - lots of vamping") or there really is something in it. We'll see, but for now I'll side with the latter.
Something for the weekend, phperhaps
2006-04-28
Just downloaded a couple of PHP editors - from here and here. Now all I have to do is remember how to program. Expect an ultra-powerful Alphadoku generator/solver by Monday.
That was a joke.
Analysts and alcopops
2006-04-28
It's nearly the weekend. Last week, Duncan Chapple wrote a piece on analysts and wine. I said in the comments that I think he missed a few beverages, and I've added a few more here:
- real ale - a little nutty but better than the ordinary
- lager - bland, but reliable
- cheap whisky - best watered down
- port - you'd think it would improve with age but it doesn't
- good brandy - aging, but well formed and consistent
- vimto - sickly sweet but strangely endearing
- guinness - impenetrable
- caffeine free diet cola - so much taken out you wonder why it's there at all
- alcopops - seems good at first but leads to confusion later
- "new" coke - invention beyond the call of necessity
- vodka - don't notice it's there but makes a big difference
Any more?
May 2006
This section contains posts from May 2006.
QTek 9100 - the perfect handheld?
2006-05-03
A few years ago I was doing some work for Sun Microsystems, something to do with wireless, mobility and all that. One day I was in London, near Westminster, and I had one of those "a-ha!" moments. I distinctly remember popping into McDonald's in the old city hall, opening my pad and drawing what I considered to be "my perfect handheld".
Earlier today I found the piece of paper lurking in some old files. Here it is:
For the past few months I've been road testing the QTek 9100 (linking to where I nicked the picture, thanks). Here it is - looks awfully similar, hein?
So the question is - now I have it, how does it stack up? I'd love to say it has achieved handheld perfection, but it doesn't, quite. There are a number of reasons:
- It is sloooow - only a 200MHz processor for all that voice and data gubbins, and it just doesn't cut it. It is usable, however it isn't what I would call slick. There is a Skype client, but that runs like an absolute dog, which is a shame, as the device could almost pay for itself based on a few months' Skypeout.
- the screen is too small. I need a screen I can comfortably cut and paste sections of documents on, and while the resolution is great, the screen itself is not quite there. Not really an e-books client at this point.
- it is buggy - sometimes the email client crashes for no apparent reason (sorry Microsoft, but I would have thought you'd cracked email by now).
- it's too thick. Nuff said.
Apart from that, it does a pretty good job. It's got some features I didn't specify - note the absence of Wifi in my picture, for example, which dates it somewhat! It achieves my personal goal of a single device which could (potentially) be used for everything I do; I'm a bit of a laptop junkie these days so I'm not sure that'll ever be necessary, but anyway.
No, it's not perfect, but it is getting pretty close.
How to make a John Collins
2006-05-04
Taken directly from Sam Malone's Black Book, 6th Edition (bought in Cheers Bar in Boston):
Fill Glass with ice
2 oz Whiskey
Fill with sour mix
Shake
Dash of Soda Water
Garnish with Cherry and Orange
For the sour mix:
In blender:
1 Egg White
1 cup of Water
1 cup of Lemon Juice
3 tbsp Sugar
Blend until sugar is liquefied.
My take on blogging
2006-05-08
I don't believe blogs are all they're cracked up to be. Neither do I believe that they are not.
Powered by Qumana
Detheming and reframing
2006-05-08
It's time for my next step on the journey that we call blogging. I blog to experiment with the medium (I've never been into experimenting with the small and large), and I'm going to try out a couple of new things around the principle of themes.
The first is to de-theme this blog even further - it will be about everything and nothing, a stream of consciousness as pure as my impure consciousness can make it. If it comes across as a self-indulgent mess, so be it - there are things I can learn from that (and you don't have to read it :-) ).
The second is to extend my sensible blogging into more thematically coherent places. These are (or will be):
- Enterprise Information Technology - I've already started to blog (though I could do more) over at the Macehiter Ward-Dutton web site.
- Consumer Technology - plans are afoot - watch this space
- Music and media - plans are, well, planned!
Realistically, this is all an experiment - I have a number of theories that I'm testing out, notably comparing multi-user and single-user blogs, and curiosities around themes. There's also a navel-gazing perspective around linking one's blogging closer to one's personality. In my case I've never been consistent about anything for long, and blogs are by their nature fixed points of reference, so I'm interested to see how that can work. Anyway, more news soon!
Powered by Qumana
Symptoms of Oblivion
2006-05-10
A blonde-haired girl sits on the London tube. In her left hand, she holds a latex-covered iPod. In her right, a grande Starbucks. Next to her, a middle-aged Japanese lady is consulting an opened, silver device. It looks more advanced than the norm; perhaps a simpler version may one day become available in Western shops.
In the toilets in the Hilton hotel at Paddington station, a man is urinating. He is holding his Blackberry with two hands, his thumbs tapping out a message.
They are all oblivious.
The first life is currently preferable
2006-05-12
So, I've had a go at Second Life. Two goes, in fact - so far I have been picked up and thrown into the sea, told I have a big bulge and offered sex. Now, while I of course felt flattered (I believe I looked rather dashing in my tie-dyed shirt and shorts, set off by the blue hair), I happen to believe it was a bit forward for someone who has literally spent no more than half an hour in the "game". I'm still not quite sure what Second Life is, but family entertainment it is not!
Update: Here's a section from the beginner's guide: "Please take some time reading the Terms of Service ("ToS" for short). Unlike some sites or programs where you can safely press Enter and forget about it, here the residents live by the ToS and it is actively enforced by them - you can report abuse by someone or something which violates ToS, and this can lead to suspension or even expulsion from the game - or even a suit against you. We live in anarchy, where everybody can do what he/she wants, EXCEPT violating ToS. The first thing to notice is if you are in PG or Mature land. PG is much more restrictive - no violence, no sex, no offensive language, no running around naked or with "revealing" clothes (or even changing clothes!). If you think this is too restrictive, stick to mature regions and events."
... which explains things some more. Hey, I'm a stranger in a strange land here - Second Life is being constructed from first principles, to the extent that the entire world is streamed as it is built, nothing is fixed. All the same, if I do get around to constructing a residence I think I'll do it in the PG world for now.
Hols
2006-05-25
Birmingham airport, Terminal 1. The gate for Heraklion has yet to be announced, I have bought the spare batteries and stocked up on Sudoku. I reach for "Cooking with Fernet Branca," which starts at an airport and whisks the reader quickly away from the masses and up into the mountains.
All is well. See you in a week.
June 2006
This section contains posts from June 2006.
Sympathy is what we need?
2006-06-05
As I come back from a most wonderful week in Crete I find the world is much as I left it - Rush fans are still avidly awaiting the next output while complaining that it's a rip-off, Software as a Service is still going to raze all previous models to the ground and someone's taken the pile of gravel on Freecycle. There are no doubt still dubious goings-on on Second Life, and the earth is still warming up. There was a very interesting Panorama programme on the latter last night, on the Beeb, and one remark caught my ears - that US government officials had been "actively seeking influential people with a sympathetic view" (or words to that effect) to the government's perspective.
This idea - or at least, its statement - left me a bit cold to say the least. Influencers have opinions of course, and are individuals that will have sympathy to one perspective or another. A cursory glance over the output of some of my IT analyst colleagues, competitors and other influencers shows that some write about one company more than the rest, some favour small innovators over the big guys, some think one model of software delivery (SaaS, open source, etc) will "win" or at least is more interesting than the traditional models. The oft-asked question, "Do you follow market X?" could be interpreted as meaning "Are you sympathetic to market X?", since if not, why bother - so we end up with the potential for bias, just by standing still.
Overall, perhaps, any bias irons itself out if one is broad enough in seeking opinions, but let's face it, it's not in the vendors' interests to "go broad". With the influx of blogging as well, opinions are often seen through the filter of bloggers, who will tend to favour - err - blogging and the Web 2.0 gamut over past and parallel technologies. Not necessarily wrong - just - incomplete. No doubt I'm as guilty as the rest about this, but it's definitely one to watch for, and to keep in check. Meanwhile, the great thing about MWD and its IT-business alignment mantra is that it sets a framework for allocating and measuring sympathy - according to business value. As I come back from hols I set the following goal for myself: I can keep my favourites, but when I talk about them I shall endeavour to do so from the perspective of business value.
Even if voice recognition is the killer app for handheld devices, whatever the naysayers may think ( :-) you heard it here first), it'll have to be useful and usable to end-users first.
July 2006
This section contains posts from July 2006.
Thought for the day
2006-07-07
Bombs don't discriminate.
Isn't it scary...
2006-07-29
... when your own kids can do things you can't do? Here's one of Ben's animations. I taught him all he knows, honest...
August 2006
This section contains posts from August 2006.
A Tale of Two Horses
2006-08-04
I now have the latest beta of Internet Explorer installed, and it really is quite good. I won't be de-installing it, as the performance issues in the previous beta seem to be resolved. I have, however, had to install Firefox in parallel, for a number of reasons: not least that certain Web sites (Blogger?) still don't work properly, but also that some sites don't list IE7 as a supported browser.
It's interesting - following the browser wars of the past, if I squinted a bit I could have the impression that Microsoft is going out of its way to open a hole for Firefox. I know, that would be impossible... but in this world of Betas, perhaps having a stable alternative is the key to the future.
Alternatively, perhaps Microsoft could just fix the bugs before release, beta or otherwise.
September 2006
This section contains posts from September 2006.
Looks like I missed a good debate
2006-09-21
Well, the gang - Neil, Dale, James and others were at Hursley earlier this week and at lunchtime the conversation turned to an interesting discussion about kids, computers, antisocial futures and all that. I've not much to add apart from the fact I wish I'd been there (not just for this discussion!), but two things spring to mind:
- the best advice I've been given is to keep the kids' computer access in a place where they won't be shut away (i.e. not the bedroom). Not only does this let you keep a parentally watchful eye, but it also means they will still be partially involved in what is going on.
- seems to me that there are three behaviours - passive/passive (TV), passive/active (traditional videogames) and active/active (social software). While these categories are broad generalisations - there is nothing more active than watching kids fight over whose turn it is next - the general tendency seems to be towards more activity and interaction, rather than less, which (again, in general terms) I see as a good thing.
Meanwhile I had a very interesting conversation with a CIO earlier this week, where he was talking about kids coming from college with all their computer-literate-social-networking skills, and the issues these people faced in the sometimes uncommunicative environments of the workplace. It will be interesting to see what happens in a few years time, when the trickle becomes a flood.
Powered by Zoundry
Does anyone recognise themselves in this?
2006-09-21
Today's Dilbert. Deeply unsettling...
Eucon say that again
2006-09-21
How very remiss of me. Two weeks ago I attended Eucon, the Rush convention at the Limelight (of course) club in Crewe, UK. It was a convention in the traditional sense - seasoned fans taking the opportunity to share their experiences and share the experience. As well as shifting a few books, it was a distinct pleasure to meet so many friendly people from the UK and elsewhere around the world. It was also a delight to finally meet Donna Halper, after so many phone calls and emails! Thanks very much everyone for making me feel so welcome, to Paul for keeping me company, and of course to Ashley for inviting me!
Determinedly blogging
2006-09-21
It's been a bit quiet, hasn't it? I've been testing a few blogging tools, notably Zoundry and (this post powered by) SharpMT. Let's see if it works, in more ways than one.
This morning I will mostly be running…
2006-09-23
... the rather neat mashup between Pandora and Last.fm, at Real-ity's PandoraFM. Recommended music I haven't necessarily heard before, and it logs it all so I don't have to.
Shmokin' ;)
Claude Nouveau for all your wine needs
2006-09-24
A few years ago we were travelling through France and doing a bit of wine buying in the Burgundy region (as a Frenchman once said to me, it's harder to get a bad Bourgogne than a bad Bordeaux...). We stumbled across Claude Nouveau, propriétaire-récoltant in Marchezeuil who, from his small establishment, helped us to choose some lovely wines to take home. We are still drinking them, and once a year I get a price list from M. Nouveau, informing me of the latest from his vines.
Now others might see this as junk mail, but every year it raises a smile, as he personally sends out a letter to me and no doubt many others. I'd love to buy from him again, and if ever I am passing through the region again I will no doubt do so. Highly recommended: if ever you are in need of a few Santenay/Maranges at reasonable prices, you know where to go.
Update: you can also go here, from the comfort of your own armchair.
Update 2: I'm clearly not the only person to think highly of M. Nouveau's outputs!
On switching content with networks, and great sleep cures
2006-09-25
"What are you thinking?" Liz said to me last night, just as we were settling down for bed.
So I talked to her about how, when I started playing World of Warcraft, I thought it was a game with a social element, and now that I'd completed the levelling phase, I saw it more as a social network with a gaming element. I mentioned watching the Tottenham-Arsenal match on TV last Saturday and said how it occurred to me that in the old days, it was mainly about the football with a celebrity element, and now it was more about watching the celebrities, with football as one part of that. I commented that maybe there were some parallels, as the content - gaming or football - was being superseded by the contextual network that ran around both, and perhaps it was only a matter of time before sport and online gaming, watching and networking also merged. I started to say something else...
... then I realised Liz was sound asleep.
FrieNDA
2006-09-26
A term has popped up in the valley that I quite liked - the first reference I can find is here: Web X.0: FrieNDA.
Thanks for that - of course, people have been practising it for years but it's nice to have a word for it.
Porcupine Tree at the Astoria 29/09/06
2006-09-30
Not a remarkable gig from the point of view of performances - as fine as ever, but half the set was new material and I'm not the best person to 'get' live music from a cold start... most importantly, the best audience reaction I've seen at a PT gig, and only a small proportion of old faces in the crowd. This wasn't preaching to the converted - which bodes well for the new album next year.
BTW new material verdict: not bad - some strong songs, a couple of growers.
Today I shall mostly be...
2006-09-30
Juicing pears. Mmm.
October 2006
This section contains posts from October 2006.
Cotswolds to Capetown
2006-10-01
Just back from my local, to see Nick and Nick as they set off to Cape Town on motorbikes. Emotions were kept in check until the helmets went on, at which point it was best they got on their way. Nick Graham is raising money for an anorexia charity, and Nick Clarke for Leukaemia research. More soon but for now here's a link to their site, you can follow where they get to on their blog (they have a Blackberry).
IT people - have you subscribed yet?
2006-10-03
When I first joined Neil Macehiter and Neil Ward-Dutton at MWD, I had an immense feeling that it was a good move, but I didn't for the life of me know why - sometimes, you just have to go with your gut. Eight months later and I know it was the right thing to do, because we share a common goal - to help people do Information Technology better, breaking with the habits and mindsets of the past. At MWD we are building a corpus of information about how this can be done, and we're making it available via Creative Commons to anyone who feels it could be useful to them. Currently we have reports on areas such as Identity Management, Service Oriented Architecture and IT Service Management, as well as vendor capability assessments and general briefs on vendor offerings, and we're adding to the pool all the time. If you feel these reports would be useful to you, or if you would like to sign up to our monthly newsletter "Signposts", please do register your details. There is no obligation whatsoever, and please do pass this information on to anyone you feel might benefit.
H at Riffs Bar 01/10/2006
2006-10-03
Having missed all of the h-natural gigs for reasons of just being too darn busy, I was delighted to be able to catch Mr Hogarth at Riffs on Sunday Night. It was one man and his Yamaha keyboard, playing a mix of Elvis and The Beatles, Tim Buckley and a few tracks from Marillion. Actually, it wasn't just him - Swindon's own guitar hero and "chum" Dave Gregory joined h on stage for a couple of songs, a sublime Sound of the Siren and XTC's The Loving.
Which was nice. Actually there was more to it - the Buckley-esque Gabby Young put on an admirable support performance. It was all in a good cause as well, being part of the Oxjam music event. Shmokin'.
Just back from
2006-10-03
Actually it was Barcelona - but all I saw of it were some street signs in Spanish (and probably Catalan), on my way from airport to hotel, then from hotel to airport the next morning. That was my choice as things are a bit busy at the moment, but a shame nonetheless as it's a great city. Bizarrely, I had to throw away my disposable razor on the way out, only to buy another exactly the same once I was through security. Explain that one.
Stylus styloss
2006-10-04
A new stylus arrived in the post this morning, following me dropping the old one in the dark of the Astoria last Friday night. I felt weirdly bereft without a stylus, a bit like going out without a wallet or a mobile, I would reach for it from time to time and feel distinctly uncomfortable when I found it (still) wasn't there. So, I am now complete once again. Scary.
Incidentally, I traded in my i-mate JAM for a JasJar, or whatever this one is called - the Orange SPV M5000. It wouldn't suit everyone - it's very big, for a start - but it has a hi-res screen, full keyboard and, delightfully and unexpectedly, it supports 3G/UMTS. So I've also been able to cancel my WiFi contract with BT Openzone. It's got a faster processor than the i-mate; I'd say this is the minimum spec necessary if you want to use it as a laptop replacement. Oh, and it can also play DivX video in full screen! Now I can record from the TV on my Archos AV340 and copy it onto here (it can also do blog posts) for viewing on the move. Nice when a plan comes together!
Maybe this is why I'm so messed up
2006-10-04
Just looking at the site stats for last month. They say search strings are revealing, so here are the strings that had people land on my site:
| Rank | Hits | Share | Search string |
|---|---|---|---|
| 1 | 31 | 48.44% | qtek 9100 |
| 2 | 10 | 15.62% | qtek |
| 3 | 2 | 3.12% | alphadoku solution |
| 4 | 2 | 3.12% | dentist equipment |
| 5 | 2 | 3.12% | four colours problem |
| 6 | 2 | 3.12% | jon collins |
| 7 | 2 | 3.12% | q-tek |
| 8 | 1 | 1.56% | agile web services |
| 9 | 1 | 1.56% | bluetooth interferes with wireless |
| 10 | 1 | 1.56% | charlene zivojinovich |
| 11 | 1 | 1.56% | colin woore |
| 12 | 1 | 1.56% | courses for myspace |
| 13 | 1 | 1.56% | courses myspace |
| 14 | 1 | 1.56% | funny lessons from mishaps |
| 15 | 1 | 1.56% | goat sheep on the farm |
| 16 | 1 | 1.56% | invisible earbuds |
| 17 | 1 | 1.56% | marillion |
| 18 | 1 | 1.56% | porcupine tree |
| 19 | 1 | 1.56% | punk eek |
| 20 | 1 | 1.56% | sudoku creating puzzle |
OK Qtek I get, Marillion is fine and I'm pleased to see punk eek and alphadoku. But funny lessons from mishaps? Goat sheep on the farm? No idea, sorry - perhaps a deep insight into my psyche.
No sheep jokes, please.
Edit: I take it all back - those phrases do indeed link to the site. Well I never.
Desert Island Discs
2006-10-06
It's a Friday, so here are 10 songs...
Everybody Hurts - REM
Mr Blue Sky - Electric Light Orchestra
Lilac Wine - Jeff Buckley (cover)
Clarinet Concerto - Mozart
The Great Gig in the Sky - Pink Floyd
Ocean Cloud - Marillion
Sunday Bloody Sunday - U2
Arriving Somewhere But Not Here - Porcupine Tree
The Psychic - Crash Test Dummies
Song to the Siren - This Mortal Coil (cover)
Dirty little analyst secrets
2006-10-10
#86: Analysts read their competitors’ reports
Actually I make no bones about this - I’m of the opinion that there is nothing new under the sun, and that we can all stand on each other’s shoulders and get that little bit closer to it (or the Oracle, whichever is our preference). Of course blogging leads to a kind of collaborative analysis, but some firms do still find it difficult to share. Equally, I’ll quite happily promote the work of other firms - today I received the latest, typically excellent newsletter from John Katsaros and Peter Christy, and I was prompted to comment on it. I heartily recommend it to anyone that’s interested in the goings-on in the Valley (that’s Silicon, not Thames). You can see the kinds of things it contains, and sign up here.
On the subject of newsletters, I embarrass myself every Friday morning as my delight at receiving Silicon.com’s Roundup is once again eclipsed by the fact that I will have totally forgotten it would be coming. Like a goldfish swimming round a bowl, I am… but anyway, it has often been the perfect antidote to a hard week.
Too many cool things
2006-10-11
I was about to comment on Guy Kawasaki's Six Cool Things post, when I noticed the title of the post was "five cool things"... I have no idea which was the sixth but I'd bet on the shirt!
Indeed, I was going to propose a seventh - Salling Clicker. Turns your mobile phone (or in my case, Pocket PC) into a Bluetooth presentation device. The Pocket PC version even lets you see what's the next slide. And it works with iTunes/Media Player. And it can work over wi-fi. One of the coolest things I've installed, then bought, this year.
I haven't tested the Bluetooth range, but it hasn't let me down yet :)
Blogger goes multilingual
2006-10-12
Great verification word from blogger this morning - ?rdpam. Hang on, I'll just go get my germanic keyboard...
Performancing fixes
2006-10-13
Been testing some changes to Performancing, a rather handy blog editor that runs in Firefox (I'm also still running Zoundry; it's better for backups etc). According to this post on the Performancing forum, it looks like a number of ISPs are blocking access to the xml-rpc file. There is a workaround, but both Performancing and Zoundry require you to recreate the blog accounts. So be it...
P.S. Zoundry also seems to upload the graphics better...
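For context on why a blocked xml-rpc file breaks every editor at once: that file is the endpoint for calls like metaWeblog.newPost, which is (roughly) all these desktop editors do under the hood. Here's a minimal sketch using Python's standard library - the endpoint URL, blog id and credentials are all made-up placeholders, not real values:

```python
# Sketch of the XML-RPC call a blog editor like Performancing or
# Zoundry makes when publishing a post via the metaWeblog API.
import xmlrpc.client

ENDPOINT = "https://example.com/xmlrpc.php"  # hypothetical xml-rpc file

def build_post(title, body):
    # metaWeblog.newPost takes the post as a struct; 'title' and
    # 'description' are the two fields every server understands.
    return {"title": title, "description": body}

def publish(blog_id, user, password, title, body):
    server = xmlrpc.client.ServerProxy(ENDPOINT)
    # Signature: metaWeblog.newPost(blogid, username, password,
    #                               struct, publish) -> new post id
    return server.metaWeblog.newPost(
        blog_id, user, password, build_post(title, body), True)
```

If the ISP blocks that one URL, every tool speaking the protocol loses the lot - posting, editing and media uploads - which is why the workaround means recreating the accounts against a different path.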
Intrepid bikers on Fanta
2006-10-13
Nick and Nick have reached Africa at last - that's 12 days to cross Europe and drop down through the Middle East. Hurrah! Spare a thought for the poor lads - they've been without beer for three days...
Cotswolds2Capetown: Day 12 - Thursday 12/10/2006
History of Guns in WGC tonight
2006-10-13
Goodman Max Rael is a many-faceted character, and a good mate - or at least he would be I'm sure if I saw him more than once a year. Anyway his band The History of Guns is playing Welwyn Garden City tonight.
Think goth-punk with lots of shouting. Nische.
Any SD Card computer Geeks Out There?
2006-10-16
As I was spending a bit of time tuning my Windows XP settings, I noticed I could set the paging file to be on any drive, even ones that were only temporary - i.e. hanging off a USB port. Suddenly I had what might be a brainwave - but might not. It occurred to me that I might be able to use the SD card slot as a solid state paging volume. It wouldn't be as fast as RAM, but surely it would be faster than paging onto the hard drive? Laptop memory is a darn sight more expensive than expansion cards.
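For anyone who wants to poke at this directly rather than through the System control panel: XP keeps the paging file list in a single registry value, with one "path minimum maximum" entry (sizes in MB) per volume. A sketch only - the drive letter and sizes here are made up, and I gather XP generally refuses drives it considers removable, so the whole idea may hinge on the card reader presenting the SD card as a fixed disk:

```
; HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\
;   Session Manager\Memory Management
; "PagingFiles" (REG_MULTI_SZ), one entry per paging volume:
C:\pagefile.sys 1024 1024
E:\pagefile.sys 512 512     ; E: assumed to be the SD card slot
```

A reboot is needed before changes to this value take effect.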
The demise of blogging
2006-10-16
It seems ironic that one should write a blog about blogging being on the wane, but it is. History will date the start of the blog-downturn to when Robert Scoble left Microsoft, as (perhaps) he found it impossible to square his corporate and blogging existences. He's now a PodTech blogger, and spends as much time promoting his multimedia show as he does talking tech. Which is fine. Similar examples can be found here and there - a journalist leaves a publication because he says something out of line, implying that those who remain will be expected not to; links to blogs indicate that the blog is no longer kept up to date, as the blogger now works for XYZ-insights.com, the Web-based publication. The number of blogs may still be on the increase, but this is more a factor of the simplicity of the mechanism than the betterment of the blogosphere as a whole. Up at the top, the club of elite bloggers in each sphere is now saturated, or if not it shortly will be. Bloggers of all persuasions are talking about cutting down the numbers on their blogrolls, implying that the number of relationships they can maintain is arriving at a sustainable level. From that point on the growth is organic, not exponential.
Which is fine. I have never had any problem with blogging, as a mechanism - in the same way that I don't have a problem with knives and forks, lathes and printing presses. The problem was never the pamphlet, nor the pamphleteer; it was the idea that pamphleteering was in some way going to replace what had gone before and lead us to some greater thing, previously undiscovered. The greatest of greater things was, inevitably, the collective consciousness - this, too, is on the wane, as neither Wikipedia nor any loose-knit community of like-minded individuals has proved that they will ever be able to agree on anything. The most hardened bloggers take offence at each other's interpretations of their own remarks, pointing more to the downsides of collective anarchy than its upsides. If blogging were an island tale, William Golding would be writing the script. Or perhaps George Orwell - for as we all know, some bloggers are more equal than others - by humbly suggesting they are not as bright as their readers, they are nonetheless illustrating the divide.
Which is fine. The point is not to say it is bad, or it is good: it is neither, and it is no different from what went before or what will come after. Humans interact following ancient models and customs that have been documented to the n'th degree by anthropologists, sociologists and various other types; no doubt they are at it right now, drawing up the table of personality types and comparing them to (or should we say, "mashing them up with") the blogiarchies. No doubt they did it too with open source, and I am sure they will do the same with whatever comes next - for in this very human existence we have, there will always be a "next". Put it this way - teenagers are not blogging, or at least they don't see it as such, more as an extension of Myspace, Bebo and MSN. Nor will they ever - something else will build upon the foundations they are already creating, and they will have their own version of the road to Nirvana. It's as old as the tower of Babel, though admittedly a lot more sexy.
So, blogging is doomed then? Of course not; the mechanism will achieve its rightful place on the workshop wall, next to all the other tools that have failed to be rendered obsolete. Cries that the media was in some way doomed became laughable as soon as the more savvy media chose to add blogging tools to their own sites; similar debate still rages over other outspoken types such as columnists and IT analysts, but it won't last much longer. (Of course there were gaps opened up, new success stories, and there were casualties made of older hacks that were unprepared to move with the times.) The best way to knock the wheels off a new bandwagon is to integrate it with everything else, and that's as true for masses-hype as it is for corporate-hype. Blogging in its current form cannot survive - not because it is in any way inadequate in itself but rather because, as a single mechanism, it is constraining. One blog cannot cope with multiple personalities with multiple things to say - and much as I enjoy reading about the day to day activities of my friends and family, I confess to not giving a monkey's grunt about Irving Wladawsky-Berger's baseball habit or Bob Sutor's porch. The mechanism as defined is not capable of filtering out what I would consider noise, just as I cannot share information discriminately with my family and friends, my musical friends, my gaming friends, my work colleagues and my customers and prospects: instead, I use multiple blogs, channels and columns, and I am delighted to be a part of all of them. No doubt I will one day be able to manage everything from a single point; perhaps the resulting mechanism will be called something to do with blogging (though hopefully with a nicer name), but there's just as good a chance it will be called something new.
At which point, of course, we can start all over again, convincing the cynics (as I was, about blogging) that there was more to it than they thought, then riding the wave for a while before realising (as I am, now) it is starting to peter out and scanning the horizon for the next one. Enjoy the ride, but let's not get too hung up about the shape of the board.
Master of None at the Sundial Theatre Cirencester, 14 October 2006
2006-10-17
Three acts - a singer/guitarist called Richard Cox, a guitar and fiddle combo called Take Two, and the main act, Stroud-based Master of None. An evening of gentle music, punctuated by amusing chat and the retuning of various instruments. Multi-instrumentalists Howard Sinclair and Alex West deserve to be shot (not really) due to their ability to switch from flute to sax, guitar to piano at the drop of a felt cap; Colin Sillence brings the experience, wonderfully virtuoso fingerpicking and dodgy lyrics. An enjoyable evening out - my only criticism might be that there could have been a bit more oomph in the main set, as folk doesn't always have to be quite so laid back; perhaps that was as much down to the regimented seating as anything.
FUDCO - for all your corn needs
2006-10-17
New brand available from Tesco's supermarket in the UK. How soon before they start selling software under this brand? We can only hope!
Anyone going to Storage Expo?
2006-10-18
Storage Expo - The UK's only dedicated data storage event
For the next couple of days I shall be at Olympia, London for Storage Expo. Anyone who thinks storage is boring need only to... no forget that. In any case, do stop by. Tomorrow I shall be chairing a session on future proofing storage and I'll be on a 'Dragons Den' panel where we get to grill the vendors. Now, that should be a lot of fun.
See you there perhaps.
The Matrix Revisited
2006-10-18
The Matrix sequels have to go down as one of the greatest wasted opportunities in cinematographic history, but time is the great healer. Having got over the fact (it took me a while) that there is no philosophy worth discussing, I watched Matrix Reloaded over the weekend. It actually stacks up pretty well as an action film - more Terminator than Blade Runner perhaps, but the motorway sequence is incredible. Maybe it's time to take another look at Revolutions.
Mick Wall has a blog
2006-10-18
The music journalists' own classic rock god Mick Wall has his own blog, and in these pseudo-declarative, self-obsessed diarising days of "what can I disclose about my goings-on", Mick offers a refreshingly uncompromised version of what life's like at the front line of rock. Tax. Meetings. Francis Rossi. Crapping. A highly readable account; if mine were a "proper" blog it would be modelled on Mick's. Though it wouldn't be quite as interesting.
Thanks to Simon for the tip.
CIO P2P
2006-10-18
I picked this up from Vinnie Mirchandani's blog:
Charles Zedlewski (of SAP) sent me this quote after reading my "elephant hunter" post
"In the 1970's when CIO's wanted to know what to buy, they asked IBM.
In the 1980's when CIO's wanted to know what to buy, they asked Andersen Consulting.
In the 1990's when CIO's wanted to know what to buy, they asked Gartner.
In the 2000's when CIO's want to know what to buy, they ask each other."
I thought this was simple but very profound - it implies - and this is something that I am seeing - that CIOs are starting to take control of their own destinies. Unsure what to add without sounding patronising, but I'm all for it.
Second Sun
2006-10-20
Helzerman's Odd Bits - Sun holds press conference in Second Life - what to wear?
Only a week or so late, I just popped along to the Sun Pavilion in Second Life. There wasn't much going on, strangely, so I went and had a look at the pictures. One may scoff at the idea of replacing good old 2D interfaces with an immersive environment, but it's got its upsides - one can look at a number of things at once, there's less pressure, it's more intuitive and so on. It ain't going to work for everything, but I can see places where it would.
It's also probably the only place in the world I'll get my hands on a Sun Container (and it may be the first place it ships - I wonder if they've thought of that - directly hooked into their Grid facility?)
Didn't get offered cybersex this time, either.
The best conference giveaway ever
2006-10-20
Just back from Storage Expo, which was a surprising amount of fun - largely because it was great to catch up with a number of people I'd happily call friends if I ever saw them more than once a year. The sessions were good as well, particularly the Dragons' Den thing. Those vendors really can be good sports!
But anyway, I just have to comment on the Gyrotwister I picked up at the PCICASE stand. It's a gyroscopic spinner that you have to try to keep going by the power of your wrist. Forget the hats, t-shirts and pens, this is the business! Though it could get you some funny looks if you tried it out in public... it comes with a CD-ROM, which can measure the RPM by listening through the computer microphone. Just how cool is that? I thought the USB hub and cup warmer I picked up at a Dell gig was the ultimate, but this thing wins hands down. You can even get one with lights...
Comment apologies
2006-10-21
I've been asked why comments aren't working on this site; I've tried to fix things, but the answer still eludes me. Something to do with .htaccess/xmlrpc/other PHP files and the fact that the blog is in a "wordpress" subdirectory. Normal service will be resumed... but in the meantime, do please email me and I'll post comments that way. Anything(at)joncollins.net should reach me.
Keane at the Wolverhampton Civic, 22/10/06
2006-10-23
I was full of trepidation about this event, not just because Tom might have rushed back to rehab at any moment, but also because we didn't have any tickets - I was assured they would be available for pick-up at the box office, but the only proof I would have would come after the hour-and-a-half drive. As it turned out both Tom and tickets were there, and what seats - balcony right, just above the stage (Tip: never believe them when they say lines open at 9am; call 15 minutes earlier).
Given that Keane are considered about as middle of the road as a squashed hedgehog, to put it bluntly, they kicked ass. Tim played keyboards like Peter Crouch plays football - a gangling mass of energy, playing without letting up but always in control - I wouldn't want to be his keyboard tech. Trim-looking front-man Tom seemed genuinely surprised at the level of support he received from the well-lit West Midlands crowd, and responded accordingly. Drummer Richard Hughes was understated but solid and capable, as I think he probably is in real life. Not much to say about the set list as the band aren't yet over-endowed with material, yet each song was bashed out with energy and abandon.
Say what you like about Keane, but there's a band that are doing it for love and they are getting it paid back in spades. Long may they continue.
Ben Folds in Second Life
2006-10-23
Interesting article here on Ben Folds and his Second Life gig. More interesting perhaps for the comment trail - I don't know much about the music, but it's a good indication of what works and what doesn't.
Why I like Zoundry
2006-10-26
... for my blog editing:
- because it doesn't need me to install any frameworks
- because it doesn't crash
- because it auto-creates thumbnails
- because it is usable and straightforward
- because it doesn't clog up the editor with click-through advertising capabilities
I'll add more if I think of anything else! I also still use Performancing a little.
Powered by Zoundry
Torchwood and Cardiff
2006-10-26
Just watched the first episode of the Doctor Who spin-off, which is not only set in Cardiff, it proudly advertises its Welshness. Very good, very enjoyable. Mention of "the rift" takes me back to when I saw BT announcing that it would be launching its 21st Century Network trial - in Cardiff. Was I the only person to think - "no, they can't do that - it'll be right on top of the rift in the space-time continuum!"
And then I remembered that one of them was just an imaginary story about a fantastic collection of technologies set some time in the future, and the other ... was a TV programme.
Sorry, couldn't resist it :)
Comments are go...
2006-10-30
I've done a clean install of Wordpress 2.0.5, and it looks like I have comments back. So, if you were one of the three people who tried to comment on a post in the past 6 months, you should be OK now (I know, I know, it helps to post). I won't install anything clever unless I'm sure it doesn't break anything. Honest.
Reviewed your spam settings recently?
2006-10-30
This morning I finally got round to visiting my email hosting providers and checking what my email forwarding settings were. I've been getting a lot of spam through certain addresses, and one of my sites seems to be being used as a spam sender address, which is a pain - not least because I'm getting all the bouncebacks.
So, I've just been online. Lo and behold, one of my providers now has an enhanced spam filtering service, and another has an "auto-delete" for the bouncebacks, so while I can't do much about them for everyone else, I can at least reduce the pain for me. I had no idea about these services; perhaps they'd been mentioned in a bulletin at some point, but I quite possibly deleted it - as spam, no doubt.
So - it might be worth visiting your hosting provider and seeing if they have enhanced their services while you weren't looking! Of course if you're the kind of person to do this on a regular basis, then I take my hat off to you.
Powered by Zoundry
Online backup solutions
2006-10-31
The demise of a friend's hard drive prompted me to take a quick look at the online backup market, hence the del.icio.us links. While the shortlist is down to three companies, I'm testing AOL's XDrive service first, as it offers 5GB of storage for free. The software still has to prove itself, but it's difficult to beat at that price.
November 2006
This section contains posts from November 2006.
Layering the blogosphere
2006-11-02
I'm understandably pretty interested in how IT analysts, bloggers and so on interact and overlap - the nascent "influencer community". A conversation with James earlier today resulted in four levels of documentary process, from the bottom:
- blogging - writing about the facts of the matter, often in specific interest areas
- aggregation - reading lots of blogs and other sources and summarising what's being written about
- analysis - consolidation and interpretation to deliver some understanding of what it means and what could/should happen as a result
- influence - the ability to project the information or interpretation to an appropriate audience.
The point is, it's about roles, not people. I'm an analyst, always have been in many ways (even when I was an IT manager), and I will always be looking to understand how things fit together. To be frank I don't read that many blogs (tens rather than hundreds), and I am thankful to the aggregators who have the inclination and skill to read multiple sources and summarise them. Often aggregators will analyse, and I know analysts who aggregate - the more the merrier - and all of the above rely on some kind of influence: if a tree falls in a forest and I write about it but nobody reads my blog, did it make a sound? So - am I a blogger? Could do better. An aggregator? Sadly not. An analyst? I do hope so!
First word about the Microsoft-Novell partnership
2006-11-02
And the word is -
More to follow, over at MWD.
The music industry is f*cked - Peter Jenner
2006-11-03
Music industry veteran and one-time Mike Oldfield link man Pete Jenner waxes lyrical on the state of the music industry. Everything I thought was true, but wasn't sure enough to say. Or maybe I did.
Jo McCafferty, Cheltenham, 04/11/2006
2006-11-06
Bit of a fix this one, as Jo was playing at Em and Jase's wedding (and what a wedding!) - but it was lovely to see her play again, one of those people I wish could reach a wider audience. Restrained and soulful, yet energetic and funny.
Jo is a singer-songwriter; she's played quite regular support for Midge Ure, among others. For more information, click here.
Nice graph
2006-11-07
I'm not absolutely sure how web site hits relate to blog stats, but what I do know is that when I stop posting they go down, and when I post they go up. I'm just capturing a copy of this month before the this-time-last-year stats vanish.
Something to Declare
2006-11-09
Aw, shame. The fuss on declarative living has died down somewhat… and I had this draft on Wordpress I was saving until I could get down to writing a really good article about it. Might as well post these thoughts now, perhaps something will come of it some time.
Consider it a declaration of intent ;)
Thoughts for a future post…
- Declarative living is about much more than uploading preferences
- Depends on roles - what persona is being declared - cf duping Google, duping Audioscrobbler
- Non-declaration is also a tool - publication of reader figures, punching more than one’s weight
- Pre-requisite to engagement is to declare interface
- Declaration, like service, goes all the way down
- Passive declaration by blogging, active declaration needs to be more than it is
- In future, declaration of capabilities to enable service interface
- Agreed that (anonymous) declarative mechanism could replace surveys, charts (Audioscrobbler)
- Dig out blogging is narcissism post
Life 2.0 and Pixie Boots
2006-11-09
This time last year, I remember talking to some colleagues about blogging. Pretentious twaddle, was the feedback. Now, they're all doing it.
Last week, I was talking to some other analysts about Second Life. "Have you got avatars," I asked; "Don't be silly," was the response. I've had a similar experience with Myspace.
So, the question is, will they have Second Life avatars next year? My guess is yes. And we'll have gone through the "Of course, it was all crazy then, but it's useful now" debate.
Reminds me of when I went to University, and got my first pair of pixie boots - everyone in Leeds was wearing them. When I went home, Paddy (who had stayed down a year) laughed his head off, as did many other old mates. A year later, Paddy came back from his University - in pixie boots.
"But..." I spluttered.
"Yes, but they're fashionable now," he said.
What a difference a year can make.
Caveat: Robin's already there.
Hotel wireless: nice when it works
2006-11-13
There's been quite a lot written recently about the failures of hotel wireless in London, but things seem to be going my way this evening. After a little mix-up in travel at a Radisson hotel (no fault of the hotel), I booked a last-minute "top secret" hotel round the corner, which to my surprise (it's a lot cheaper) was another Radisson. This hotel chain has free wireless in both hotels; it may extend to others, but it certainly works in the Covent Garden area.
So, having made my online booking, I was then able to email my booking code to the front desk as the back-office systems weren't integrated enough to deal with such immediacy. To me, that was fine - as I had free wireless, I'll send them any email they need.
Nice when a plan comes together.
What a palaver! On rubber balls, customer service and spam
2006-11-15
I can see it all so clearly. Over the past decade, hosting companies and other internet service providers have been building their businesses and implementing appropriate customer service mechanisms. In general this has followed a 3-tier approach:
- web based self-service - for the standard stuff
- email - for the non-standard stuff (or things they don't want you to do so much, like leaving)
- phone - for the more complicated stuff
Phone support can be slow and laborious, in some ways deliberately, causing the punter to opt for one of the other two mechanisms. Bottom line: it's not perfect, but it works.
Or worked. Over the past few days I've been trying to communicate with Verio to transfer a domain. Verio's fine, I just wanted to consolidate down the number of hosting companies I used, and they got the short straw. But phew - trying to work with them on email was like trying to throw rubber balls through a very small hole, ten feet away! First, it didn't help that they don't make their email address for this sort of thing particularly obvious (there's a list at the bottom). Second, the amount of spam protection on these email addresses is just prohibitive. I must have gone through ten combinations of email sources, addresses and subjects before I finally managed to get a message through. Even once I'd done that, I was asked for more information and I had to do it all again...
I've got there in the end, but I took away a number of thoughts. The first was that what was initially a workable model - the three-stage approach above - has become unworkable due to the late addition of spam protection, and such companies need to rethink it. Second, with my analyst hat on, it is a clear example of how security needs to be about business risk management and not just "block that nasty email" IT risk avoidance.
The business risk in this case of course, is that customers get peed off and go somewhere else.
Here are those email addresses - you wouldn't guess them!
domreg@verio-hosting.net; domains@ntteuropeonline.co.uk; shared_support@ntteuropeonline.co.uk; support@ntteuropeonline.co.uk
The tip (which I got by phone, ironically) was to put the web site address in question as the subject, which overrode the spam filter - you have to do this every time you mail them; don't just hit reply and expect "Re: whatever" to get through. Finally, try to mail from the registered email address of the account administrator, otherwise they'll just ask you to do it all again.
Life's (not) a long song
2006-11-17
Last.fm is great, isn't it? Well, perhaps not for some artists. Why? Because its unit of measurement is the track, not the album.
Take a track such as Jethro Tull's 'Thick as a Brick', for example. The fact it comes 8th in the Tull chart is astounding, given that it is 12 minutes long. I would be prepared to wager that it would be higher if it were shorter - for the simple reason that in any 12-minute period it can only be played once, whereas a four-minute track could be played three times. Mike Oldfield's got it even worse, of course, with Amarok clocking in at over an hour for a single track! Is it any wonder it comes in at only 94th on his own chart?
This matters for artist charts as much as for track charts. If, say, one is listening non-stop to Pure Reason Revolution, each play of the debut album 'Cautionary Tales for the Brave' will result in 4 tracks, i.e. 4 "votes" for the band. A single spin of Moby's 'Play' would result in 19 "votes".
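The arithmetic can be sketched in a few lines (the track lengths and counts below are illustrative round numbers, not actual running times):

```python
# Sketch of the scrobble bias described above: Last.fm counts tracks,
# so shorter tracks and longer track listings accumulate plays faster.

def plays_per_hour(track_minutes):
    """How many times a track can be scrobbled in one hour of listening."""
    return 60 // track_minutes

print(plays_per_hour(4))   # a typical four-minute single: 15 plays
print(plays_per_hour(12))  # a twelve-minute epic: only 5 plays

def album_votes(track_count, spins=1):
    """Each spin of an album yields one 'vote' per track for the artist."""
    return track_count * spins

print(album_votes(4))   # a four-track album: 4 votes per spin
print(album_votes(19))  # a nineteen-track album: 19 votes per spin
```

Same hour of listening, same devotion; wildly different chart weight.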
Now, of course there are those 3-minute boys (not them, but theirs is the song) who would say that it serves anyone right if they have songs that are too long, but that's neither here nor there. I wonder how long it will be before an artist actually constructs a track listing so to dupe mechanisms such as Last.fm.
It's only a matter of time, surely.
Things I should have patented #335
2006-11-17
Triggered by the faux-patent debate, I was reminded of something I thought might actually be worth registering at some point, namely a coding system organised to generate characters from combinations of six simultaneous key presses. It goes with idea #334, the keyboard glove. The sixth key is to do with clever use of the thumb on a balled fist.
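For what it's worth, the chord count works out rather neatly - here's a hypothetical sketch (the key names are mine, purely for illustration) of how many characters six keys could encode:

```python
from itertools import combinations

# Illustrative sketch of idea #335: with six keys (five fingers plus the
# thumb-on-fist trick), every non-empty subset of keys pressed together
# could map to one character.
KEYS = ["f1", "f2", "f3", "f4", "f5", "thumb"]

chords = [c for r in range(1, len(KEYS) + 1)
          for c in combinations(KEYS, r)]

print(len(chords))  # 63 distinct chords: enough for letters, digits and more
```

Sixty-three combinations comfortably covers the alphabet, the digits and a fistful of punctuation.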
TBH it would probably lead to an RSI nightmare, but it passed the time :)
Next: Idea #241, the rubber pavement...
And this week's word is... Skullet
2006-11-17
Think: mullet (that '80s haircut); now think of the same person, a bit older, losing it on top... and you have the skullet.
Expect to see plenty of them at revival 80s pop gigs. Photos here.
Thanks Shane, Tips, everyone!
A brief history of Coates
2006-11-17
This history has now been updated at:
http://coatesvillage.wordpress.com/a-brief-history-of-coates/
Please update your bookmarks, and thanks for all the feedback!
Next year’s must-have gadget: Samsung SPH-P9000
2006-11-20
Still waiting for the specs but, just looking at the picture, this has to be top of the 2007 must-have gadget list: the Samsung SPH-P9000. It's an ultra-mobile PC with 4G (or is that 5G?) capabilities. Of course the big question has to be - how well does it handle voice recognition?
Thanks Jo at Silicon for the gen, and the pic.
That was short - and to the point
2006-11-22
Seth Godin's post about the demise of traditional TV. Short and sweet:
"That was quick: Helene points us to this press release from CBS in which they are touting how well they're doing on YouTube, including a glowing quote from a YouTube VP. Think about that for a second."
Can I use the "Waiting for Godin" pun yet? No, I suppose not ;)
What a nice man - Mike Oldfield's Changeling
2006-11-28
From yesterday's Publishing News:
Oldfield to donate autobiography money
ROCK GUITARIST MIKE Oldfield, who is writing his autobiography for Virgin, has announced that he will donate all proceeds from the book for the first two years to the mental health charity SANE. On publication he will also auction the guitar that was used on his classic Seventies album Tubular Bells, in aid of the charity. The phenomenal success of Tubular Bells led to mental health problems for Oldfield who commented: "For some time I have wanted to tell my story, particularly the dark and difficult times I went through when I was making my early albums. This book is my way of off-loading the past, and I hope it will help others as they face up to challenges in their lives." Virgin will publish Changeling: The Autobiography of Mike Oldfield in May 2007.
What a thoroughly generous gesture - I'm all for the idea of course, though I'm sure this will raise more than I ever could. And yes, I can confirm that the book really is nearing completion!
December 2006
This section contains posts from December 2006.
It's not your computer
2006-12-04
When things aren't right in IT, it can be quite difficult to put your finger on exactly what is wrong. A week or so ago, a helpful blogger put a clip from The Office on Youtube, relating to how IT and the business relate. I've never been a great fan of The Office, partially because I find it a too-painful reminder of certain past experiences... but this particular scene absolutely summed up one of the things we're up against. Here's the transcript:
“What you doing with my computer?” “It’s not your computer is it? It’s Wernham Hogg’s.” “Right. What you doing with Wernham Hogg’s computer?” “You don’t need to know.” “No I don’t need to know but could you tell me anyway?” “I’m installing a firewall.” “OK what’s that?” “It protects your computer against script kiddies, data collectors, viruses, worms and trojan horses and it limits your outbound internet communications. Any more questions?” “Yes. How long will it take?” “Why? Do you want to do it yourself?” “No, I can’t do it myself. How long will it take you out of interest?” “It will take as long as it takes.” “Right, er, how long did it take last time when-” “It’s done.” “Right thank you.” “Now I’m gonna switch it off, when it comes back on it’ll ask you to hit yes, no or cancel. Hit cancel. Do not hit yes or no.” “Right.” “Did you hear what I said?” “Yep.” “What did I say?” “Hit cancel.” “Good.” “Thanks.”
Priceless. If anyone can now find the original clip, I'll put that up as well.
"You put them in the bin, Mr Collins"
2006-12-08
It is difficult to avoid pinning all of the blame for shoddy delivery on the IT department. Of course, everyone should take their share of responsibility but I am reminded of the problems I used to have as an IT manager, due to the complex and bureaucratic procurement systems in place. No order could be under a few hundred pounds, as it cost at least that to get a purchase through; even when successful, it would invariably take weeks unless it was rubber-stamped at the highest level.
Still, it could have been worse, had the bureaucrats had it all their own way. I remember when I started in the job, I stared in trepidation at the forms I had to fill in on receipt of even the simplest of orders. They were long, complex and - worst of all - they were in sextuplicate, each of the six copies having to go to a different department, most of which I had never even heard of.
Having struggled to work out what to do on my own for a few days, I eventually plucked up the courage and called up the accounts department.
"What do I do with all these forms," I asked, in my pidgin French.
"Ah, Mr Collins," came the all-very-understanding answer. "The first, yellow copy you keep; the green copy you send back to me, and the red copy, you give back to the people in goods-in to confirm receipt."
"Thank you," I said, "but what do I do about the other copies?"
The accounts manager became even more understanding, as if she could detect my suppressed panic over the phone.
"You put them in the bin, Mr Collins," she said.
It was a small victory over bureaucracy, but distinctly pleasant nonetheless.
Maintenance, innovation and half-baked pies
2006-12-08
I'm getting just a little bit bored of a certain slide that seems to appear in every single IT vendor's enterprise pitch at the moment. It's the one with the pie chart, where about 70% of the pie is allocated to "Maintenance" and about 30% relates to "Innovation". The theory is that CIOs are looking to reduce the amount they spend on the former, so they can free up resource for the latter.
On the surface, this appears all well and good. Scratch a little beneath the glaze however, and things become far less simple:
- Few companies have a clear idea of the size of their own pie. In the discussions we have had around the book, IT executives have been telling us how difficult it can be to get a clear picture of spend, whether for new technology projects or for maintenance of older systems. This is particularly true if IT responsibility is devolved to the lines of business.
- In similar discussions, we are told that projects are increasingly driven by quite stringent business cases. While the budget totals may add up to the 30%, this is because the line has to be drawn somewhere, rather than from any "here's a piece of pie" view.
- To extend this, organisations that see themselves as technology-driven are looking to the business benefit of any technology as well as at its intrinsic cost - more of a whole-meal view.
- Perhaps for these reasons, the pie itself is shrinking. As discussed by Nicholas Carr a few weeks ago, IT budgets are reducing, and the maintenance side is coming down faster than the innovation side.
- Finally, what does an innovation become, the day after deployment? Why, maintenance of course. There are plenty of new projects going on that are in fact replacing older systems with updated versions - as illustrated by Dale Vile's recent SAP post.
The pie analogy as it stands is not completely wrong, but it is over-used and simplistic. Where it may be true is that the CFO says to the CIO, "We're not going to give you any new money for project X, you're going to have to fund it yourself." In which case of course, it is inevitable that money needs to come out of one part of the budget, to shore up the other.
However, the suggestion that one side of the pie is shrinking and the other is growing, is a leap too far. It is also a dangerous starting point, suggests my colleague Neil Macehiter: "The key point is that innovation without a stable foundation where you understand your key assets, their costs and the value they add to the business, will mean that the only thing you innovate is chaos. If you simply shift budget, without stabilising and consolidating the foundation, you're heading for trouble."
So, what's the alternative? Rather than drawing the line pre-and post-deployment, a better place to start is to distinguish between IT investments that relate to non-differentiating parts of the business, and IT investments that help the organisation differentiate itself from the competition. Of course organisations will still have to work out what IT they have, and where it adds value; but if the goal is to rob Peter to pay Paul, it is a far better approach to drive costs out of the non-differentiating parts of IT so that the differentiating parts can be funded, extended and improved upon, whether they be in maintenance or otherwise.
2007
This section contains posts from 2007.
January 2007
This section contains posts from January 2007.
Happy New Year!
2007-01-21
As Christmas vanishes inexorably into the past, like the dot on the TV screen (get the allusion in quick, before flat screens make the dot a thing of the past as well), 2007 begins and it's all change, change, change. I'm moving companies from one set of buddies to another, books are finished and I'm planning all kinds of exciting things for the year, we'll see which ones come off! More soon... no, really!
Just back from the Architects Council
2007-01-23
The past couple of days, I’ve been very lucky to have attended the Microsoft-hosted UK Architects Council, which brings together a number of quite senior architects and CTOs from a broad range of companies, public bodies and software and service providers. It was my first opportunity to present the six principles in their entirety, so I was ever so slightly nervous as I took to the podium. However, I was reasonably comfortable that the audience were exactly the kind of “we get it” people with whom the core messages would resonate. Or at least should, unless I messed up!
As things turned out, the reception was warm, the feedback highly useful, and a number of interesting debates were sparked off. Following the run-through of the principles themselves, I hosted a workshop session in which the attendees divided into three groups to consider Enterprise Architecture (EA) in the context of the Technology Garden. I wanted to know the answers to three questions:
- What were the potential ways that EA could help deliver IT-business alignment, and how should these be articulated?
- What are the challenges faced by organisations trying to deliver EA?
- What actions need to be undertaken to maximise the chances of EA success?
The results are shown in the mind map below. One of the key take-away messages for me was how important EA is seen to be, when trying to enable IT people to understand what the business actually does. Equally, however, this will inevitably involve treading on toes, on both the IT and business sides of the house. If organisations want to reap the benefits of an improved dialogue between the two sides, they will have to consider how they counter the inertia of past practices and behaviours.
By the way I meant what I said by ‘lucky’, as this really was a top-class forum. Day 2 was spent discussing how to raise the level of IT skills in the UK. More on this – and another mind map – soon!
February 2007
This section contains posts from February 2007.
Script for a Jester's Tear, Riffs Bar, 27/01/2007
2007-02-03
Surely the best way to judge a gig is by looking at the crowd, and nobody could question the level of enjoyment felt by those leaving Riff's bar last Saturday evening, having just seen Mick Pointer and his friends perform a rendition of Script for a Jester's Tear. Nick Barrett was on guitar and John Jones took on the vocals, aided and abetted by a word-perfect audience (incorporating, of course, several Norwegians). It was the first time for many years that songs like The Web have been played live, and if it turns out to be the last, it will have been a fine note to finish on.
Support was by Holly Petrie, who can be found on Myspace.
Get a first life
2007-02-03
I was tickled pink by this link, forwarded to me by Neil W-D. On the day after they talk about it at the Davos summit, it seems appropriate...
... meanwhile, here's a few random predictions:
* second life is a precursor to something really good, that is both usable and compelling
* Microsoft brings out its own virtual world, which doesn't do very well
* online communities are taken to court under the new US gambling legislation
You heard it here first :)
Start 'em young
2007-02-05
Another of the sessions at the Architects Council was a workshop held between members of academia and the council members (largely from industry), to report on last year's "Developing the Future" initiative, and to discuss the quality of people coming into the UK IT industry and how it could be improved. This was a hugely valid and valuable discussion - particularly in the context of IT/business alignment. Undoubtedly there will be issues in other countries, but there does seem to be a dearth of high-quality post-graduate trainees in the UK who really "get" what IT is for - to the extent that UK companies are looking abroad for skilled staff not just because people are cheaper, but because, quite simply, they are seen as better.
The mind map below shows the outcomes of the discussions, in terms of both the challenges and the resulting requirements. I thought the school system came in for a lot of stick, which was unfortunate as (or perhaps it was because) there was nobody in the room to defend it!
Marillion - Port Zelande, Netherlands, 02/02/2007
2007-02-10
Well, this was a bit of a cock-up on my part - I had booked tickets for the Marillion Weekend without taking too much note of the dates... and to my horror, (daughter) Sophie's birthday was slap in the middle of it! Given that I'd booked, I still felt it was worth going for a couple of nights, not least to see This Strange Engine played, and to meet up with some of my best, yet normally all-too-distant friends.
It's funny, one would think one could get bored of seeing certain bands play, but all they had to do was start for me to be reminded how it was about the music. The energy of the 2,500-strong crowd was electric, and as they played - first some new songs and then the album - the atmosphere was just amazing. I wouldn't go as far as saying it was one of the best Marillion performances I have ever seen, but it is certainly the best one I can remember!
Leaving on Saturday (thanks so much for the lift Mark and Ken) was not without issue - I left my phone behind, but it did manage to have some fun without me... priceless :)
March 2007
This section contains posts from March 2007.
What characterises a service?
2007-03-28
A couple of weeks ago I was party to another event, this time hosted by IBM. At short notice I was asked to facilitate a session on defining services, which was interesting in the extreme as it very quickly became clear in the earlier sessions that there was no clear definition of what a service was – particularly as there were two types of people in the room – business analysts and technical architects.
So, I decided to take a different tack. Rather than trying to fix a definition of service, I thought we would go round the room and ask what characterised a “good” service. Here’s what we came up with:
• Value – the benefits of accessing a service should outweigh its costs
• Reusability – it should be possible to access the service repeatedly, with the same level of interaction and service quality
• Meaningfulness – it should be possible to describe the service in clear, relevant terms
• Autonomy – the service should be cohesive, i.e. clearly bounded
• Independence – the service should also minimise dependencies on other services
• Contract – the service should offer its own guarantees in terms of what it delivers, and how; such terms should be subject to prior acceptance by the service user
• Uniqueness – the service should minimise overlaps with other services
With hindsight, there is possibly some honing that could be done with the above list – the difference between “Autonomy” and “Independence” for example, is not all that clear. What was interesting however, was that even as the debate raged around what a service should look like, there was relatively little controversy about what separated the wheat from the chaff. For organisations looking to develop their own service strategies, this would appear to be a good place to start.
April 2007
This section contains posts from April 2007.
Rush - Chemistry now available in German
2007-04-04
Well, this came as something of a surprise to me - I didn't even know it was being translated! But then, a copy dropped on the doormat this morning. While I prefer the original cover, the graphic is quite clever even if it does give Alex a black eye. Inside, better print quality and bigger pictures are a considerable improvement, but still no colour.
More information, as ever, at Amazon - which has some good quality pictures; better, dare I say, than in the book itself.
June 2007
This section contains posts from June 2007.
Phew! What an upgrade!
2007-06-30
Well I've just upgraded to the latest version of Wordpress. I won't bore you with (i.e. I can't be bothered to list out) the entire blow-by-blow account, but here are the lessons learned:
- Check your ISP can support it - in particular MySQL 4.0 and above. There was a helpful message at the end of the installation process to this effect, at which point I found I didn't have it. Which brings me to:
- Absolutely, definitely, completely do make sure you back things up first! Take an FTP dump of the Wordpress tree, plus a MySQL export - that should be enough to be sure you can revert to the previous version...
- Possibly test the restore process before committing - I had the rather alarming message to the effect that MySQL could only re-import export files less than 2 Meg in size. Phew - mine was.
- When upgrading, follow the instructions on how to copy the directory tree. Don't do what I did and overwrite the wrong bits (refer back to "make sure you back things up" :) )
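For what it's worth, the "back things up" step can be scripted. Here's a minimal sketch in Python - the directory name and database details are illustrative assumptions, not my actual setup:

```python
# Minimal pre-upgrade backup sketch. "wordpress" stands in for wherever
# your Wordpress tree actually lives; the database names are placeholders.
import datetime
import os
import shutil

wp_dir = "wordpress"                                            # assumed install dir
os.makedirs(os.path.join(wp_dir, "wp-content"), exist_ok=True)  # stand-in tree

stamp = datetime.date.today().strftime("%Y%m%d")
# 1. File-level backup of the whole Wordpress tree -> wp-backup-YYYYMMDD.tar.gz
archive = shutil.make_archive(f"wp-backup-{stamp}", "gztar", ".", wp_dir)
print(archive)

# 2. Database export - left as a comment since it needs a live MySQL server:
#    mysqldump -u wp_user -p wp_db > wp-db-backup.sql
```

Keeping the file tree and the SQL export separate means either half can be restored independently - and actually testing a restore is the only way to be sure.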
That's probably it - apart from a few hiccups the whole process was remarkably smooth. I'd like to tip my hat to the folks at Easily who were both highly responsive in tech support (thanks Francois!) and who did the upgrade when they said they would.
There, you never know I might even start posting again!
P.S. Just noticed I'm getting an SQL error in my links listing - oh well, I'd better sort that as well.
July 2007
This section contains posts from July 2007.
Not quite a job
2007-07-05
Greetings from Mallorca. Or Majorca, depending on who you are talking to. I've been invited here to speak to some senior IT guys, and my chosen topic is (unsurprisingly) The Technology Garden. They're just setting the room up now, but I was able to wake up and feel the sun on my face - which could be the only time this year judging by the UK weather.
Test Post from Live Writer
2007-07-06
I should probably delete this post as soon as I type it, but equally, I probably won't. Following a recommendation from goodman Governor, I thought I'd give Live Writer a try. Not bad so far - I hear Adobe has a competing blog posting tool, but there's one major difference - this one is free!
Oh, and one of our chickens died yesterday.
The Grossest thing I have ever done...
2007-07-08
... was yesterday, when I ran over an already dead hedgehog. With a lawnmower.
InstantRails in Vista Ultimate
2007-07-12
OK, so I couldn't find this anywhere on the Web so I thought I'd write it here. If (when) you get the message "Either Apache or MySQL cannot run because another program is using its port", this is probably because you already have a web server running. I don't know if this is IIS, but if you run through the Getting Started section starting "Note that if you have IIS installed...", that stuff works - or at least it did for me.
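If you want to confirm the diagnosis before digging through IIS settings, a couple of lines of Python will tell you whether something is already listening on a port. This helper is my own sketch, nothing to do with InstantRails itself:

```python
# port_in_use returns True if something is already accepting connections on
# the port - the likely cause of the "another program is using its port" message.
import socket

def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success, i.e. when a server answered
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for p in (80, 3306):          # Apache and MySQL defaults
        print(p, "in use" if port_in_use(p) else "free")
```

If port 80 comes up "in use" and you haven't started Apache, something else (IIS, Skype, whatever) has beaten you to it.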
What’s wrong with being pro-Gartner anyway?
2007-07-12
"Interesting" question: is ARmageddon pro-Gartner, or anti-Gartner? I wonder if this whole line of thinking is missing the point. I mean, I know I'd rather have a bigger slice of all that subscription funding - they are the 800-pound gorilla after all - but is it so wrong to think, or indeed say, that Gartner might do at least some good things? I've seen a few magic quadrants in my time, and some of them are pretty well thought out, solid pieces of analysis that raise a bunch of seriously important questions that end-user organisations should be asking.
Of course, that doesn't mean everything they do is going to hit the target - and of course therefore, they should be subject to scrutiny - just like the rest of us. It's also been written that some vendors feel they have to pay Gartner's fees before they'll ever see themselves represented in the quadrants - rightly or wrongly - I know Gartner hotly contests this! It may even be that Gartner's product-oriented model is itself based on an industry as it was ten years ago, and not how it will be in the future - but that's an industry-wide issue, and it doesn't prevent Gartner analysts from being insightful in their own domains.
Meanwhile, we believe we have a whole bunch of differentiators that make us a pretty attractive alternative - always happy to share these! But perhaps it's just too easy to bash Gartner because it's Gartner, which equates to opinion, not analysis. The only people who can really decide whether or not Gartner is adding value are their enterprise customers, and that's not a revenue stream I see drying up any time soon.
Hotel top tips #182 - working a mixer shower
2007-07-12
If faced with a mixer shower that needs to be operated using the bath taps, first turn on the hot. Then, add the cold until it arrives at a suitable temperature. Then pull the knob to send the water to the shower head - and hey presto! A perfectly regulated shower.
Brought to you from the Horse Guards Parade, London. Grand but a tiny bit shabby, with a nice view of the Eye.
Taste Test: Cotswold 3-8 vs Summer Breeze
2007-07-15
At the Cotswold Show I was invited to compare a recent beer purchase of George Gale's Summer Breeze, with one of our local brews - Three-Eight from the Cotswold Brewing Co. Both are 3.8% alcohol, and both include Saaz hops - but the major difference perhaps is that one - the Cotswold - is a lager, while the other is an ale.
And the verdict is - they're both jolly good. While the Summer Breeze is a fine beer however, the Three-Eight has the advantage of combining the coolness associated with a lager with a hint of the rounded charm of an ale. This may be down to the ingredients - to be honest I have absolutely no idea - but if I were to choose between the two on a summer's day I would probably go for the Cotswold.
I might tell a different story later on: as the balm of the day takes on the slight chill of the evening, I might be glad of the warmer character of the Summer Breeze. For now, however, it is the Cotswold Three-Eight that has my vote.
Artists as Businesspeople? Whatever Next
2007-07-16
As I sit and listen to Prince's new album (included as a cover disk on yesterday's Sunday Mail), I'm forced to ask myself about this "industry first". While the man previously known as the man with no name may have stolen a march with the act, he's not the first to have achieved the outcome.
Prince reputedly received a million-dollar sum for allowing his latest release to be issued in this way. Now, given that producing an album costs hundreds of thousands, in essence he will have been able to cover his recording costs. I might be assuming too much here but the single, most important benefit is artistic freedom.
Marillion went down a similar track when they invited their fans to pre-order an album before it was written, but while they may have been first with the Internet marketing idea, again, they are unlikely to be the first band to release themselves from the shackles of a contract by finding money outside the recording biz. No doubt, as well, there will be other initiatives.
What both of these examples share is that the artists have minimised sales risk with a non-refundable advance. In neither case is artistic integrity compromised, and both rely on thinking about the bigger picture of sales and marketing to ensure that they're doing more than covering the costs.
There will be other ways of doing this - no doubt in ten years' time we will look back on "firsts" of albums being paid for through government funding, bank loans, lottery wins and Google ads. What with Myspace as the incubator, and with artists understanding they stand to be just as successful (and potentially better off) without major label backing, it becomes less and less clear exactly how the music industry is going to retain any position it has left.
P.S. The album's pretty good as well!
August 2007
This section contains posts from August 2007.
The downside of open communities?
2007-08-29
What fun. Several current blog roads lead to this post, one man's rant against being contacted by PR agencies just because his blog is seen as influential. Leaves me a bit flummoxed, not so much from the "what did you expect, when you're famous" angle, but more from the whole nature of Internet-based social activities. Fact is, if you want to swim with the fishes, you're also going to be in with the sharks, octopuses (octopi?) and plankton. To suggest things are otherwise is fundamentally missing the point of open communities.
September 2007
This section contains posts from September 2007.
Deeper meanings?
2007-09-16
Due to a sequence of circumstances I have ended up reading four books in parallel. At first glance they are unconnected:
Derren Brown - Tricks of the Mind
Richard Dawkins - The God Delusion
M. Scott Peck - The Road Less Travelled and Beyond
Christopher Booker - The Seven Basic Plots
At first glance these were totally unconnected, apart from being non-fiction and vaguely esoteric that is. Delve a little deeper and two are people wanting to debunk myths, one is explaining their roots and the other is exploring that part of the human psyche that needs them. Derren was a staunch Christian who is now keen to reveal all things "magical" as mere (though still clever) trickery, Richard is keen to state that magic is how we see things we don't understand, and Scott thinks we could all do with a bit more of it. Christopher won't be the last to point out how we love certain types of stories, but whether they fulfil a purely human or a spiritual need is a moot point. I'll let you know in, ooh, about 3,000 pages!
How did I end up reading them all at once? Who knows, perhaps it was meant ;-)
This morning I shall mostly be...
2007-09-18
Installing Ubuntu Linux in a Microsoft VirtualPC virtual machine. And some other things besides.
For my own future reference as much as anything, I had to:
* install in safe graphics mode so that the display was viewable during install
* reduce the colour depth to 16 bit (booting in recovery mode and running "dpkg-reconfigure xserver-xorg")
* add the parameter "i8042.noloop" to the kernel command in the file /boot/grub/menu.lst so the mouse can work
* log in as root and run "dhclient" to pick up an IP address from my wireless router
* (update) added "snd-sb16" to the file /etc/modules, to enable sound
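For reference, the menu.lst change above ends up looking something like this - the kernel version and root device here are illustrative, yours will differ:

```
title  Ubuntu, kernel 2.6.20-16-generic
root   (hd0,0)
kernel /boot/vmlinuz-2.6.20-16-generic root=/dev/hda1 ro quiet splash i8042.noloop
initrd /boot/initrd.img-2.6.20-16-generic
```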
There - so easy my granny could do it :-) I then downloaded all the patches (180Mb of them) to be up to date.
First impression - not too shabby! I'm running 2Gb of RAM, so I've allocated 500 Meg to Ubuntu and it's running fine. Nice, clean interface as well - and plenty of games!
Circular arguments on Analysts from the Fool
2007-09-20
Took this from the Motley Fool:
“The current model of analyst-intermediated, opinion-based technology buying and selling produces poor financial returns. Across the world, corporate managers spend more than $1 trillion a year on technology. To help managers invest, and technology marketers persuade, $2 billion a year is spent on technology analysts. Yet 70% of projects fail to deliver a financial return, according to the Standish Group and other commentators.”
That’s an interesting, but not particularly insightful point. IT may produce poor financial returns but that doesn’t mean Gartner, Forrester etc are failing their shareholders. Also, what is Standish Group other than an analyst house? And finally, what of the 30%? Should we all go back to pen and paper, and blow the Internet? I don’t think so.
Not so massive
2007-09-20
Well, I finally got round to listening to Massive Attack. Proof (should we need it) that another man's meat is this man's poison. I'm sure it's very good, if you're into that sort of thing.
Update: Hold that thought... now playing: Teardrop
"This is a business line..."
2007-09-20
...is the best brush-off I know for unsolicited callers. As it happens, it's true - BT has messed up and put my business number in the home phone book. When the callers come however, all I have to do is say the above and they immediately hang up. Superb!
October 2007
This section contains posts from October 2007.
First impressions of Mumbai
2007-10-03
I've been invited to India to give a presentation at a conference tomorrow, so I flew in 36 hours early to see what I could learn from the place. Having arrived yesterday lunchtime (it's now 7am the next morning), I thought I should make some notes because experiences may well end up overwritten by today's events. So, here goes:
- Jet Airways - very comfortable flight from LHR, even managed to get some sleep (however, recommendation: never fly with half-finished dental work)
- Arrival at airport, and exchanged 35 quid for 2700 rupees. It looked like a lot of money (and as things turned out, it is). Met by B.D., the conference organiser, who took me to the taxi rank, past a row of ancient-looking black cars in various states of dilapidation. No, wait, that is the taxi rank. We get in and I look round for a seat belt. Denied.
- The journey offers an immediate presentation of the complexities and contradictions of India. At least I think it does, not having been here long enough. In the road sit some women, a girl picking nits from her mother's hair. Pass the conference centre, very posh, and almost immediately a set of slum dwellings by the side of the road. Most of the roadside is covered with stalls and shops, and people sit, squat or lie just about anywhere, many asleep in the noon sun.
- Hotel Highway Inn is unremarkable but comfortable, friendly staff. Set back from the road right in the middle of an area I would have baulked at staying in, if it hadn't been pre-booked. Oh well, in for the experience I think, clutching my laptop just a little too tightly to my side. Room has shower, TV, and a whopping fan on the ceiling. And a noisy but serviceable aircon unit. There is no Internet access, when I enquire I'm sent to a small shop across the way, which does indeed have access for a single computer but no laptop connectivity. Even the phrase "laptop connectivity" is starting to sound a little alien. I buy a bottle of lemonade and return to the hotel, planning to head into town where I am sure there'll be the usual Starbucks, Wifi, street painters, book shops... hang on, where exactly did I think I was again???
- I enquire at reception for a map, and I ask how to get to the Gateway to India - one of the first places that popped up on the tourist sites on the Blackberry (not totally unconnected then), and incidentally, where the last ships set sail for Britain as India was decolonialised, or whatever the word is. Incidentally, today is Gandhi day, for added poignancy and (so I am told) less traffic. I assume - wrongly of course - that the tourist areas will have such facilities as the average incomer (me) might expect. i.e. Wifi. The receptionist, in broken English, firmly steers me away from attempting to use the train system, and so instead I get in another taxi.
I should at this point stress that I know I was singularly unprepared for this trip, for a whole stack of reasons. The unpreparedness signalled itself in a number of forms, not least my usual catch-up-on-email-on-plane-ready-to-upload-when-I-arrive habit, which was why I was rather hooked on the idea of finding a Wifi signal. I'd already established I could get GPRS but there were some rather large file transfers waiting to take place and given wireless roaming, I didn't want to get back to find a bill for more than the cost of the flight. Besides, it gave me something to focus on, so into town I headed.
The round trip journey was about 5 hours, spent in taxis, motorised rickshaws (Italian scooters on steroids) and walking. I was offered drugs twice, women once, and I had a man grab my ear as if that would make me more likely to want him to drill it with a piece of not-so-sterilised metal. I visited two tailors who made me feel very uncomfortable about the fact I didn't - no, I really didn't - need a suit right now. Down a dark back alley full of cats and stinking of piss I found - or rather, was shown - an air-conditioned oasis of computers, believe it or not with Wifi access, 10 rupees for 20 minutes, which equates to not very much at all. I gave money to a sunken-cheeked man who really looked like he needed it, then denied it to another man who whipped up tears, not knowing who was telling the truth and who wasn't. New development everywhere - to the extent that it all looks ramshackle, like a Frenchman's house when he is half way through his renouvellements.
I agreed to meet with the same taxi driver 2 hours after I left him, ended up arriving back early and waiting an hour, only for him not to turn up and me wondering whether I'd been the idiot for agreeing, or was seen as the idiot for not communicating my request properly. There's more, so much more, tidbits of experience that would bore anyone stupid if I tried to list them, fragments of understanding that leave me none the wiser but nonetheless feeling I had learned something.
Back at the hotel, I buy a local SIM so at least I can get some GPRS access, with an international calling card. Out for a meal, chat to the diamond seller sitting opposite. We talk about contradictions, call centres, success and failure, the new riches of India making it harder to be poor as the cost of living rises. The value of property in this area has doubled, he tells me, due to a new microtrain that is to pass straight down the main road. And then, I go back to the hotel and sleep, unencumbered by the noise of the aircon or fan, which just about drown out the street outside.
This morning, a shower, and I write this. Today I meet with B.D. and then I visit some of Mitul Mehta's colleagues at TekPlus. First I need to find some food. Let the day begin.
An Email Day
2007-10-05
Thanks to the long-haul flight back from Mumbai, I have now spent exactly 8 hours doing nothing but email. That's responding to emails I've been sent, completing actions (e.g. report reviews) and emailing the responses, or replying to people I have recently met and doing whatever action was scrawled on their business cards.
Quite stunning, how it should all take that long. If anyone's still waiting for anything from me, it's probably not going to happen so don't hold your breath.
Clubbing - the LinkedIn way
2007-10-05
As a test of the LinkedIn "Ask a question" mechanism and in response to a genuine need, I recently asked my network of connections what they thought about private clubs in London. I really had no idea - but having been stung for £250 one night I had this thought that there had to be cheap yet comfortable places to stay, if only I knew about them.
To my delight, I received no fewer than 10 responses, each from a different perspective. They ranged from the cheap and comfortable (just what I was looking for in fact), to the kinds of places that needed references before they'd give you a look-in - which is incidentally exactly where something like LinkedIn comes in extremely handy. Among the pointers were also links to useful web sites, such as www.travelstay.com for budget hotels. Several people also suggested block booking a number of nights per year, which enabled rates to be negotiated. And finally, an old friend suggested we meet up at his favoured joint for a beer next time I'm up.
So all in all, a rather good use of my time!
Second thoughts about Mumbai
2007-10-06
I was only in Mumbai for 3 days as I had to get back for a funeral (yesterday), which was a shame, but it was long enough to make a lasting impression. My first thoughts were very much from the perspective of a westernised rabbit finding himself in the headlights of another culture, so rather than attempting to rewrite them, here's a snippet of an email I wrote to an Indian friend.
- The general feeling of industriousness was telling. In England there have been examples of people feeling threatened by people coming over to the UK and working far too hard compared with their western peers. Catching a glimpse of the other side of the fence, where such effort appears absolutely the norm and not the exception, really put things into perspective for me. Meeting the staff at TekPlus was great, 10 MBA university graduates all with such drive and enthusiasm! India has so much to offer, and very clearly, it will be a major economic power in the future (in some areas of course, it already is).
- The amount of construction work going on was stunning. One day I took a rickshaw to Andheri (west) from where I was staying, in Andheri (East), and saw plenty of new buildings going up; on another day I headed south in a taxi to the Gateway of India, and saw a great deal of development as well on that journey. I understand in some areas of Mumbai, property can cost as much as in Manhattan.
- I had some good conversations, for example with a diamond seller with whom I shared a table at the local restaurant. He was saying how it was difficult for the poor, as successful businesses and people were getting richer, pushing prices up beyond what poorer people could pay. Square footage is doubling in price where I was staying, for example, due to a new micro-train being planned.
- The seeming contradictions between rich and poor, as both rub shoulders, were quite a surprise to my untrained eye. This was entirely down to my perceptions of course, but to see people from all walks of life going about their own business right next to each other was very different to how things are generally in the West (where we like to partition things up, and there is much fear and resentment). I was staying in Andheri (East), near the railway station - so it was certainly not a "sanitised" tourist resort but equally, I had aircon in my hotel and a hot shower, which I took to be a luxury. Many people were sleeping on the street outside.
- The media - I was an avid reader of newspapers while I was there - gave me a great deal of insight as well, both into cultural differences and local issues. In general I would read with interest something presented differently to how it would be here ("Guidelines for hugging in the workplace," for example), and then almost immediately think of several examples of similar contradictions in my own culture. Interestingly though, I did find (in the papers and during the days) more examples of cultural alignment than I found differences, which helped make me feel quite comfortable wherever I was.
To my surprise I was not daunted by the squalor in various places but neither was I unaffected by it, nor the bustle and the noise. Overall, I found people very welcoming, accepting and helpful, and I didn't feel particularly threatened. The obvious question of course is, "why should I be?" but then, it was my first time in a very new place, by myself, where I really did feel I stuck out like a sore thumb. It won't be my last - I have already been invited to come and speak again at another conference, and also to be a visiting lecturer at a University in South India for a couple of weeks. We'll have to see what develops but equally, I'm very much looking forward to going back and I have no doubt I shall be spending a lot of time in India in the future.
Looks like I wasn't the only person writing about travelling in India last week, I defer to the greater experience and I'll have to read the book!
Off to IASA
2007-10-08
I'm assisting Matt Deacon this evening at the IASA meeting in London. While I'm not an architect myself ("I'm a lover, not a fighter" springs to mind, though I have no idea why), I do get a lot from hooking into the people whose job it is to make IT work in their organisations. More than just keeping me grounded, it's also a great environment to test ideas and find out what's really going on.
From Alphadoku to Alignment
2007-10-08
I've been meaning to write this for a while, but as usual I couldn't find the time to write the full story. So I won't, but at least I'll give the highlights! It just struck me as an interesting example of how connections yield, well, connections - and potentially results.
A long time ago, I'd been thinking about writing a book about how to do IT right. Not that I had all the answers, but on my travels I've met plenty of people who have suitably inspired me, and I wanted to distil their collective knowledge in some way. I even started writing bits of it, but it never really reached critical mass.
Meanwhile... one day I was feeling a bit sneaky. Sudoku was growing in popularity, and I realised it was only a matter of time before people brought out bigger versions. How about an alphabetic version, I thought, a 25x25 grid based on letters? It could be called - alphadoku! A quick check revealed that the web site www.alphadoku.com was free, so I, ahem, purchased it.
Almost immediately I felt guilty - how could I sit on a web site without doing something useful? So, I set about producing one of said puzzles. A little while later, it was done - and I posted it up. One day, I thought, I would get round to writing code that could autogenerate alphadoku puzzles (note: started, but never finished - yet!)
So, I left things as they stood.
A goodly handful of months later, I got a phone call from Wiley, the "for dummies" publishers. Was I interested in writing an "Alphadoku for Dummies"? Yes of course, I said. Unfortunately, Wiley went away to think about it some more, but when they came back they had decided the market was probably no longer in the ascendant - which may have been a good job considering my coding skills.
However, I did ask - "while you're there, I've been thinking about this technology book - interested?" Perhaps out of their guilt this time, I was put in touch with the right people. Almost immediately I realised how crap the book would be if it was just from me - by no coincidence, I was at the time working with the guys at MWD, whose opinions I valued (and continue to value) enormously, as well as those of my past and present colleague Dale Vile. So, I proposed we jointly write the thing, and for better or worse, everyone agreed.
So, from registering a Web site, we have a book - "The Technology Garden". I thoroughly recommend it of course, and it wouldn't be a quarter of what it is without being a team effort between the four of us. Still, ain't it interesting what unexpected acorns can grow into?
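P.S. For the terminally curious, the auto-generation code never did get finished, but a starting point could be as simple as this Python sketch: one known-valid 25x25 grid built from the classic cyclic sudoku construction, plus a validity checker. The names and approach here are mine, not from any puzzle library:

```python
# Build a valid 25x25 alphadoku grid (letters A-Y) and check its validity.
import string

N = 5                    # box size; the grid is N*N x N*N, i.e. 25x25
SIZE = N * N
LETTERS = string.ascii_uppercase[:SIZE]   # 'A'..'Y'

def make_grid():
    """Classic cyclic construction: each row is a shift of A..Y chosen so
    that rows, columns and 5x5 boxes all work out - always valid."""
    return [[LETTERS[(N * (r % N) + r // N + c) % SIZE]
             for c in range(SIZE)] for r in range(SIZE)]

def is_valid(grid):
    """Every row, column and 5x5 box must contain all 25 letters."""
    full = set(LETTERS)
    rows = all(set(row) == full for row in grid)
    cols = all({grid[r][c] for r in range(SIZE)} == full for c in range(SIZE))
    boxes = all({grid[br + r][bc + c] for r in range(N) for c in range(N)} == full
                for br in range(0, SIZE, N) for bc in range(0, SIZE, N))
    return rows and cols and boxes

grid = make_grid()
print(is_valid(grid))
```

A real generator would then shuffle rows, columns and letters within the construction, and blank out cells while checking the puzzle still has a unique solution - which is, of course, the bit that was never finished.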
Mobile experiences from Mumbai
2007-10-08
Quite beside all my other experiences and insights derived from spending 3 days in Mumbai last week, I did have a good try with mobile technologies. Here's a quick summary:
- mobile access via Blackberry - check (though I am feeling a bit nervous about what the bill is going to look like). Roaming service was EDGE, provided for the most part by Hutch, now part of Vodafone
- mobile access on laptop via HSDPA modem - check, but I rarely dared use it for fear of roaming charges.
- there was no Internet access at my hotel despite it having been advertised. Indeed, the only Internet within walking distance was a computer in a nearby shop. I had to travel to get full-speed Internet on my laptop.
- many businesses had a "Walky" telephone, which was essentially a mobile phone for office use. Looked like a desk phone, no wires, big aerial.
I did purchase a local pre-pay SIM, so I now have my own Indian phone number, which is a bit cool. Setup was straightforward for calls, and pretty cost-effective - about 2 rupees per minute for international calls, as long as I took out the international calling card option, which I did. Topups could be done just about anywhere; retailers who lacked terminals had a clever scheme where they could top up my phone from a phone of their own - now that's what I call mobile! Strangely, despite being with Vodafone, the service showed up as "Orange" on my handset.
Unfortunately GPRS access could not be set up in time before I left the country, due to the time needed to process my ID. Oh well, hopefully this will be working for next time!
EMC buys Mozy - should we all be doing online backups now?
2007-10-10
Online backups of desktops and laptops are such a no-brainer for so many small companies and individuals - aren't they? Markets are all about supply and demand, so if this were true, there would be a mass of different options available. But there isn't, which suggests that either the conventional wisdom is wrong or something that needs to be in place, just isn't.
I started playing with online backups a while back, when I was introduced to the company Connected.com, way before it was acquired by Iron Mountain (a move that continues to flummox me). At the time the bottleneck on such capabilities was the available Internet network bandwidth, but then broadband arrived and took that problem away. I continued to use Connected.com for quite a while, but then after one laptop upgrade I never got round to reinstalling it. These days I'm running a RAID box in the spare room at the home office, and backing up our computers to that on a nightly basis. Hmm, no online backup. Why?
The main answer is probably that the quantity of data that is changing, despite broadband, exceeds the bandwidth available to back this up. For me this is about email - to keep some kind of control on my email load I make extensive use of offline folders, which are now several gig each in size. While Connected.com professed to do clever things with email data files, my personal backup windows were growing far too big. Meanwhile, personal photography and video capture habits are growing a large quantity of multimedia files, and to back them all up (40 gig and counting) would break the back of all but the most expensive online backup services, not to mention my ISP fair use policies.
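To put rough numbers on that bandwidth problem (the figures here are purely illustrative - 40 GB of multimedia over an assumed 256 kbit/s ADSL uplink, a typical upstream rate of the era), the initial full upload alone would be measured in weeks:

```shell
# Back-of-envelope only: 40 GB of files over an assumed 256 kbit/s uplink.
bytes=$((40 * 1024 * 1024 * 1024))   # 40 GiB expressed in bytes
bits=$((bytes * 8))                  # converted to bits on the wire
rate=256000                          # assumed upstream rate, bits per second
seconds=$((bits / rate))
days=$((seconds / 86400))
echo "Initial full upload: roughly $days days"
```

Two weeks of saturated uplink just to seed the backup - before any nightly delta, and before the ISP's fair use policy kicks in.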
Meanwhile, there is the question of usability. It pains me to acknowledge that many small organisations and people aren't running backups - or at least, are facing the risk that if they have a hard disk crash, they could lose all of that customer data, or family snaps, or whatever. I really, dearly want this problem solved. While Connected.com was eminently usable by me, it still fell into the trap of assuming that the user was computer literate, and so could only ever work for the minority.
And so, to Mozy. EMC's acquisition is the first time (to my knowledge) that an online backup service has entered the fold of a major vendor whose business is based on effectively delivering storage solutions. I'll let EMC blow its own trumpet on that one, but let's face it, you would hope that if anyone can crack the code it would be a company that had set its core business on it!! Immediate caveat, no, I don't believe EMC's going to get it right just because they set their store (sic) on these things. However, one would hope they stand more of a chance than, say, manufacturers of washing machines, or indeed, companies that have built their businesses on providing secure offsite locations for holding large wads of paper and boxes of tapes.
To resolve the issues of bandwidth and usability together, Mozy needs to be delivered as an integral part of any small company's information risk management strategy. I deliberately use this term rather than backup strategy, because let's face it, backup is the answer, but not the question. Not all information was created equal, and not all information is subject to the same risks - so, given the fact that we have different ways of backing up and protecting information, we should be able to pick and choose which mechanism is appropriate for what type of information.
Looking specifically at usability, however, such gubbins needs to be kept under the bonnet. Consider a specific example - the emails I have sent over the past few days are quite likely to see responses (I hope), and I would probably like to refer back to them. Meanwhile, while I may want to refer to documents I created as part of older projects, the chances are I won't want to change them. In terms of risks - the chances of a house fire in the next week are 52 times less than the chances of a fire over the next year (and potentially increasing further - if my son's demands for fire poi are ever heeded). In other words, I would dearly love to know that the information I have just created is protected in some immediate way, and I am pleased to have my older data protected, but they don't necessarily need the same mechanisms: if my recent data is backed up in-house that's a pretty good start. I may change my mind if I'm working remotely from the office.
The bottom line, for me anyway, is that Mozy is a feature, not a product. The product I'd like to see in small businesses is one which exists as a client on every computer, and which can then deliver a co-ordinated backup strategy that meets the needs of each individual. If disaster (of whatever form) should strike, then emphasis switches onto the usability of recovery tools, so that individual files (or indeed, the entire environment) can be rebuilt. Mozy by itself may provide a tool for individual use that offers significant protection, particularly for IT-literate individuals with lower data transfer requirements or high bandwidth availability. For it to be sustainable into the future, and pass the "upgrade and reinstall test", it will need to become an integral part of the backup toolset, in-house and external.
reCaptcha - Real text verification
2007-10-10
I've just installed this handy little widget on my personal blog. For those (anyone left besides me?) who hadn't heard of it, reCaptcha is a word verification tool reputed to guard against comment spam. I've only just installed it so its success is yet to be proven - however it does have a spin-off benefit. Text strings are taken from publications being scanned (currently) for the Internet Archive; this is what the reCaptcha guys have to say about it:
"reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly."
I believe comment spam is still coming in through the back door - but I have Akismet to help with that (and thanks to those guys). Meanwhile, here's a tool that is not only keeping my front door clean, it's making a literary difference as well. Hurrah to that.
SAP blots out Business Objects
2007-10-10
"Never comment about those things you know nothing about," is the recommendation - so I won't remark on SAP nor, particularly, on Business Objects, though I have had dealings with both at various times. Aside from questions about what BO means to SAP (maybe I should rephrase that), what interested me was what this means for the wider market.
As I have written previously, information management and, for that matter, service management are like different tracks up the same mountain, with fissures in between caused largely by historical scenarios - even the separation between structured and unstructured data comes largely from the adoption of two types of (incompatible) data repository. The result is that we have, today, a number of larger companies that offer large-scale, solution type software, and meanwhile smaller vendor companies whose job it is to add functionality or enable integration between the bigger stuff.
There's a whole stack of acquisitions happening right now, which is partly to do with the liquidity of the market (i.e. companies have cash to spend) but equally, there's a consolidation phenomenon taking place. The big players - IBM, Oracle, EMC and now SAP - are all filling gaps in their own portfolios, like blotters, absorbing the smaller players as they go. This is healthy, particularly for larger companies that want to rely more closely on a smaller set of vendors; but also for the vendors themselves, who can look to deliver more integrated solution sets and service packages to go with them.
P.S. Still waiting for the HP acquisition of Opentext/Ixos! No I don't have any insider knowledge, but to me it would be like a key fitting in a lock.
Rush - Wembley Arena 10/10/2007
2007-10-10
Now the first thing I should say is that I'm still in "digestive recovery mode" following last week's most excellent Mumbai adventure. So, I had to leave my seat more often than I would have liked... Equally, I'd bought a ticket at the back of the hall due entirely to the lateness of the decision - it was only a few weeks ago that I knew where I was going to be. Speaking of lateness, a burst water main in the Greenford area led to me arriving 20 minutes after the show had started.
So, perhaps unsurprisingly, I was feeling a little detached.
Things seemed to kick off quite slowly, like the band were going through the motions... but about half way through the first set they seemed to come alive, like someone turned on the lights. Or perhaps the lasers. Overall it was a good gig, a fine gig but maybe not a great gig, from my distant standpoint. The sound was reputedly much better than other shows so far on the UK leg of the tour, and the light show was superlative as always - I was left wondering how an arena could possibly be filled without such a thing. We idolise the bands, but where would any of them on these massive stages be without the lighting rigs?
From the gods, the view was of a nearly packed hall having a great time. Hands were waving, voices were singing along, and applause was forthcoming, particularly, it has to be said, for some of the old classics, but also for such songs as One Little Victory and for Neil's drum solo. Personal highlights were Natural Science, Between the Wheels and Subdivisions, which will always take me right back to the Laserium Signals show, goodness knows how many years ago.
Preaching to the converted maybe, but then, why not.
The ITIL Placebo Process Effect
2007-10-10
Goodman Martin just forwarded to me a most excellent blog post about whether ITIL implementation could be replaced by a placebo, based on (say) astrology. Could it be done, and would it be effective? I'd have to answer yes, but that doesn't diminish the value of ITIL itself.
Instead, what this comes down to is process. Management initiatives have been legion probably since Plato suggested everyone should get out of the cave (John Gray, eat your heart out). But what often yields success in management initiatives is the process that is worked through and delivered upon, rather than the specifics of the initiative itself.
In some ways this is similar to therapy. Some advocates of homeopathy say that a great deal of the benefit comes from having someone with whom one can share one's problems, and the fact that one is told at the end to stick a couple of small, sugary pills under one's tongue is just one element of the overall experience. It is a tricky one - because if it is ultimately seen to be true, it does suggest that snake oil salesmen were maybe not quite such confidence tricksters after all. And IT marketers... maybe that's a step too far!
And so to ITIL implementations, which will undoubtedly require some of the following:
- strategy definition
- discovery of "what's out there"
- reviews of existing processes
- interviews with key stakeholders
- training
- definition and use of metrics
These characteristics are not new, and they are not specific to ITIL. What they do offer however are opportunities to engage, to review and to update the organisation (IT or otherwise) on the latest incarnation of best practice. Right now, in IT management circles, this best practice revolves around ITIL - which, let's face it, is a pretty good starting point. But even if ITIL is being picked up by an organisation in a faddish way, the process still offers such opportunities.
Of course, the question then becomes, how useful are activities such as those listed above? In my consultancy days, I can remember being brought in to work on business process modelling exercises, but when I went to interview key people, they would tell me it was the third time that year they'd been interviewed, for different initiatives. At which point, any such efforts become counterproductive.
Overall then, should we have an IT astrology improvement programme? Well, potentially. But only if it goes through a process that will make a difference to the organisation, all by itself. Indeed, an initiative without a correct process is probably no initiative at all.
Total Immersion - My New Work Blog
2007-10-11
I've kicked off a new blog relating to my more geeky side, that is, my professional persona at Freeform Dynamics.
It's called Total Immersion - both a statement of intent and a cry for help :-)
A number of the posts so far are test posts, i.e. I don't know what the format will end up like, but for now the most important thing is that I get writing.
Fill yer boots!
Information Management - three sides or three mountains?
2007-10-11
Yesterday I had a rather interesting conversation with Dale about how perspectives on information management can vary according to the provenance of the people involved. At the highest, most visionary level Information Management can be defined (and indeed, is, by IBM) as getting the right information to the right people at the right time. That's simple enough to have people nodding sagely or shaking their heads in a "well, yes of course" kind of way.
While the "what" might be simple, the "how" can vary quite considerably. Essentially there are three camps:
- the Business Intelligence (BI) crowd - with a structured data background, these people discuss normalisation, cubes and master data.
- the Content Management (CM) crowd - historically working with documents, they talk in terms of taxonomies, workflow and rights management
- the collaboration crowd - coming up from file-based environments and delving into Intranets, for these it's all about desktop access, email and office integration
While each group of people may be highly computer literate, each will tend to talk to its own using a certain language, philosophy and (dare I say it) psychology. I know this to be so, based on experiences talking to groups of each type. I could waffle on about that for a while but hold that thought - what's perhaps more interesting is how this impacts specific vendors, again, based on their provenance.
The obvious examples are:
- IBM, with its DB2 heritage, still very much in the BI camp despite having acquired FileNet
- EMC, having purchased Documentum 4 years ago and falling rather naturally into the CM camp
- Microsoft, its Windows-as-the-platform heritage yielding a firm position in the collaboration camp, with its Sharepoint "flagship"
Savvy marketing execs in these companies will be able to pitch at the visionary level. For the rest however, it's not so much that they don't get the existence of the other areas, more that they don't see the point of dwelling on them. They'd be right to surmise that there's so much going on in their own areas, they're kept too busy to step across into the other areas. The end result however is that we end up with three communities, not one, the behaviours of each dictated by their own technological heritage. For a stark example of this, consider what part Lotus has to play in IBM's Information On Demand vision (here's a clue: not much).
Does it really matter? I believe it does, not least because of the inefficiencies of reinvention and the requirements for integration across the piece. Perhaps it matters most of all because our research tells it so - organisations really do want an integral view of their data assets, of all kinds, and they are frustrated by the inability of vendors to serve it up. I do understand how hard it is to build a scalable repository, but that doesn't mean we need to stick with models and architectures that are some 20 years old. Neither am I sure we can afford the luxury of preaching to the converted within our own comfort zones.
P.S. I'll leave the decision about where Oracle fits, as an exercise for the reader :)
del.icio.us Tags: freeformdynamics
Things I miss about UNIX
2007-10-11
A long, long, time ago, in a vertical far, far away, I once used to spend a lot of my time doing various things with UNIX - as an administrator or a developer. Somewhere along the route I appear to have become waylaid - I now spend most of my time (literally, I feel) in front of a Windows machine, like one of those office workers I used to have to support. How the tables have turned.
Don't get me wrong, I'm quite satisfied with what I have, though some things could of course be better. Much like, I suspect, the majority of office workers out there. However (and probably unlike the majority of office workers out there), there are some things I do miss about UNIX:
- text manipulation - the number of commands (sort, uniq, awk, etc) available to muck around with strings, extract them, compare them, munge them together and so on
- pipes - with the above, making it very simple and elegant to develop command lines that could do some very powerful things
- the command line (of course) - but not just for text stuff, also just to make it very easy to move around directories and move things around
- the knowledge (though this had to be learned) of where admin information was being stored, generally in text files which could be easily changed (though this was a two-edged sword!)
- finally, the general feeling of control that comes from having an operating system in which everything was configurable, even to the extent of tuning the kernel...
... so I guess the question is, if it was so great, why don't I still use it? Bluntly, the answer is that it wasn't - so great, that is - at least, not for what I have needed computers for, for the past 10 years. This is possibly more an indictment of me than of any specific technology, in that I don't have the time or the inclination to spend my time tinkering with software, when I should be getting on with other stuff - writing, generally. Also, and again to be blunt, office apps have (until recently) been poor or stupendously expensive for UNIX, as anyone who tried WordPerfect for UNIX will testify.
All the same, that doesn't stop me missing such things as the above - particularly when desktop tools from Windows don't cut the mustard. If I wanted to dedupe my email contacts in UNIX, for example, I could do so with 2 or 3 commands piped together in a single line... whereas in Windows, I have the option to fish around the Internet for a piece of freeware (slow), or write a bit of code myself (unlikely - and option 3, to actually buy something, is beyond me completely). Of course, all is not lost as I could just install Cygwin for many of the above benefits, but somehow, I just don't think it would feel the same.
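For what it's worth, the kind of one-liner I have in mind goes something like this - assuming, for the sake of the example, that the contacts have been exported to a flat text file (a made-up contacts.txt, one address per line):

```shell
# Hypothetical sample export: one email address per line, mixed case.
printf 'Jon@example.com\njon@example.com\nanne@example.org\n' > contacts.txt

# Lower-case, sort, collapse duplicates - classic UNIX plumbing.
tr 'A-Z' 'a-z' < contacts.txt | sort | uniq > contacts-deduped.txt

wc -l < contacts-deduped.txt   # count of unique addresses
```

Three small tools, one pipe, job done - exactly the flavour of thing that stock Windows has no answer for.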
Descending on Documation, Schmoozing at Storage Expo
2007-10-15
For anyone who's in London this week and wants to meet up, I shall be at Olympia Wednesday and Thursday for the jointly held conferences of Documation and Storage Expo. Indeed, I'll be presenting at both, specifically at the following sessions:
- Information Lifecycle Risk Management, 17 Oct at 1145 (Storage Expo)
- Using Content Management to Power Business Intelligence, 17 Oct at 1600 (Documation)
- Governance, Liability and Responsibility: Getting rid of information, 18 Oct at 1015 (Storage Expo)
In case I haven't banged on about it enough, I don't really see the above as anything other than facets of the broader picture of information management, indeed I've had some good chats with the guys at Reed Exhibitions about the way things are going. From our research there's a clear desire to link BI and CM for example, and good governance and risk management cannot exist without a firm understanding of your information assets. But I'm telling you the plot :)
Should be a lot of fun - see you there!
Silicon Agenda Setters - the results are out
2007-10-15
Sometimes this job is, well, just a job, and sometimes it's a damn good wheeze, such as when I was invited to come along to the Silicon.com agenda setters panel last month. My approach was quite deliberately to try to bring to the party people who wouldn't necessarily be top o' the list but were still groundbreaking in some way - hence for example Rob Pardo of World of Warcraft fame, as well as all the usual suspects: Jobs, Schmidt, Benioff and the like.
The final list is here, with Facebook's Mark Zuckerberg at the top. We don't see Facebook as a major business tool, but I have to agree with another agenda setter, JP Rangaswami, that it's rattling a few cages in corporate land. That doesn't mean traditional IT business leaders have been pushed out, with John Chambers, Diane Greene and Mark Hurd rubbing shoulders with the new media darlings and offshore tycoons.
Fascinating to me was how much there is of the IT industry about which I have very little understanding. We talked about the inner workings of venture capital for example, and the influence of Asian manufacturing. Great, mind expanding stuff.
I can't wait 'til next year - if they'll have me of course!
OK, I can't spell "the"
2007-10-15
Not sure what happened about 3 years ago but my fingers started jumbling up on the keyboard to the extent that "the" always comes out "teh" etc. Annoyingly, I've accidentally added "teh" to my Windows Live Writer spell check dictionary, so I'm not sure I'll catch them all. Anyone have a cure for this, do let me know...
Sleepwalking Towards the Last Post
2007-10-16
The one thing I didn't expect to be doing this morning was agreeing with Tory MP John Redwood on the plight of the postal workers in the currently still-ongoing postal dispute in the UK. For one, I am full of admiration for the merry fellows who, come rain or shine, ensure the message gets through - people like our very own post man, Andy, who always has a smile and a friendly word.
In John Redwood's own words however, "The country is not grinding to a halt." Here's perhaps the nub of the real crisis that faces one of Britain's last public industries - that technology has doomed it. Not only is email providing an unthought-of transport for many of our communications, but also, any attempts to bump up the relevance of the post by (for example) not delivering it for a few days are not only falling on deaf ears, but are counterproductive in the extreme. "Give me your bank details," someone said to me a couple of days ago, "I won't send you a cheque, I'll pay you online instead."
It's a tough one. The post is like the messaging equivalent of the book: if one is old-style data in motion, the other is data at rest. Just as we don't want to replace books with electronic equivalents for oh, so many reasons, neither do we really want to lose the sound of envelope on mat. Even such junk as catalogues has some merit, judging by the amount my kids pore over them as they spend their pocket money many times over. And the thought of a birthday mantelpiece being reduced to a bunch of printouts... don't even go there.
Meanwhile, there are, oh, so many items of post that really don't need to be sent. Junk - of course (apart from the catalogues maybe); utility bills; bank statements; cheques to cover bill payments; the list goes on. Why are they still posted? Because (a) that's the way it's always been and (b) not everybody yet has the email and Web alternative. To catalyse both requires significant effort, or a limitation on supply, which is exactly what the postal unions are providing.
Where will it all end? In 5, 10, 20 years' time I have no doubt there will still be a postal service. It's not just about the postie: Post Office Counters provide a wide range of services themselves, many of which are relied upon by a great many people - benefit payments, car registrations and the like. From an economic standpoint however, the counters are propped up by the numbers of stamps on envelopes and packages. As postage revenues fall, so will the numbers of post offices, and (hence) the quality of service. I don't want to see that - but neither am I likely to send letters for the sake of it, particularly in times when there's no guarantee of when they will arrive.
As someone who lives in the country, the absolute last thing I want to see is that postal workers go the same way as the post offices that are already closed - I do understand that for many, in remoter places, the arriving post can be one of the rare contacts. These things are important, and an intrinsic part of sustaining rural communities. All the same, I can't believe that retaining the status quo for the sake of it is the right approach. Email and the Web are here to stay, for better or worse - and we need to face up to how they impact on even our most loved of institutions.
There are undoubtedly a number of services that we need, that we don't want to put into the hands of a private company, and (most importantly) which we are prepared to pay for. Perhaps some of these are listed here; there will be others. If we want them, let's think about how to ensure they continue to happen; the alternative is to stand by and watch as they crumble.
My First Video Briefing
2007-10-16
I know those stalwarts at Cisco have been doing this for yonks, but today saw my first videoconference briefing, via WebEx. A few issues with setting up the camera at my end, and quite jerky but it did make it more of a participatory experience. Thanks Lisbet for organising and Chris for the call!
Eucon Dance if You Want To
2007-10-16
On Sunday I was fortunate to attend (and shift some books) at the Rush European convention Eucon. It was a great day, most people seemed to agree - luminaries of the Rush story Terry Brown (who keynoted), Howard Ungerleider and Andrew MacNaughtan were all present, and a good time was had by all, capped off by a stonking Rush gig in the evening!
Thanks to Ashley for inviting me, and to everybody else for making me feel so welcome.
Day 1 of Storage Expo
2007-10-17
Random thoughts...
- walking in was like the scene at the end of Trading Places, the noise, the hubbub. "Lots of people!" I remarked to Bob Plumridge. "Yes, but they're all vendors!" he replied. True, but different later.
- First session went pretty well, good level of questions and feedback
- Most of day spent taking repeated bites of the Expo apple. Too many people I know to list. Symantec, Riverbed, Pillar, EMC, LSI, Emulex, Plasmon, Quest, 3Par, Copan, IBM. Lots of conversations, all good. Friendships remade, relationships rebuilt, business redone. Sorted.
- Podcast with Reed on future of storage. It has one. Cool.
- Quick review of second pres with Xerox - then more of the same
- Second presentation coincident with football kick-off, England vs Russia. Disproportionate number of females in audience, who outnumber the males - realise football is the answer to rebalancing the sexes in IT. Pres goes OK despite diminished numbers.
- Leave conf too late, arrive at pub after everyone has left, story of my life.
And tomorrow is another day.
HELLO! I'M ON THE PLANE!
2007-10-18
It was with initial trepidation that I read the news that mobile phones might be allowed on plane flights around Europe. After all, along with the metro/tube/subway, churches and libraries, Antarctica and deep space, there are few places left that have managed to avoid the encroaching wave of over-loud teenagers, ringtones and sales reps.
But then, here I am on the train. There's a guy opposite me calling his missus to check on childcare arrangements, the occasional ping of SMS and, somewhere in the background, an ongoing conversation that could be on a phone but I can't tell. Realistically, perhaps it won't be so bad.
A major upside is nothing to do with voice at all of course, but with data. Plane travel is both a sanctuary and a frustration, particularly given that it's an ideal time to catch up on email etc. At least this provides a choice.
The downside is mainly to do with cost - it's unlikely the operators (or airlines - the system is based around on-board picocells) will feel under too much pressure to offer this as a low-cost facility. Ironically, this could also be to its advantage if it means that the louder-mouthed talkers are more likely to be in the upper class decks.
We'll have to see how it pans out.
Nostalgic? Try Mobile
2007-10-18
Anyone who thinks modem speeds are a thing of the past (hi Joe, not a criticism) can't spend much time working mobile, using GPRS, throttling back the arrival of the 4Meg file in Outlook so that he can get onto the Web and post a blog. 56Kbps doesn't seem all that distant, because that's pretty much exactly what I'm on right this minute. In central London, no less.
Day 2 at Storage Expo - Greening Up the Act
2007-10-18
Exhibitions are like travel - one spends a lot of time seemingly not doing much at all, and it's completely shattering. In the case of Storage Expo, "nothing at all" translates into presenting, listening to presentations, participating in meetings, walking between stands and just downright chatting - all in all very productive from an information gathering perspective, but tiring nonetheless.
So what's a-buzz? Admittedly I was looking through my own, undoubtedly biased "solutions not products" spectacles - there may have been interesting things being demonstrated in the hardware layer, but I confess they passed me by. It was the companies doing "joined up storage" that I found most interesting, such as Copan getting together with Data Domain to offer concentrated offline storage for deduped backups. I seemed to spend a lot of time talking to people about green issues as well, which prompted me to think even harder about the solutions/software side of the house.
Put simply, most storage hardware vendors are currently pretty shabby when it comes to being "green". For all the talk about "our watts are lower than their watts," which may or may not be true (but is probably a leap-frog thing between the vendors anyway), the whole notion of a constantly spinning mechanical device is never going to look good from a power consumption perspective - as companies like Plasmon (whose storage doesn't have to spin constantly) are keen to point out.
It's a bit of a cruel trick really, a bit like the printing industry might feel if it suddenly had to use paper from sustainable forests (hmm. not a bad idea. but anyway). Since that guy from M.A.S.H first invented the hard disk, the general principle has been, "let 'em spin" - which of course uses power. The more data your business generates, the more disks you need, and asking a disk storage sales guy to regulate the flow is going to be a bit like asking Cadbury's to restrict the supply of Flakes to ice cream vendors. Tricky.
But, for all manner of reasons, we have arrived at a point where "get more disk in" is no longer the default answer. All those disks are keeping on spinning, using up power whether the data on them is being accessed or not. Result: almost overnight, storage hardware companies look like the bad guys and are trying to out-do each other and hiding behind the worse-er offenders.
All of that is missing one fundamental point, however, which is to question whether or not the disks are required in the first place. For example: I was speaking a few months ago to an oil company that had 17 SAP instances, all on different hardware. We have organisations who have "keep everything" policies for data retention rather than spending the time to work out what they don't need (or shouldn't keep). There are database compression companies like SAND Technology and Sybase, whose products remain niche due to lack of market take-up.
All of this points the finger of blame at a point quite a bit higher than the hardware layer. Sure, there will undoubtedly be clever things that can be done within the SAN, such as dynamic storage provision into virtual pools, for example. However, to solve what is fundamentally a data management efficiency problem, requires serious thought right up the stack, in terms of clearer definition of what information the business actually needs, down through more intelligent application architecture, to better storage provisioning and resource management. Let's not stop wagging fingers at hardware manufacturers, but while we're pointing out the motes in their eyes, let's also recognise the environmentally unfriendly and resource hungry logs in our own.
P.S. No, the guy from M.A.S.H didn't really invent the hard disk, it was in fact named after a rifle. Who says there is no humour in storage.
A not particularly exhaustive Twitter client study
2007-10-19
I had a quick browse about for a Twitter client, and there are plenty, but many require the .NET framework, which I didn't fancy installing unless I had to. So, from a shortlist of:
- Twadget, which sits in the Vista sidebar (and what a cracking name)
- MadTwitter, simple but effective
- Pwytter, written in Python
And the answer is: MadTwitter. It stays minimised and pops up when there's a tweet, lets me post back (and succeeds, where Twadget sometimes fails), and is fast (unlike Pwytter). It lacks plenty of features, but maybe that's the point.
Whoa! How Green are We!
2007-10-19
By total coincidence, Tony and I both posted a green review of Storage Expo yesterday evening. Thinking about it, it was probably watching Tony checking a certain vendor's credentials that at least partially prompted my own post - so perhaps it's not that big a surprise... and not the first time yesterday I accidentally trod on Tony's broken toes, sorry mate ;)
Still, topical, topical!
What a way to start RSA - with a virus
2007-10-20
Well, well. The last thing I expected to see when I plugged in my SD card this morning was a virus. I think I must have picked it up earlier in the week, as I was transferring files between computers.
The first sign was an AVG window popping up to say a file was being quarantined. When the file re-appeared, I knew something was awry. For anyone who is interested, it was the "microsoftpowerpoint.exe" virus - conveniently explained (along with removal instructions) on the Trend Micro web site, among others.
(Unless I speak too soon,) I got rid of the blighter in the end. But it was a timely reminder that, while the debate should quite rightly shift to take into account the true breadth of the risk landscape, that ol' external threat is still alive and kicking.
High time to check those signature files are up to date, before heading off to the RSA conference in London next week...
This message is so wrong in so many ways
2007-10-24
Just seen as I resumed my computer from suspended mode:
"Not enough memory or disk space to update the display."
Sheesh.
Shoulder standing 101: being influenced by the influencees
2007-10-24
These are indeed "interesting times" to be an influencer of any form, not least as we see the democratisation of influence - interestingly, not a term that has yet been adopted particularly widely. It is a timeless truth that every human being has an opinion, which is expressed more or less willingly; what has changed is the mode of expression, the Internet providing a voice louder even than the loudest rock band in the universe.
Whether or not this is a welcome change is a moot point, particularly for organisations who have made their money controlling the flow of such expression, such as news organisations and, indeed, industry analysts. The fact is that the guy in the bar now has global reach, and the rest of the world has to deal with that fact.
To the point - what can we learn as industry analysts? To me it's simple - our role and privilege is to spend time learning about what is going on, and to draw insightful conclusions that can then be fed back for the common good. While many may have time to think about such things, it is a rare luxury to be able to do so as a career, without the distractions of what many would consider to be a real job. We're standing on the shoulders of giants - one set of insights and conclusions serves as input to the next level of analysis, and thus can we all move forward.
So, I don't feel in any way threatened by these developments; rather, I revel in the fact that the number of fire hydrants to drink from is increasing. Welcome indeed, for example, to the vendor analyst relations blogs such as those from Carter and Skip; or indeed AR professionals like Jonny and David, and all the rest (I said it was like a fire hydrant!). I would love to say we have a monopoly on how things are evolving but the truth of the matter is that nobody does, so all help as we evolve our services into the future is gratefully received.
In the future, then, we shall continue in our role as aggregators of opinion and behaviour, and offer our findings back to the community in the way we do now. It's an eminently scalable model, and for now we believe, adaptable to what the wonderful world of influence throws our way.
Sun vs NetApp - Good Hippies Don't Divorce, Do They?
2007-10-24
Funnily enough it was only today that I was recounting a tale to goodman David, about a formative experience I had a few years ago when two hippy friends of mine decided to divorce. It took me a while to reconcile this - after all, I thought, if hippies were so laid back and peace loving, surely they'd just get on?
And so it is when I see people like Dave Hitz and Jonathan Schwartz in a spat. Admittedly I've never met Jonathan, but from what I've heard about him he's a regular guy, who just happens to run a rather big IT company - and he sports a pony tail. I've met Dave on a couple of occasions through the years, and he's come across as a regular guy as well. As in a divorce situation, I know I have to be grown up and recognise that (a) books shouldn't be judged by their covers, and (b) there's probably an element of truth on both sides.
The story seems to have unfolded something like this:
- many years ago, NetApp decided to build a storage box based on available technologies, sticking some clever IP into the I/O layer while leaving the processing layer as a reliable, if backward, overseer - I seem to remember the expression "trailing edge" technology being bandied about, not in a negative sense but as opposed to "bleeding edge" - including such stalwarts as the NFS protocol.
- a couple of years ago, StorageTek was miffed about something NetApp had done, but the two sides never reached any particular resolution. STK was then bought by Sun Microsystems, which continued bickering with NetApp. However, as storage wasn't seen as particularly strategic at the time, it still didn't come to anything.
- quite recently and indeed laudably, Sun decided to treat storage more strategically, at the same time as reaching a level of corporate psychological resolution about the relationship between its own software and open source. All admirable stuff, with the result that Sun decided to do more with storage - including releasing the ZFS file system to the open source community.
- unfortunately, NetApp saw this and wept, in the belief (now to be proven in a court of law) that Sun was riding roughshod over some of the "clever layer" intellectual property - indeed, patents that Dave Hitz himself had filed all those years ago (and suddenly, it's personal). This may have been ill-thought-out but unintentional on the part of Sun, or it may have been a deliberate, anti-competitive move, a bit like "accidentally" leaving Coca-Cola's ingredients list on Howard Stern's desk. We'll find out - but right now, we know that it led to NetApp taking out a lawsuit against Sun.
- of course, the dot in dot-com was not going to take this lying down. After (no doubt) that quick call to determine whether some amicable resolution could be reached, Sun assessed its options and today decided to countersue, not just about ZFS but calling into question the very "trailing edge" foundation that NetApp had adopted at the company's inception.
Nasty. Commenters have quite rightly compared this to the SCO vs Novell case, and indeed raised questions about whether IP can be open sourced, or indeed closed back up if it does infringe on patents. To me, this also echoes the rather dodgy ground Microsoft finds itself standing upon whenever it reiterates its patents issues against Linux (I'm still not sure if Steely Neelie Kroes has put this one to bed).
I'm also rather fascinated at how the battle lines are being drawn up in the blogs. Mr Schwartz has always been an advocate of the openness of CEO blogs, but of course when the fecal matter hits the fan, one cannot stop being declarative even if one wants to. This is where it's like my hippy friends - bloggers are supposed to argue, sure, but when it comes to them acting like good ol' boy CEO types, what happens then? Things sure as heck don't get decided by the number of diggs assigned to the counter-arguments, nor are adverse comments treated as a form of cross-examination.
I confess I have a nasty feeling about this. Court battles are just that - battles - and fighting dirty is acceptable as long as it happens within the confines of the law. Already we have seen Dave Hitz branded a liar and a troll, and while Dave H has not used such terms, he has called into question Jonathan S's record of events. NetApp has (as illustrated in that same blog post) started from a position of, "let's get along, but use the courts, that's what they are there for," but it doesn't appear that Sun is going to keep the gloves on. Call me old fashioned, but Jonathan's statement, "we are requesting a permanent injunction to remove all of their filer products from the marketplace", doesn't suggest any desire to meet anyone half way.
Where will it all end up? Messily, certainly. We now have two court cases, one of which may find it correct to say Sun shouldn't have released NetApp's IP into the open community - it's difficult to know what the outcome would be from there, other than requesting people to kindly switch it off. Or it may find Sun in the clear, in which case NetApp will look a bit foolish, and perhaps in trouble as the patents in question are reputed to be a mainstay of the company's offering.
Meanwhile, Sun's own case, if it succeeds, could bring NetApp to its knees. If it fails, Sun will look a bit silly but can revert to the counter-sue argument, so nothing lost other than a bunch of legal fees. Hopefully this second case exists purely as a counterpoint, and the situation will be judged on the technical merits of the claims rather than obscure interpretations of patent law.
What do I feel about it all, besides just wishing naively that everybody could just get on? The only recommendation I would make to both sides, is that the behaviours exhibited during the process can be as damaging to a company as the topics under discussion, so - play nice, guys. Meanwhile, while I'm not sure I'd want to rush off and install ZFS across the entire organisation until I was sure I could keep it, I wouldn't be switching off my NetApp filers just yet.
Twitter's just a big chat, right?
2007-10-25
S'funny. There I was thinking that Twitter was in some way different from, well, anything else. To the extent that it had taken the web publishing model and reduced it to the finest level of textual granularity, expressed as a 140-character "tweet". And it's a platform: open APIs, the lot.
Meanwhile, we've been using Skype as our messaging tool du choix between Freeform team members. We even use it for voice sometimes, but text is the default.
So there I was last night, getting on with various things - with a MadTwitter window open on the left, and a Skype Chat window on the right. And, behold, I was using them both in exactly the same way.
Sure, there are differences. Twitter is the ultimate in broadcast chat - when I post, it's like shouting across a crowded room where everyone can hear (and fortunately, not everyone is shouting). Meanwhile, in Skype I have to pre-select people I want to chat with - but I can have multiple chats with individuals and different combinations of groups. I can access Twitter on the web, by phone or via my handheld; while I can't open a Skype window on the web, I can do the latter two. With Twitter, I can write to it from other programs. So I can with Skype. Etc, etc.
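The "write to it from other programs" point is down to Twitter's refreshingly simple REST API - at heart, just a form-encoded HTTP POST with basic authentication. As a minimal sketch of the idea (in Python, with made-up credentials, building the request without actually sending it; the endpoint shown is the one Twitter documented at the time, but treat the details as illustrative):

```python
import base64
from urllib import parse, request

def build_tweet_request(username, password, text):
    # Twitter's original REST API took a form-encoded POST with
    # HTTP Basic authentication; tweets were capped at 140 characters.
    url = "http://twitter.com/statuses/update.xml"
    data = parse.urlencode({"status": text[:140]}).encode("ascii")
    req = request.Request(url, data=data)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# Build (but don't send) a request - the credentials here are hypothetical.
req = build_tweet_request("example_user", "example_pass",
                          "Hello from a minimal client")
print(req.get_full_url())
```

A few lines like that is all a MadTwitter or Pwytter needs to post, which goes some way to explaining why there are so many clients about.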
Other messaging apps offer a bunch of facilities that are much more controllable than either Twitter or Skype, including IMvironments, talking avatars, enterprise logging features, unified comms and so on - which makes me wonder even more. Aside from the "following/followers" concept, what exactly has Twitter got that traditional messaging hasn't? It's an important question - because while that would be quite a simple feature to add to the majority of text messaging clients, it would be quite a challenge for Twitter to bulk itself up to offer these stock features.
I've probably missed the point entirely, but then, so did the kid who said the king had no clothes on.
Lighthouse stories
2007-10-26
I was prompted to dig into the story behind the picture on my wall, when David Brain questioned whether it was real or photoshopped. The answer - very real - and the bloke in the lighthouse was lucky to get back inside in time!
Brings a whole new meaning to leaving the door ajar ;)
Pulling a blog up by its bootstraps
2007-10-26
It's an interesting experience, starting a new blog - like any investment, one has to take the long view. As a stake in the ground, Totalimmersion currently has up to about 40 readers a day - it'll be interesting to review this in a year. Meanwhile, it's being syndicated to my chums at IT-Analysis.com, whose hit rate is much better. But watch this space - the only way from here is up!
The bigger picture of behavioral analysis - a conversation with Tier-3
2007-10-26
In a break with tradition, I'm going to write about a specific company in this one, or at least a specific series of conversations. I've been talking quite a lot to the guys at Tier-3, a company specialising in software that can look for anomalies in how IT is being used. While there are many potential applications of such a capability, the company has focused its efforts on IT security, sucking in events from computer logs and looking out for things that don't fit the norm. Think intrusion prevention, unauthorised access and the like.
It sounds so great in theory - and indeed, the company has recently announced wins for its HUNTSMAN product with some quite sizeable players such as Toshiba, so it must have something going for it. I still find myself feeling dubious however, not least (indeed, mostly) because whenever we do research into who's buying what in IT security, behavioral analysis software seems to come out near the bottom of the pile.
So, there appears to be a bit of a behavioral anomaly about the whole thing. If such products are recognised as so blooming useful, why is nobody buying them? My conclusion has been that, while security products such as antivirus, firewalls and VPNs are quite simple to explain and therefore cost-justify, it was always going to be harder to assemble a business case for tools such as behavioral analysis.
When I spoke to Tier-3 I put this position to them, and asked (on the back of such deals as Tosh) whether it was changing. Peter Woollacott, CEO, told me it was true, but shed a bit more light onto what made it so hard. "Anomaly detection investments are currently being driven by the value ascribed to IT/IP assets relative to cost," he said, "yet many organisations still fail to understand the value of their IP assets." In other words - if you don't know what you've got, it's difficult to work out its value, or indeed (as Peter explained), how vulnerable it is against the legions of potential threats.
It's an interesting one, not least because (according to my illustrious colleague Martin's report) the lack of asset knowledge is such an age-old problem in IT, leading to that other age-old chestnut - how can you secure your IT environment if you don't know what you've got?
Funnily enough, however, the answer to the asset management issue may well come from considering some of the desired outcomes of security - not least that mother of all reasons, the reduction of business risk. Peter used the term "return on security investment" - the ramifications of which can be seen quite clearly in more regulated environments, and are starting to be visible in other verticals. "Just as Basel II rewards better operational risk managers with lower costs of capital," commented Peter, "risk adjusted decision making is already featuring in corporate investment cases."
Understanding of IT risk requires (and therefore drives the need for) understanding of IT assets and their vulnerabilities. Ultimately this also drives the need for products such as those from Tier-3, but it's unlikely that the company can currently use this as a product pitch. Rather, organisations that are already educated on the need to manage risk for business reasons, and are acting upon it, will also want to get on top of their IT assets and what they are up to.
To take this one step further, perhaps there is no business case for behavioral analysis per se. That is, if such analysis is seen purely as a security measure, i.e. a way of working out what went wrong after the event so the hole can be plugged, it will always be difficult to justify. Alternatively, organisations that "get" such topics as risk management will be able to see behavioral analysis as a way of achieving some of the higher level goals that ensue, such as ongoing monitoring of risk levels in an already well-managed environment. In this context, anomaly spotting becomes a feature, and not an outcome.
Which is perhaps as things should be. Companies such as Tier-3 had better be in it for the long haul, however, as there is still plenty of educating to be done just to get some organisations off the starting blocks.
Why I like phone briefings
2007-10-26
I was asked today whether I could pop into town for a half hour briefing next week, and I said I'd prefer phone in the first instance. When asked why, I gave the following example of my London routine. I thought it would be useful to post it here for future reference:
- Leave home, 07.45
- Get train, 08.08
- Arrive Paddington, 09.25 (on a good run)
- Get tube, 09.35
- Walk to destination, arrive 10.15
- Briefing finishes, 11.15
- Walk to tube, 11.30
- Tube to Paddington, arrive 11.50
- Next train, 12.30
- Arrive at station, 13.50 (on a good run)
- Get home, 14.15
So that's 6 hours 30 minutes for a 1-hour briefing.
Don't get me wrong - I love meeting people face to face, and it would indeed be preferable in many cases. But with phone briefings I can quite literally fit five times as many in (allowing for coffee breaks), which is also time well spent!
The Wii Has Landed
2007-10-26
So, today was the day. In a rash moment of parental materialism I agreed with my son Ben that if he could raise half of the cost of a Wii, I would provide the second half. Never have I seen a boy work, save and plan so hard. Finally the day came: we nipped out to town at lunchtime to pick one up - having reserved it by phone, such is the Wii demand.
Ben's nipped out, otherwise I would no doubt be on the thing right now - but it has already taken pride of place next to the telly in the front room.
Bowling, anyone?
Working through the book pile
2007-10-28
Six weeks ago I started reading a number of books - possibly a bit ambitiously, I kicked all four off at once. This is no more than checking in, as I haven't yet finished them - but I am over half way. So far I have completed:
Richard Dawkins - The God Delusion. Clearly a rant in places, and not as well argued as expected, but a very enjoyable book and necessary reading for anyone who takes the topic of religion seriously, on either side of the divide.
I sensed Mr Dawkins was like a man who had finally had enough of the sniping and negative talk, and in the end felt he had no choice but to say things as he really saw them. As such, he made a few cutting remarks of his own - but having got these off his chest, presented the arguments against organised religion pretty well. There were a number of weaknesses - the Buddhists, Hindus and other non-Abrahamic religions got off pretty much scot free, and the presentation of religiousness as two-dimensional, i.e. a straight line between atheists and zealots depending on level of belief, was, I thought, a bit simplistic.
Perhaps weakest of all was the argument that there is no God, made primarily on the basis of probability. Of course God is highly improbable, but then so is the human race - the latter a fact Mr Dawkins felt proved itself by the presence of thee and me. The fly in the ointment is perhaps the assumption that we can only judge by what we can measure, when of course we are woefully ill-equipped to do such a thing, both from our position at the periphery of some far-flung galaxy and also our fundamental, stupid humanity.
None of which proves there is a God either, but as spake the humanist prophet Douglas Adams, with proof, there is no need for faith. I do wonder about this one - specifically (and I would love Mr Dawkins' feedback) that perhaps we have a genetic propensity towards such socio-psychological constructs. More specifically still, perhaps it is our drive towards higher planes of thinking that have in some way enabled us to evolve, to the point where we are now. Organised religion may have been the cause of much that is wrong, but what if it is a prime factor in our development as a race?
I don't know the answer to this, but it is certainly worthy of investigation. Food for thought: would the Buddhists in Burma have taken on the government there, if they had no faith in their own higher powers? Does religion come from community, or community from religion? And indeed, have we really advanced so far in the past 10,000 years that we no longer need such a crutch? As I was walking the dog earlier I was considering the existence of "psychogenes" - perhaps these are as selfish as those concerned with our more physiological aspects (and if these are already well-established and under the scope, clearly I need to read more).
M. Scott Peck - The Road Less Travelled and Beyond. I was really looking forward to this as I got a great deal out of the original The Road Less Travelled, and to be sure there were some moments of clear insight in this book. Indeed, there was a point about a third of the way in where I thought how great it might be if Mr Peck and Mr Dawkins were in conversation together: one, whose science had proven there was no God, and the other, whose experiences in psychology had proven that there was.
Unfortunately, Mr Peck's book was like a mirror on his own fragile humanity. To say he had "lost it" towards the end is a bit strong, but his arguments were blunted by his own desire to get closer to the higher truth, and to present it from a Christian standpoint (though he did bring in some teachings from other doctrines). It didn't help either that, by the time I was half way through, I was well aware of his own weaknesses; while I was happy to accept that he could talk about his experiences as a psychotherapist without considering his own shortfalls, I wasn't prepared to put up with him being too preachy.
In some ways Mr Peck came across like Icarus, having flown too close to the sun, or perhaps one of the architects of Babel, returning from the top but ill-equipped to articulate all he found there. Not long afterwards of course he was to die, all too young, of Parkinson's disease, proving beyond doubt his thesis of death being the great leveller.
In conclusion, what both books taught me was that we are all only human, and that there are some things that we can never prove for sure, one way or the other. This is perhaps less about God, and more about us... but still, it may just be the way things are supposed to be.
I'm now tackling Tricks of the Mind by Derren Brown, and it is with no small sense of irony that I find myself drawn more towards this illusionist and iconoclast. You see, here's the rub - I do happen to believe that we can be convinced of all sorts of things, good or bad; it's one of the things that makes us human (and it's a reason I could never subscribe to the Wisdom of Crowds). We also love stories (as described in my other book on the go, The Seven Basic Plots) - these are things that make us who we are. Equally, no doubt, we love to constantly revisit these arguments. To paraphrase Douglas Adams, with proof there could be no debate. And where would that leave us?
November 2007
This section contains posts from November 2007.
BT skates its way to transformation
2007-11-01
When I was a kid, once a year we used to go and watch an ice show. For adults perhaps, it might have been an excruciating panto rescued from the brink of despair by a few spangled costumes and tight-fitting lycra; but to my childish eyes, it was sheer magic. Every year, the centrepiece of the show would kick off with a few people, rotating slowly but steadily and largely keeping their positions, in the middle of the rink. Gradually more and more skaters would join them until eventually the whole troupe would be involved, apart from one solitary figure who was yet to join. Of course, by then the end points of the spiral would be moving so fast, the poor chap would have to sprint like a billy-oh to catch up.
And so, to BT. Back in July, the company announced it would be launching a new transformation and innovation process, which is now 100 days in. At yesterday's progress meeting for analysts, hosted by execs Al-Noor Ramji, Roel Louwhoff, Paul Excell and Dina Matta, topics included the usual crowd pleasers such as "customer service is our number one priority", through to genuinely interesting examples of how BT's customers are working with the company to drive innovation.
It's difficult to know how to judge this latest initiative from BT. Certainly in the UK, we have consumer-based experiences (not all of them good) which can colour our opinions; meanwhile, over the past five years the global company has been through a number of other change programmes - in terms of both internal restructuring and application rationalisation, and incorporating technology infrastructure transformations such as the 21st Century Network, currently in mid roll-out.
Perhaps the crucial axis upon which BT's future rests, is its stated goal of delivering software-based services. This could mean multiple things, some of which ("Is BT taking on SAP now?") might be seen as a step too far for the company - so it's important to stress that BT isn't going to be ditching its core, platform-oriented business. When asked, the panel explained how it would be building on top of its service provider heritage with said (software-based) services, in a way that can be integrated (or "mashed-in and mashed-out" in Al-Noor Ramji's terms) with both the enterprise environments of its corporate customers and the burgeoning new era of Internet-based software.
Rather than mucking around too much with the company's product and service portfolio, the plan is to do similar things as currently, but far better and more efficiently than in the past. "The 'what' will stay the same, but the 'how' will be different," said Roel Louwhoff. Improvements to the "how" will (so we were told) enable the company to be far more innovative, or at least, far quicker in how it brings its innovations to market.
What's going to prevent such a transformation? Perhaps the main challenge to BT remains the company itself, as defined by its staff. There can be no papering over the cracks here, as it will undoubtedly be a challenge to get all of the company's employees moving in the same direction - please do note that this is not a comment on the quality of the people, but more on the fragmented nature of BT's historical structures.
The proof of the pudding will only become visible in a year or so, as BT becomes able to offer demonstrable evidence that this latest change and innovation programme is making a difference. Like the big wheel of skaters, BT doesn't want to move dramatically from where it is now, but it does want to be able to turn faster, whilst keeping everyone involved on board. In people-centric terms this means balancing the momentum being driven from the centre, with appropriate bottom-up activities such as training, personal staff development and so on. Sharpen the skates, if you will, rather than sharpen the saw.
Overall it's a laudable initiative, and on paper at least it sounds practicable. It is still early days however. Of course success will need to be judged in terms of metrics such as time to market reductions or increased customer uptake of new, software-based services - and the consequent, directly attributable impact on the company's bottom line. However, perhaps the real litmus test will be the ability to go to any of BT's 110,000 employees and get a clear understanding of what the company stands for. Like the guy at the end of the ice-spiral, for this to work, BT can't afford to leave anyone behind.
Oh My God! They Killed Music DRM!
2007-11-02
Digital Rights Management in music was, to be fair, always doomed. As long as there exist free mechanisms to transport music, it will always be impossible to protect it from unauthorised copying. The Internet was to music distribution what the steam engine was to industry - it has revolutionised how things are done, but not without some serious fall-out in the traditional world of large corporations - who, let's face it, have been conducting rear-guard actions without really managing to keep the revolution at bay. Just recently we've seen EMI fall to an equity company whose goal seems more to sweat its assets than release the creative juices of its signings. I know, if we put all our effort into Coldplay and Robbie Williams, we can just get them to work 36-hour days. Oh dear, Coldplay have broken up. Robbie, where do you think you're going?
One of the technical casualties of the demise of the music industry, it is already becoming apparent, is DRM. Today we saw the launch of Qloud (pronounced "cloud", and to quote from this article (which looks like a royalty-free direct copy of the press release):
"The Qloud My Music application is a revolutionary music service that delivers online music to users how they want it -- legal, cost-free, DRM-free, on-demand and linked to their personal music libraries -- and where they want it -- inside social networks where they can share music with and discover it through their friends."
Qloud isn't a one-off. A couple of weeks ago, Apple cut the price of its DRM-free catalogue to 99 cents. Want more? Consider EMI's earlier announcement of the same, followed a fortnight ago by its announcement of a partnership with Imeem. As well as indicating just how confused EMI is right now (one is reminded of Microsoft's relationship with Open Source), it's a sign that things are unravelling rapidly for DRM, for a number of reasons.
Not least, technical. As part of a recent purchase, I was offered a license to download a Keane song. I had to jump through several hoops just to listen to the darn thing: create an account, download the file, go somewhere else, download a license, install... sure, it was a nice song. At least I think so; I've since reinstalled my OS and I can't be bothered to go through all that rigmarole again. Alternatively I could choose to lock myself into a platform such as iTunes. Both approaches are directly opposed to the viral nature of social networking, pioneered by Myspace and being picked up by Facebook (no doubt to be continued via Google's OpenSocial).
Music needs to be made to be shared, and this is what will put the final nails in the coffin of DRM. Be not alarmed, dear musicians - there's still plenty of money to be made even if the model may sometimes need a bit of tweaking.
P.S. who's going to bet me a bottle of shandy that Nokia's spiffing new music store doesn't go DRM-less by the end of the year? :-)
P.P.S. Steve Jobs wrote a good article on the weaknesses of DRM, here.
Well-meaning, harmless drudge...
2007-11-05
... was how the Oxford Dictionary of Computing defined a system administrator. Or at least it did in the second-hand copy of the first edition I used to own in my university days, 20 years ago (ouch). While I loved the self-effacing humour, only today did I discover it was also a hat-tip to Dr Johnson, whose dictionary, first published in 1755, defined a lexicographer as:
"A writer of dictionaries; a harmless drudge, that busies himself in tracing the original, and signification of words."
In these hyperbolic days of IT, perhaps it is right to wonder whether one day the role of the administrator might once more be distilled to that of a lexicographer - no less useful or rewarding a position for that.
IT Security Analyst Forum (a.k.a. Hey Mum I'm on the telly)
2007-11-06
I was fortunate enough to attend the IT Security Analyst Forum a few weeks ago, where I was one of many analysts meeting with a number of security vendors. A kindly gentleman was there recording the proceedings, and I just came across the videoed results - isn't the Web marvellous?
Anyway, if you'd like to know more about Freeform Dynamics, how we operate or my views on IT security, please do watch the below!
Part 1: About Freeform and general security views
Part 2: what trends are you noticing?
Part 3: has the analyst forum been a success?
P.S. Yes that is my bald pate in the first frame...
Playing on trains - testing the Eurostar Terminal
2007-11-07
I had a day off today. Well, kind of - it was one of those days where I actually got a lot of things done, largely because I'd told everyone I'd be taking it as a day off: my reason was that I had been invited to test out the new Eurostar terminal at St Pancras.
I'm still not absolutely sure why I agreed to do it in the first place. Was it driven by my interest in all things new, or my curiosity to see a work in progress on the scale of a station? Was it purely the allure of a free ticket, or something more fundamental, a deep down, inexpressible yearning to spend more time with... trains? Whatever it was, I was in good company, as I found out looking at the motley collection of slightly flummoxed "passengers" that had assembled themselves at St Pancras for the day.
The drill was simple. Turn up with pre-issued tickets (sent in the post), and get on a certain train - as if going to Paris. Get off at Ebbsfleet 15 minutes away, forget quickly about Paris and pretend to be going from there. Find oneself at St Pancras again, forget Paris and check through the arrivals lounge (showing passports - I wonder what would happen if someone lost theirs, having never actually left the country). Check back in and get on a train to Paris. Five minutes later, have train stop and reverse back to St Pancras, requiring one to once again forget about Paris.
Apart from the obvious result that, by the end of it, I was quite hankering after the dirty chic of the Gallic capital, it was all a quite enjoyable affair. For myself I took the role of a "business traveller", and true to form I also managed to simulate the characters of both "late arrival at terminal" and "apologetic queue jumper". There was free coffee and tea, a pack lunch and - I am sure this won't remain the case when the doors open - hordes of smiling security staff to help us through the X-ray checks.
One thing that did surprise me was just how much work still seemed to be required. While the main concourses were largely sorted, there were swathes of cloth across many of the side-alleys, from which the usual sounds of drills and angle grinders could be heard. For the techies there was Wifi access (though the login wasn't yet working), and a feature I particularly liked was a 50-yard-long counter with electric points at intervals, for laptops. Though of course, the sockets weren't yet switched on.
What else? I'd love to be able to comment on signage, announcement quality and passenger facilities, like a good reviewer. Unfortunately however, it looked exactly like a train station, or more specifically, like the soon-to-be-closed Eurostar terminal at Waterloo - apart, that is, from the blank wall of red bricks that faces new arrivals ("Welcome to Britain. Here's a blank wall, to help your first impressions."). Most importantly, apart from a glitch at the end (when we were delayed as we tried to leave the platform on the final leg) everything functioned quite smoothly.
To conclude, while I'm still not absolutely sure why I went, I will probably look back on the experience with something approaching pleasure, and with my inner train spotter feeling appropriately nurtured. Peep peep!
Writing Lessons from Ron James
2007-11-07
Last week I had the good fortune to have lunch with Andrew James, whose father, Ron James, has written a number of books about climbing. At the age of 73 he's still outdoors, these days having hung up his karabiners and turned his attention to mountain biking, but still writing books about his passions. In the discussions it struck me that Ron had cracked what may be the golden rules of writing non-fiction for "the rest of us," that is, people whose careers and lifestyles lie outside of the mainstream media.
So, what can we glean from Ron's experiences?
1. Find a domain that has a community. There are plenty of interesting subjects to write about - but like the tree that falls in the forest when nobody is there, it is unclear whether such writings will ever have a readership. This is pure pragmatism - not necessarily commercial as we shall see, but it will only be the most devoted of authors that will write an entire book to reach only a handful of readers. The Internet can be a great help in this regard - message boards and forums are not only a source of information but also can give you a good idea of the scale of the audience. In Ron's case he has stuck to outdoor sports - niche market perfection, with plenty of devoted followers.
2. Differentiate what you are writing about. It would be pointless to cover a topic in a way that has already been done - unless, in the past, it has been covered poorly. So if you're writing a how-to guide look beyond the "First lessons in..." to more specific topics, building on the literature that's already out there. But do be careful not to forget point 1 - you don't want to end up too niche! For example, Ron's current focus is mountain biking, to be sure - but for the over-60's! Don't be afraid to research the topic and find out what else has been written on it: indeed, it's good practice to write a proposal, if for no other reason than to ensure you answer the questions a proposal demands - such as, for example, what differentiates this one?
3. Make sure the benefits are broader than financial. A tough one, this. It's not that nobody gets rich and famous writing books, but more that it is highly improbable. A common fallacy - a bit like seeing someone on the telly and assuming they live in a big house somewhere, whereas the reality is that most actors are only as well off as the next job permits. So, if you're writing, do so in a way that covers your costs and maybe makes a bit of cash; meanwhile, however, ensure you take into consideration the wider benefits - through sponsorship for example, or purely the fact that writing enables you to spend time covering a subject you love. Which brings me to...
4. Write about something that you love. There are surely plenty of areas that fit the above three criteria, but you're only going to get old and resentful unless a certain part of what you do is for its own sake. Write not only because you love writing, but also because you love the subject that you're writing about, be it music, fly fishing or industrial archaeology.
This last lesson is important. Ultimately, whatever you do, you need to be doing it for your own satisfaction, as well as for the potential readership. This will not only help you enjoy the (sometimes mind-numbing) process, but also result in an output of which you can be justly proud.
More washing of dirty linen in blog public
2007-11-07
Thanks to David's connection with Euan Semple (I was idly following Twitter links), I stumbled across this post which led me here. Priceless comment fourth in the list, thanks for pointing it out David!
Why I've replaced Vista with Linux
2007-11-19
This decision was a long time coming but I think it is the right thing to do right now: I have reformatted the hard drive on my laptop and replaced Vista with the latest version of Ubuntu Linux, as the main operating system. I did this for a number of reasons: it's probably worth going through them one by one.
Building a picture of Open Source today. “Desktop Linux is ready for the mainstream” we are told – but is it? And how to know without trying it for real? I had Linux running in a virtual machine on Vista, and it looked fine, but I tended only to play with it and not really put it through its paces. To give it a proper once-over there really is no substitute for putting it in as the “main” operating system. I should say up-front that this shouldn't be construed as a comment on Vista, which I am actually getting to like (see below). The same caveat should be applied for other applications, proprietary or open source (for the record, however: I'm not in any hurry to move over to OpenOffice just yet!)
Testing virtualisation. There's a variety of combinations of virtual environments that can exist today – one of the strengths and weaknesses of virtualisation (I am quickly discovering) is that anything goes. Linux on Windows, Windows on Linux, either or both on a hypervisor from either side; add to that the potential for running individual apps (e.g. with Wine or Softricity) or remote desktops and it all becomes very complicated indeed. I decided to start with Linux as, to be fair, Vista is already big, and I decided I could do without the base overhead. I'm now running virtual instances of Ubuntu server, Ubuntu desktop and Windows XP – see below.
Getting my hands dirty. Here's the thing – I'm an old UNIX hacker at heart, and I kind of miss playing around with this stuff, which I haven't really done since the 0.94 SLS days. It's certainly been an interesting experience so far pushing a few of the boundaries of today's desktop Linux and seeing what gives... or doesn't. I'm also planning on doing a bit of programming again, most likely in Ruby on Rails, for which direct use of the LAMP stack seems more appropriate than developing in Windows and running emulators or indeed, virtual machines. Of course this will also help me build more of a picture of open source in general, or at least trigger a few conversations: see next.
Engaging with the community. There's just so much happening in the blogosphere, and some of the most animated discussions come from developers and open source advocates. For me, this decision partially comes down to succumbing to the temptation and joining in – heaven knows I won't be able to keep up but at least if I'm sharing some of the experiences I'll participate more than just watching from the sidelines.
Avoidance of bias. It's important in this job to be able to see all aspects, and I have felt uncomfortable in the past commenting on certain subjects without a full appreciation of how it feels to experience the other side of the coin. Meanwhile, Linux adoption is rife in Eastern Europe and Asia, making it even more important to understand what life is like for non-Windows users. It's worth doing this to get the balance right – not least because certain behaviours and expectations are very different. In Linux, for example, the attitude is very much “there will be a package out there” (the package manager lists twenty-three thousand packages, of which I have a paltry fifteen hundred installed) but the “out there” experience also extends to tweaks and fixes, so be prepared to muck in. The Windows “attitude” seems to be more, “I've paid for it, so it better work!”
Response to accusations of bias. I want to be able to talk about the good stuff that comes out of Seattle without being accused of bias, or being considered some kind of shill. At the risk (see, here we go) of facing the wrath of all those who feel Microsoft is the nemesis of the IT industry, I actually do really believe they come out with some pretty good stuff. I also think Sun, IBM and everybody else comes out with good stuff. There's plenty of good stuff out there, and I really don't see why Microsoft should be excluded from the good stuff debate just because they had some sharp business practices in the past, or present. After all, who didn't - and who doesn't.
Finally, I secretly wish I had a Mac. No I don't... well, yes I do but I'm not sure it would be the answer to my prayers, and I would be concerned about lock-in. Oh, the irony.
So, there we have it. It's already been quite a ride, as I've tested out a number of Linux distributions, tools and configurations before settling on my preferred setup. Which is: Ubuntu Linux 7.10 running KDE, and hosting a virtual instance of Windows XP via QEMU/KVM for my Outlook Exchange client. For virtualisation, I did try out Xen, both from within OpenSuse and as XenSource Express, but neither supported laptop suspend/resume (and XenSource setup on a single laptop was becoming a pig). I've needed to do various tweaks and resolve a number of issues; as I started doing this I wondered whether it was a comment on Linux – but it could equally be due to my lack of current experience. I have set up a dual-boot configuration with Vista, but this does not boot by default so (for the time being) it is there as a security blanket.
Does it work? So far, so good. I'm having to use the command line more than a little, but to be fair this is largely due to using the virtualisation capabilities, which are outside normal (i.e. non-geek) behaviour, I think. There are a few bugs and things I might suggest were done differently, if I were in a position to comment – which is exactly what I'm getting myself into here, so expect some further case notes on my own blog under the tag “geeking out” (these won't appear on IT-Analysis or IT-Director, if you're reading this post on one of those sites). I still need to get myself organised from a data standpoint – I'm configuring Samba as I don't just yet want to trust my data to sit inside a virtual machine, for example! – and I also need to set up my external monitor for ease of switching screens.
Whether or not I can work like this is one thing. I am missing certain things, not least LiveWriter and the Vista Sidebar – and as a general remark, things are not quite as slick as Windows, but perhaps I haven't got my configuration right yet. I'll give myself a month or so like this, so I can establish whether or not I actually want to work like this. For now, the jury is out, but I shall keep everyone posted.
What's in the (Linux) box
2007-11-21
Here's a bit more information about my chosen Linux configuration. I'm using a Samsung Q35 laptop with a Centrino Duo processor and 2 gig of RAM. It's partitioned with 30GB for Vista and the rest given over to Ubuntu Linux 7.10. I've installed a number of packages on top of the base install, specifically for:
Virtualisation
I wanted to run XP within Ubuntu to access my Outlook email client - though there may be other ways (for example through direct Exchange access). I did have a go with VMware Player for Linux VMs, and with Xen as a more general virtualisation platform (no suspend/resume, fwiw) before discovering that KVM worked with the virtualisation features built into the Intel CPU to give a pretty fast and very usable virtual experience.
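As an aside, a quick way to check up front whether a CPU offers those hardware virtualisation features - a standard Linux check, not specific to my machine - is to look for the relevant flags in /proc/cpuinfo:

```shell
# vmx = Intel VT, svm = AMD-V; without one of these flags, KVM has no
# hardware acceleration to use and you are back to plain emulation.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "hardware virtualisation: available"
else
  echo "hardware virtualisation: not available"
fi
```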
Having downloaded the KVM package, I've set up a number of virtual machines, including a 5 gig one to run XP with Outlook and Word - it's tight, but it works. If I wanted to run any other Windows apps I'd probably have to roll a new VM - at the moment the only software I lack is Groove and Mind Manager, so perhaps I should do this.
The two commands I need to remember to run each time are:
$ sudo modprobe kvm-intel    # load the kvm-intel kernel module
$ sudo kvm -boot c -m 768 -smb windows -cdrom /dev/cdrom /home/jonno/vm/xpdesk.img
Occasionally KVM will crash on boot - this seems to be a known bug, and I should stress it's highly repeatable (this is a good thing, i.e. it doesn't happen randomly), which means that once a configuration works, it's good to go. Specifically (and ironically) it crashes when trying to load the splash screen from the Ubuntu live CD, for example. The workaround is to run QEMU or KVM with the -no-kvm option, which doesn't talk directly to the processor. It's slower, but if this is done for the XP installation then it generates a working image that can then be booted normally. Ubuntu server installs OK, but Ubuntu desktop hits the issue every time it boots, so for the moment it isn't an option for a KVM virtual machine.
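For what it's worth, that decision logic can be sketched as a tiny wrapper - entirely hypothetical, since the image names and the idea of keeping a list of known-bad images are mine, and echo stands in for actually launching the VM:

```shell
# Fall back to software emulation (-no-kvm) for images known to trip the
# splash-screen bug; everything else gets hardware acceleration.
launch_vm() {   # usage: launch_vm <disk-image>
  case "$1" in
    *ubuntu-desktop*) accel="-no-kvm" ;;  # hits the bug on every boot
    *)                accel="" ;;
  esac
  # echo rather than exec, so this sketch is safe to run anywhere
  echo sudo kvm $accel -boot c -m 768 "$1"
}

launch_vm /home/jonno/vm/xpdesk.img
launch_vm /home/jonno/vm/ubuntu-desktop.img
```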
File access
For the time being, my files remain in the Vista partition; call me old-fashioned, but this is a pilot test, not a gung-ho let's-throw-away-the-key epiphany. So I need to be able to access them not only in Linux (easy enough, the partition is mounted by default in /media), but also from inside the XP virtual machine. For this I have installed Samba, which enables files from the host PC to be accessed as if they were on a network drive. Hence, by the way, the need for the "-smb" option in the kvm launch line above.
For my own future reference, the IP address of the network drive is the same as the gateway in the XP VM. I had to go through a bit of configuration rigmarole to set up Samba - I recommend just following the standard tutorial to set up a Samba user and smb.conf file, and going from there.
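By way of illustration, a minimal share definition in smb.conf looks something like the following - a sketch only, as the share name, path and user here are hypothetical and the stock Ubuntu smb.conf carries far more options:

```
[global]
   workgroup = WORKGROUP
   security = user

[vista]
   comment = Host data partition, exposed to the XP guest
   path = /media/sda2
   valid users = jonno
   read only = no
```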
Modem access
This was one of those "cool - it works!" moments. Turns out that the Huawei E220 device that T-Mobile provided me with is a pretty standard piece of kit (who knew?) and that the driver for it is already built into the kernel (how cool is that). The trouble, however, is that the device, being at once a USB thumbdrive and a modem, means the OS doesn't always recognise which it is being at the time. To resolve this, some kindly fellow has written a short C program, which needs to be preceded with a modprobe command - at least I think so, but I haven't fully worked out the order yet. This seems to work:
$ modprobe usbserial vendor=0x12d1 product=0x1003
$ sudo /sbin/huaweiAktBbo
For info on the source code, run a Google on huaweiAktBbo; it will also tell you which library to include for the compiler to work (and yes, I popped it into the sbin directory myself). Another couple of tips - use dmesg to see what the latest status is, use lsusb to see whether the requisite three USB ttys have been created, and do check the lights are on on the modem.
I'm using wvdial to connect to T-Mobile in the UK, with the following /etc/wvdial.conf file:
[Dialer Defaults]
Phone = *99***1#
Username = web
Password = web
Stupid Mode = 1
Dial Command = ATDT

[Dialer hsdpa]
Modem = /dev/ttyUSB0
Baud = 460800
Init2 = ATZ
Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
ISDN = 0
Modem Type = Analog Modem
Init5 = AT+CGDCONT=1,"IP","general.t-mobile.uk"
Random thoughts
As well as the above there's some other bits and bobs - like discovering I didn't need vncserver to get remote access to my Linux desktop (x11vnc works just fine) - incidentally the command line is:
$ x11vnc -usepw -auth /home/jonno/.Xauthority -display :0
Still to be sorted are:
- switching between laptop and external monitor. I have configurations to run either, but I haven't yet arrived at a point where I can switch from one to the other without any gyp. My Microsoft head says "surely this should just work," and while I know there will be some clever setup of the xorg.conf file that does just that, the tempting option (kludge) is just to have two xorg.conf files, and to switch between them as necessary.
- installing a decent Synaptics Touchpad driver. At the moment it's too sensitive, which means that suddenly I will find myself typing somewhere else in the document than where I started. Shouldn't be too hard to fix.
- installing some network monitoring software. The 3G modem may work, but it's not very forthcoming when it comes to telling me signal strength, connection speed etc. There may be something out there, or perhaps I will just write something (yeah, right).
- there remains a bug in suspend/resume - or it could be a configuration error, hard to tell. The symptom is that the screen sits there, black, and the disk light stays on, even though there's no activity (or at least, no sound). It's not the same as X failing to load, as Ctrl-Alt-F1 doesn't show the terminal window either (and Ctrl-Alt-F7 doesn't bring back the graphics).
- there's also the occasional difficulty copying a file. Again I don't want to be quick to leap onto this as something wrong with the system, as often a previously unknown fault (e.g. file corruption) can be revealed by trying to access a file with a different platform. I'll report back on this.
- playing multimedia. Given that I've handed over a slab of my disk space to install two base operating systems and run a bunch of VM's, my MP3 collection is feeling the squeeze so I haven't got round to installing anything here just yet.
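On the first item in that list, the two-xorg.conf kludge is at least trivial to script. This is a hypothetical sketch - the file names are made up, and a scratch directory stands in for /etc/X11 so it is harmless to run:

```shell
# Keep two pre-generated configs and copy the wanted one into place.
XDIR=$(mktemp -d)                                  # stand-in for /etc/X11
echo 'laptop panel config'     > "$XDIR/xorg.conf.laptop"
echo 'external monitor config' > "$XDIR/xorg.conf.external"

switch_display() {   # usage: switch_display laptop|external
  cp "$XDIR/xorg.conf.$1" "$XDIR/xorg.conf"
}

switch_display external
cat "$XDIR/xorg.conf"
```

In real life the copy would of course target /etc/X11/xorg.conf and be followed by restarting X - hence "kludge".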
Overall though, I have a working system. The Ubuntu forums have been superlatively useful, and there's lots of other sources of help out there, which is nice to know.
I'm afraid to say however that I share the feeling that Ubuntu 7.10 (aka Gutsy Gibbon) may not have been quite as ready for prime time as it should have been. I had installed Feisty Fawn (the previous Ubuntu version, 7.04) before, and it did seem to "just work", which is of course a major factor by which to judge desktop Linux. First I tried downloading 7.10 as an upgrade, but for reasons I can't now remember I decided to go the whole hog and replace it with a clean install. Everything seems to be there but there are a few quirks - KDE menu items showing up in Gnome for example, or the fact that every now and then my window manager seems to change - it went from Gnome to Xfce once, and another time from KDE to Gnome. Very strange (though in hindsight it might be associated with installing updates).
Having said that, it is quite usable - I just think if I were installing Ubuntu for somebody else I would default to Feisty. And I'm very, very sorry to say it, but the arguments "new versions should always be treated carefully" or "with Microsoft it would be worse" just don't cut the mustard if mainstream desktop users are being targeted. Once it makes it onto the magazine cover, it's got to work.
More soon as I work through the other stuff above.
December 2007
This section contains posts from December 2007.
Has it been a week with Ubuntu already?
2007-12-01
It's been an interesting experience so far - notably my reading and writing of blogs has suffered as I've been tinkering and tweaking, but I think I now have a stable environment, namely:
- Ubuntu 7.10 running Gnome
- VirtualBox for Outlook, Office and Mind Manager access
- Firefox and Thunderbird for Web and personal mail
- KDevelop for Ruby development
- gTwitter, Skype and Xeyes in the toolbar
- OpenOffice for simple word processing and looking at presentations
- Drivel for typing this
And it all works OK - well, it should, shouldn't it? I've tested pretty much all the options and features that could be alternatives to the above, but for the most part they're either not suited, or not working. Specifically, there appears to be a bug in the current release of Evolution, which is preventing me from accessing Exchange directly. I haven't spoken to the Evolution guys but I've read pretty widely on this and no dice. It's not blocking, but it would be nice if it worked. I've also tried the gadgets tool (name eludes me) - it doesn't work under Gnome, which for some reason I keep coming back to from KDE; don't ask me why, but it's just simpler and cleaner. Ah, that's why ;)
I have had inordinate problems with screen resolutions on my external display; I was also having issues with the screen freezing up for periods, but it now transpires that the latter was caused, or at least exacerbated, by the former. Newbie tip: don't try (like I did) to hack your xorg.conf file before running the command to detect and auto-generate such a file from scratch. This worked much better - it's all documented in the Ubuntu display howto here: https://help.ubuntu.com/community/FixVideoResolutionHowto There are issues with the display freezing in Gutsy, but I would recommend sorting this first and seeing if it resolves them.
Update: I was also having an issue with suspend/resume not working, which seems to have "gone away" now I'm running with the new xorg.conf. Spooky :)
I've also got to get my microphone working. I was surprised to receive a Skype call a few days ago - surprised because we don't tend to use it that much for work any more (default action: reach for phone). I grabbed my headset and plugged it in, to find that I needed to configure the ALSA device driver; it wasn't going to just play, so I left it. Still need to get round to that.
I also want to look at Kandy as an alternative for driving my USB 3G dongle. Apart from that, I think I'm done. It was interesting - a few days ago I went back to my Vista install for some reason, grumbling as I did about the Ubuntu display issue. When I logged in, however, I had a similar issue with recognising the display resolution etc, which made me have a bit of a rethink (conclusion: displays are tough in any OS). I've tried a couple of other things - for example installing a software configuration management tool for my development efforts, before remembering that it could be quite a tricky thing to deploy, and removing both it and Apache. Lesson learned - there's such a thing as too much choice!
As a final point, I had a sudden ah-ha moment as I used XP within VirtualBox. I had been worrying about what happened to my data if the virtual machine should get corrupted in some way - but then it suddenly occurred to me that everything within the computer was virtual and at risk, being converted into a string of 0's and 1's and processed through this sexy-looking, but ultimately deceptive Von Neumann machine. The answer: to back up the data, of course. So I have now installed SmartSync within my virtual environment, and it is doing exactly that. Whoa!
World Community Grid - virtual edition?
2007-12-11
I'm in the process of downloading WCG (again), this time for my new environment. It occurred to me that I might be able to get more out of it by running several WCG instances as virtual machines - turns out I can, according to James Bliss. Worth a go - but I do wonder whether I'm helping destroy the planet with all that additional power required to drive the extra CPU cycles. Ho hum - worth the risk.
There's something about having enough disk... for a while
2007-12-11
I had a bit of a screwdriver couple of days this weekend, building (or, in modern flat-pack parlance, assembling) a bed, and also replacing the hard drive in my Archos 340 (AV300 series) audio/video jukebox. This latter task had been a while coming, as my music collection alone now takes up 48GB - the straw that broke the camel's back was inheriting a collection of classical CD's from a good friend. These are now digitised and the originals stowed, leaving me the listening pleasure but also causing difficulty in knowing what to store, where.
So, I finally succumbed and purchased a 160GB hard drive. There's quite a lot of information on the Web about upgrading an Archos AV300 series - thanks guys - the one thing I didn't know was whether it could take a 160GB, though I had read reports of success with the 120GB drives. Answer: no it can't - I now have a 125GB partition for stuff the Archos can play, and a 35GB partition for various videos it cannot. Live and, through a number of attempts at reformatting, learn (second answer: accept the first partition size the Archos proposes, around 128GB I think).
Having then spent a slow and boring time transferring files from the RAID box to the Archos, I now have a bunch of films recorded from the TV, the aforementioned 48GB of music and our entire digital collection of family photos. I don't know if I am now in that gadget honeymoon period (you know, when anything new seems really, really useful) but it is quite remarkable what a difference it can make to have everything in one place. There are some films, for example, that I have been meaning to watch ever since they were recorded - but now I might actually do so, given that they are conveniently placed on the jukebox, rather than stuck away somewhere on the server. Right now I'm listening to a bit of Dvorak on a long-haul flight; you can guarantee I couldn't have done that without the new drive.
It takes me back to my IT manager days, when we seemed to be forever struggling against a tide of data. The answer would invariably be the same - to adopt coping strategies for as long as possible before planning in some downtime and going through a consolidation exercise. Things would be great for a while, before eventually our best-laid plans would give way to the pressures referred to by my previous boss Rob Hailstone as "the wardrobe principle."
Perhaps the worst example of this was caused, quite ironically, by having too much storage. Sun Microsystems, in their infinite generosity, supplied a batch of 40 SS10 workstations with an equal number of - if my memory serves me correctly - external 400 MB drives. At first we were daunted and gleeful in equal measure - this was free stuff, after all - but over time the discs became incorporated into the IT environment. Oracle was a hungry beast, not just because of the database sizes but the number of test instances we needed to run.
For a period there was no problem that couldn't be solved by throwing extra disk space at it. After a while, however, the disks that had held so much promise became a burden of their own, and we had to consolidate things down again.
Still, and no doubt like things will turn out for my newly rejuvenated Archos, it was nice while it lasted.
P.S. Incidentally, a note for Archos lovers - the trick with bending back (carefully) the battery contacts, as remarked upon in a number of places on the Web, really does work to restore battery life. Thanks again!
Kindle - powered by Linux
2007-12-12
Well, I had to check and sure enough, Amazon's new Kindle device is powered by Linux. Obvious really - and what's equally obvious is that a developer community will spring forth. Given that Amazon has released the tarball, it seems unlikely they're locking things down too hard. While it may not be the prettiest kid on the block, it'll be interesting to see what it spawns.
I did think about registering kindlehacks.com, but then I thought better :)
Can software developers be protected from themselves?
2007-12-12
It's now six weeks since RSA Europe, when I made a diary note to take a deeper look at the SAFECode forum. SAFECode stands for the Software Assurance Forum for Excellence in Code - we can be profoundly grateful that the founders didn't try to expand out the entire acronym. It also stands for "increasing trust in information technology (IT) products and services through the advancement of proven software assurance methods" - a kind of Green Cross man of the IT world, helping software developers across the highly risky freeways of the technological world.
The SAFECode idea is to co-ordinate software best practices across software vendor companies, and to build in appropriate checks and balances to ensure the resulting applications are secure (or at least, to minimise the risks). Is it necessary? Where there's smoke there's fire, and to be sure, Microsoft is no longer the only target of cyber-attacks. As hackers mature into commercial operators, no longer motivated (just) by "giving it to the man," an ever-widening pool of programs is coming under threat.
In principle, then, SAFECode is a good, worthy and valuable idea. It is by no means guaranteed to succeed, however, for a number of reasons. Don't get me wrong - of course it will be a good thing to co-ordinate and share best practice. But from the point of view of its longer-term success there are several howevers, based around:
- Credibility. To succeed, the SAFECode forum needs to be seen as successful. This is a conundrum, but it isn't new - consider the ITIL library of systems management best practice, which has taken a good 10 years to establish itself. It may be that SAFECode by itself proves inadequate because it focuses only on security, and quickly runs into the weeds as it tries to integrate with the wider picture of software development, which is itself peppered with competing best practice, from waterfall to RUP to agile.
- Critical mass. While there are big hitters in the list (from the site: EMC Corporation, Juniper Networks, Inc., Microsoft Corporation, SAP AG, and Symantec Corp.), the number of members is not yet adequate to cause a mass adoption or understanding of the best practices it wants to espouse.
- Clarity. SAFECode can perhaps learn from the mistakes of other forums - notably in this case ITIL - by opening its documents to the widest possible audience. A quick glance at the publications page indicates that the organisation does not yet have anything to tell people, at least not in terms of best practice. The wrong thing to do from here on in would be to make any publications members-only, or indeed available only for sale. Commerciality will get in the way of SAFECode's mission, if it hasn't scuppered it already.
- Collaboration. The technology world has come a long way since the smoke-filled rooms in which many best practice standards have been conceived. We have ridden the open source wave and now we are in the midst of a new era of collaboration, as illustrated by social networking. The fastest route to success (and I'm not always a fan) for SAFECode would be to build a Wiki, and open it up as widely as possible with appropriate editorial responsibility. While noise to signal would have to be managed, this would aid both visibility of the process and road-testing of the results.
- Certification process. Without some kind of certification, SAFECode members do not have to prove anything for themselves, nor would there be any kind of recourse should SAFECode practices not be kept. Certification needs to have teeth - while anyone can join the forum, only products that fulfil appropriate criteria should be marked as "SAFECode certified", and only organisations that continue to apply the best practices should be able to maintain their member status.
In summary, then, all initiatives such as SAFECode should be applauded. However, the forum should be judged not on its existence alone, but on its ability to change how applications are written - and ultimately, on whether the risks posed by member applications are reduced. This may seem like a tall order but if SAFECode can't provide some kind of guarantee, then it will be of little use. Not only this, but its currency will very quickly devalue, to the detriment of its founders and the credibility of their products.
Rethinking social networking in 2008
2007-12-12
Spooky - I was just collating some thoughts about social networking when Anne Zelenka posted half of my thought process. The power of the meme or proof of a higher power? Most likely just coincidence but anyway, it prompted me to throw down my thoughts before they get blogged into the past. So, here are my uncorroborated opinions and unfounded predictions for 2008:
- There will be consolidation of the social networking market. I just received a Xing email, and several Spock and Plaxo Pulse invites arrived today. The fight in the corporate space is with LinkedIn, and there can be only one. The same goes for personal social networking.
- Twitter will "vanish." I don't believe Twitter will exist in the same form a year from now. Most likely scenario: the company will be bought and integrated into a larger offering; alternatively it will become a messaging backbone for other services. Despite the highly vocal tweets of a few twitterati, most of the world don't work that way, or that fast.
- Facebook will lose to the next generation. Let's face it, Facebook was fun for a while, but is there really anything keeping us there? Facebook is all face and no heart, or soul - an integration platform to be written to. When something more interesting turns up, Facebook's fickle "customers" will walk.
- The real winners will be the leaves and the trunks. A few social networking sites will become the "trunks" - consolidation hubs that enable integration between sites. A few others will specialise in "leaves" - offering customer-specific tools that suit the needs of their subscribers. I expect Microsoft to be a leaf player, not a trunk player, for example. Google will be a trunk player.
That'll do - now taking beer-oriented bets for whether or not the above will prove true a year from now.
2008
This section contains posts from 2008.
January 2008
This section contains posts from January 2008.
Why 2008 for enterprise identity management?
2008-01-04
Like many people I suspect, I have struggled to get my head round identity management. This is less to do, I suspect, with the nature of the thing itself (great intro here, and I'd recommend Neil M's reports on the subject), and more with the fact that there's so much going on, in so many domains. The concept of identity itself is a nebulous beast, stretching from personal identity (yup, me, got that one) to corporate identity (aka managing and provisioning roles and access rights) and even more broadly, to that bar conversation – “every person, thing, asset etc can have an identity” – which can very quickly unravel into a flight of fancy.
Identity is a hot topic these days of course, what with incidents like the loss of all those records from the HMRC punting identity fraud into the public eye. Examples are legion, of identities being stolen, misused or otherwise abused – it's perhaps surprising that incidents such as Goo-do-no-evil-gle and the Scoble Facebook hack have taken so long to materialise. While none of these examples are particularly relevant to the concepts being espoused by corporate identity management, one nonetheless stimulates interest in the other. There are overlaps of course - the hapless employee who lost the HMRC disks could have been deemed too dim to warrant access to the disks in the first place, but this thought process is in a different compartment to thinking about the risks caused by offering up our kids and (indeed) our bank details to all and sundry.
The issue for corporate technology sellers and buyers alike, is that while the subject of identity may no longer leave people glazing over at the slightest mention, conversations can munge all of the above issues into a convoluted glob, incorporating on one hand worries about the protection of personal information, and on the other practicalities around ensuring corporate information and systems are only accessed by those who have been granted access. Given that this industry thrives on three letter acronyms, perhaps we need a couple of new ones - “Personal Information Protection” for the former and “Enterprise Identity Management” for the latter. Thus, EIM could have been used to support PIP, in the HMRC case.
Taking just the corporate, “EIM” side of things, this looks to be an interesting year. The last couple of years have seen a number of acquisitions and product announcements in this space from the larger management vendors, notably CA, Oracle, IBM, BMC, Sun and Novell: the most recent step has been to bring in roles-based management and directory integration. There have been a number of challenges along the way, some of which remain – for example the architectural decision of whether a database or a directory is sufficiently scalable to serve up identity-related information at the required level of granularity; meanwhile a variety of standards are being put in place. Catalysed by the more general, populist buzz, all of these things put together should yield more general acceptance, and resulting deployment of identity management solutions.
I should admit to a level of personal interest here, in what amounts to “the greater good”. While I view the HMRC incident with disappointment, I don't subscribe to the headline-grabbing faux-abhorrence that some press have expressed, and I certainly don't believe any one person should carry the can. If the problem is indeed systemic (and I believe it is), and if we can also agree that such a thing could have happened in most organisations, then we require a systemic approach to solving it. Taken in the round, identity management can offer such an approach, underpinned by the appropriate use of technology – this is most definitely a place where technology alone cannot provide the answer, but neither can the problem be solved without it. Indeed, if the HMRC incident serves to raise awareness and adoption to the extent that other organisations do not suffer the same fate, then it will not have been without value.
Happy New Year!
2008-01-14
Never reconfigure your computer before Christmas, or you won't post a blog until the third week in January. So goes the adage - and well, look, it's true!
More to follow :)
Goodbye dual boot, hello virtualisation
2008-01-14
I confess, I nearly did away with Linux last week. Something was consistently going wrong and for the life of me I couldn't work out what - the result was that, at far-too-regular intervals, my machine was hanging/locking/freezing. At first I took it as it came (good moment to sit back and stave off the back pain) but after a few weeks it was becoming untenable. Until - finally - I stumbled across the message threads (for example) that suggested setting "pci=noacpi" in the boot script. Blow me if it doesn't work, though I'm sure I'm missing out on all kinds of clever stuff!
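For the record, and for anyone chasing the same symptoms: on Gutsy this means editing GRUB's kernel line. A minimal sketch, assuming GRUB legacy and a stock Gutsy kernel - the kernel version and existing options will vary from machine to machine:

```shell
# /boot/grub/menu.lst (GRUB legacy, as shipped with Ubuntu Gutsy)
# Append pci=noacpi to the kernel line of the default boot entry, then reboot:
#
#   kernel /boot/vmlinuz-2.6.22-14-generic root=UUID=... ro quiet splash pci=noacpi
#
# To try the option once without editing anything, press 'e' at the GRUB
# menu, add pci=noacpi to the end of the kernel line, and boot with 'b'.
```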
As a result, I've decided to stick with Ubuntu Gutsy as my base operating system, for the time being. There's a whole stack of reasons - advantages and disadvantages - and I wouldn't advise Linux (even Ubuntu) for just everybody. I'll get round to documenting these over the coming week or so.
This shouldn't necessarily be construed as some massive switch from Windows to Linux. There are still things I either need, or like to do in Windows, so I am sticking with a hybrid configuration; however, as already discussed, with its smaller footprint (about 650Mb of memory in active use, rather than the 1.2Gb I found was required for Vista in the same scenario) Linux is the preferred base OS for virtualisation. Perhaps the biggest leap of faith is the fact I have just deleted the dual boot: I'm finding that running XP in a VirtualBox virtual machine is just as usable, and far more accessible than having to boot into a separate configuration to access Windows-based applications. It also means I'm working with just one set of files, rather than synchronising between my virtual file store and my real (though of course equally virtual) file store.
This last point was quite an epiphany for me. At the start I was concerned about what might happen, should the virtual hard disk get corrupted... until I remembered I was equally concerned by (and experienced in) real disks getting corrupted. The answer, of course, was doing a backup. Then, of course, one remembers that everything is virtual, imaginary, made up combinations of electronic signals to give us the impression of data. Phew, but in a good way.
The configuration I have now is much simpler than trying to manage dual boot - there are fewer file systems to mount, fewer apps to install and keep up to date, etc. And of course, I can access all my applications at once. On the data side, I still need to delve deeper into questions of file sharing between base and virtual machines - in principle it is quite simple (for example, using virtual network shares in VirtualBox) but I still don't understand how things like indexing are handled, or for example what is the performance hit on very large files, if they are accessed over a pseudo-network.
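To make the host/guest file sharing concrete, this is roughly what the VirtualBox shared-folder route looks like from the Linux host. A sketch only - the VM name ("WinXP") and paths are illustrative, and the guest needs the Guest Additions installed before it can map the share:

```shell
# On the Ubuntu host: expose a directory to the virtual machine as a
# VirtualBox shared folder (the setting persists across VM restarts).
VBoxManage sharedfolder add "WinXP" --name docs --hostpath /home/jon/Documents

# Inside the Windows XP guest, map the share to a drive letter:
#   net use X: \\vboxsvr\docs
```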
For now at least, everything is working fine, and so I can get on with writing about technology, rather than playing with it :)
Never trust a man in a shell suit
2008-01-14
I was being a bit slow this morning - in more ways than one, as Liz and I headed off on our morning run. So, of course, I was wearing tracksuit bottoms, and at the weekend I had bought a black windproof top with a hood, which I was sporting as we headed out of the village. We were jogging past one of the outposts of the Royal Agricultural College just as a carload of ruddy-faced students drove out. Winding down the window, one of them cried - in a friendly enough way I should add - "I assume you're running!" I responded in a suitably nondescript manner and we went our separate ways.
It was only ten minutes later when I realised the alternative - horror of horrors - was that I had actually chosen to be dressed like that. With some relief the rain hit and I managed to muddy myself up enough to justify my purchase. Oh well.
Lessons from the photo pro
2008-01-15
My Christmas present this year was a Nikon D40X camera - I'd sold all my film SLR equipment a few years previously, and I was just waiting for prices to drop, and pixels to rise to the point where it made sense. I'm not a photographer, but I do enjoy taking photos - how fortuitous that my good friend and neighbour Paul Atkinson used to run a branch of Jessops, is a seasoned pro, and also has a Nikon.
Paul took me out the other day to capture a few sunsets: the difference between what I would have taken (think: washed out and hazy) and what he showed me how to do, is quite astounding. I have the latter as my screen background now - it might not win any awards, but it's got my dog in and it works for me.
Bringing wireless networks into the management fold
2008-01-16
As part of the briefing cycle for Aruba's announced acquisition of Airwave Wireless, I had a very interesting conversation with Roger Hockaday, EMEA marketing director for Aruba. In part it was about the announcement, but it quickly turned (as these things do) to a discussion of the wider picture of wireless, and indeed wired network management. "Discussing the wider picture" can sometimes mean, for analysts, expressing poorly veiled disdain at the fact that a vendor has not taken things far enough - a bit like when the triumphant person comes into the room to demonstrate, after 6 months of hard graft, that he can now juggle with 3 balls, only to be shot down by some smart alec who says, "yes, but to do it properly, you'll need to juggle with 4." Not that I ever would of course, and certainly not with this - because the challenge is not one that can be resolved in one go.
In this particular case, it is more like juggling with a ping pong ball, a meat hook and a chainsaw. Wireless networking protocols remain all over the place as the bubble-headed wonks of 802.11 land continue to squeeze yet more bandwidth, and indeed distance out of some highly unreliable physics; on top of the base protocols are built a number of security and management capabilities, which are supposed to be compatible, but sometimes don't quite manage to integrate. While all the attention has been (laudably) on driving up bandwidth and resolving compatibility issues, the black hole remains centralised management, particularly for legacy products that were not built for remote configuration and monitoring.
Aruba's acquisition signifies both the need to centrally manage the variety of wireless switches that are out there, and the resolve to do something about it. Having not researched this specifically I don't know if this is down to latent demand or direct customer pressure, but it makes sense that organisations which have rolled out wireless access points on a more ad-hoc basis in the past, are now seeking to integrate their operation with the rest of their network management activities. And incidentally, it may be that Cisco have been able to offer a more integrated approach for a while - but it is a rare organisation that is a wall to wall Cisco shop, so the same issue arises.
While WLAN remote management might be able to bring wireless management into the same room as wired networks, this is still a step away from bringing both onto the same console (and I don't mean through screen scraping, or "sure, we can do SNMP traps" type conversations). It is true that wireless network configuration and monitoring has different drivers to the wired equivalent: one will largely be to support roaming end-point devices running a limited set of functions, while the other needs to consider the entire network architecture; there will be different (e.g. security) policies applied to each, etcetera. However, more integrated management tools lead to more efficient (and therefore, less costly) management.
We should therefore see this acquisition, and the impetus behind it, as a flag along the road - an indication of where we are in this work in progress. From the end-user perspective, where (as we have seen in recent studies) people care little about how they are getting their bits, just that they can get them, networks and their management tools should be able to see not just wired and switched wireless, but also "mobile" protocols such as HSDPA as part of the same, managed network architecture - particularly when things like Unified Communications really start to tip, and picocells extend such protocols into the office. Now that, if and when it comes, will be juggling.
Road testing the Palm Treo 500v
2008-01-17
Back at IT Forum in November, Microsoft gave me one of those snazzy new Palm devices to evaluate. It felt a little weird on the way home, given it was in the middle of the advertising blitz, to have in my bag the same thing that was appearing on the hoardings (and indeed, cleverly projected from the Heathrow Express onto the walls of the tunnel - not seen that before). Anyway, a couple of months have passed so it's about time I wrote up my findings. Working back from the conclusion, Microsoft and Palm have indeed come out with a device that should be more acceptable to the mobile masses than previous incarnations of Windows Mobile. There's a lot in that statement - so let's see if we can cover it off.
Microsoft has had a bit of a rocky road since it first caught onto the megalomaniac "Windows Everywhere" idea. At the time (so we were told), Windows was going to replace, well, just about everything. Anything was possible, indeed the kool-aid powered propeller heads at Redmond thought nothing of porting core elements of Windows to portable devices and setting out their own hardware configuration to support the "new" operating system. To cut a long story short, it all went horribly wrong: the pesky competition refused to roll over and play dead in a whole number of sectors, the open source movement was unwittingly catalysed (if not spawned) and, well, it turned out to be a lot harder than first thought to develop an interrupt-driven device OS.
We've seen several generations of Microsoft's mobile operating system, and several renaming strategies, leading up to "Windows Mobile". What we haven't seen up to now is Microsoft clicking onto the fact that mobile devices are not computers. Up to now, I say - because the Palm device does exactly that. I have been using it in parallel with, and lately instead of my Blackberry 8820. To summarise the positive findings:
- It functions like a phone. There's a lot in this statement: notably that I could give it to an 8-year-old to make a call, and they could get on and do it as easily as any of these new-fangled devices. The keyboard keys are a bit small, feel a bit cheap and make a funny clicky noise which is slightly off-putting, but they are usable.
- It doesn't need rebooting. Why on earth I should have to write this at all, is an indication of where Windows Mobile has come from (I would say and how far, but no-reboots should be true off the starting blocks). Still, it's a good thing.
- It is straightforward to navigate. Straightforward-ish - but I need to add the caveat that I was looking for things from previous incarnations in a way that I probably shouldn't have. Games, for example, are there but harder to get to; ActiveSync also needed some finding (but perhaps I shouldn't have needed to look).
- It doesn't lock up, crap out, put up strange messages etc, or at least it hasn't for me. I used to have to clear out running programs on a regular basis as they would crowd each other out of the memory - perhaps this is down to more memory, or more clever management, as a user I'm not sure I care.
- It's an acceptable size. Notice I don't say "small" - you could put it in a jeans pocket, but you wouldn't want to sit down. I agree with "chunky".
To put it bluntly, it works, "does what it says on the tin" etc. The second question is how well it functions as an email device, which needs to be treated separately as it is here that Windows Mobile diverges from other platforms, in terms of how it connects to the server. Some devices allow access to POP email, and Blackberry adds to this with access to Microsoft Exchange, through the Blackberry Enterprise Server (BES). One thing I've always liked about Windows Mobile is the fact that it integrates very smoothly with Exchange, reflecting exactly what I have on the server: more so than BES, which just doesn't deal with my email the way I want it to (example: when I move a mail from my inbox to an offline folder, I want it to be deleted from the Inbox: true in Exchange, but not synchronised for some reason to the Blackberry handset. I have other examples around synchronisation). Equally, I prefer the fact that Windows Mobile presents individual inboxes, whereas the Blackberry munges them all together. So, given the fact I'd prefer a Windows Mobile device for email anyway, it was unsurprising I quite liked how this one functioned.
Returning to the handset, then, what about the negatives?
- Battery life is still, to my mind, atrocious. I need to be able to go away overnight, forget my charger cable, and still be able to make calls the next day - not unreasonable I think, but not guaranteed with this device. Sure, I could be less forgetful, but that's hardly the point.
- No GPS. While I have been coping without this as a nice-to-have, it was very useful when I was out and about to flick over to Google Maps and get directions. There's still the maps, but no understanding of current location. GPS should be standard issue to mobile employees.
- It's not perfect - "deep" configuration menus aren't that easy to navigate, it's not always totally intuitive where to find things, and there are little niggles about all the different beeps and buzzes (why can't these all be turned off in one go?). The same could be said for Blackberry, however.
There are other downsides, but these are more a reflection of my geeky side than anything. I was disappointed (particularly as I'd forgotten mine) to find that I couldn't use the Palm as a Bluetooth microphone for my laptop, for example; I can't install everything I wanted to (notably the freedom keyboard driver); and also, the device isn't really designed to be used as an ultraportable computer. Tis a bit ironic really, given that Microsoft and Palm have clearly put such effort in, to be castigated for a design feature - but I did like to have something with a screen and keyboard big enough to be typed on. For most people however, this will not be an issue, just the opposite!
So, there we have it. In this world of iPhones and the impending Android OS from Google, there will always be plenty for the geek. I am reminded however, of colleagues, acquaintances and passing strangers I have seen man-handling eminently unsuitable, brick-like smartphones, when all they want to do is acknowledge a mail, send an SMS or simply phone someone. A Blackberry killer? No - but for such mainstream business users, who need to be able to make calls and access their corporate email simply and without a device that needs a week's training to use, this might be just the ticket.
Flock rocks - and Microsoft is not going to take over the world. Deal with it.
2008-01-17
It was Danny Bradbury that first mentioned Flock to me, a couple of weeks ago when I had one of my regular little tirades against Facebook (in general, I keep these out of my blogging because I might look back and find myself blaming the tool - but if I get one more request to bite a chump, I shall scream.) "I can't cope with all the messages," I said to Danny. "You need Flock," he said, a statement I promptly forgot until I saw it mentioned in an article by James Governor earlier today. That's when I saw it, not when he wrote it.
Two mentions is a strike in this game, so I promptly went off and installed the self-styled social networking browser, Flock. Simple in Ubuntu - if you're a geek who isn't fazed by creating a few symbolic links, editing config files, installing that pesky extra library or the (unmentioned) fact that there's already a command in Linux called "flock". 'Tis fortunate, I am. But I digress. I ran it up, Flock that is, and fell in love.
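The name clash, incidentally, is easy to demonstrate - util-linux has shipped a flock(1) utility for file locking for years, and that is what a bare `flock` at the shell resolves to, not the freshly installed browser. A quick sketch (the lock file path is arbitrary):

```shell
# util-linux's flock(1): take an exclusive lock on a file and run a
# command while holding it - this is what `flock` already means to Linux.
flock /tmp/demo.lock -c 'echo "lock held"'
```

Hence the symbolic-link dance: the browser binary needs to be invoked by an unambiguous name or a full path.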
As an analyst I need to retain a level of product independence, but phew, that's hard to do with Flock. It integrates a whole bunch of tools - blog reader, Twitter, Flickr and Facebook feeds and a bunch more - into a Web browser, to the extent that it feels like I have an HTML rendering engine as part of the package, rather than a browser with a bunch of plugins. You'll have to try it for yourself but for me, and from a usability perspective, I found myself an instant convert. As for the Facebook thing from Danny - indeed, the king of clutter is rendered as innocuous as Twitter.
Under the bonnet, Flock is built on top of Mozilla, just like Firefox (and I'm sure a real tech-head could tell you how the two packages align - indeed you might as well get it from the horse's mouth). That's about all I want to say about that. Meanwhile, we have the (unavoidably) ubiquitous, bigger picture.
I've been quite vocal about the lack of innovation in open source in the past. To be fair, open source wasn't mooted to be innovative - it was from the outset planned to offer "open" (aka freely available) alternatives to proprietary software. All well and good - but a bit frustrating sometimes, when the game has seemed to be more about delivering like-for-like functions. Why redevelop what's already there, I wondered, when there was a wealth of new potential to be tapped in the form of new functionality?
Well, innovation has been coming for a while, but its been building on top of the open source layer, rather than within it. Sites such as Google of course are built on the LAMP stack, as are many others: as Stephen O'Grady pointed out, MySQL is the Toyota Corolla of the Web world, offering an easy on-ramp for many a startup. From the shoebox-strewn plains of San Jose to places so esoteric we know them only as 'offshore', developers are building on top of both open and proprietary platforms with impunity, using whichever languages make technical sense for the job in hand.
And the game is just starting to get exciting. Despite my views on Facebook, and taking into account the debate around the more insidious risks to privacy, I have to take my hat off to the guys that built a service scalable enough to accommodate the massive growth it has seen. Social networking is a work in progress - we're going to see plenty of failures, and plenty of pundits dissing products before they've had a chance to find their niche. And, every now and again we're going to see moments of pure joy when a bunch of ideas and technologies come together and create something that really does affect how we live and work, for the better. It's too early to tell whether Flock will deliver one of those moments, but (as with every new innovation), I sincerely hope so. Flock offers a user experience which, for me, is head and shoulders above that offered by "traditional" browsers - the category occupied by both Firefox and Internet Explorer.
Which brings me to Microsoft. I'm sorry, but the idea that the company holds any kind of defensible monopoly position in this day and age, is just laughable. Is anyone, seriously, still "afraid" of Microsoft's position? Sure, it's got a stonking portion of desktop market share - but applications like Flock are symbolic of just how tenuous Microsoft's hold in any area could be, should the combination of functions be right in any alternative platform. This is not new - we've seen it already with Google, literally pulling the search rug out from under Microsoft's feet. How did this happen? Because people chose to use it, and Google had the business model to leverage all those eyeballs. If we believe IT to be a truly democratising force then we should be letting users vote with their technology choices, all the while ensuring that there exist appropriate levels of governance built into international law, so nobody feels locked into any technology platform.
This latter point is important, as we know the lengths that many technology companies have gone to, to get one over on their "partners". Microsoft's in the list of course, but it's by no means a list of one. We should remain vigilant - both to uncommercial acts and the underhand ways in which the law can be played to try to stifle the competition. I agree with James Governor's view on the recent EU announcements being a waste of taxpayers' money - "The world is changing. Documents today are not static. They flow through networks, largely enabled by a bunch of web standards. Some companies choose to go end to end Microsoft, for its “integrated innovation”. Other companies choose to go a non-Microsoft route to avoid integrated aggravation. That’s choice." James goes on to say: "To be fair all its doing is announcing investigations made at the behest of complaints from Opera and ECIS (otherwise known as the Anyone But Microsoft lobbying club). In other words these investigations may lead nowhere." What a pointless exercise.
For me personally, I shall continue to use non-Microsoft products as well as Microsoft products, deciding based on preference and functionality - that's personal choice. Corporations and public bodies, large and small are equally free to do the same, with the added factors of technical skills, administrative costs and migration challenges often keeping them with whatever's the incumbent. It's unlikely we'll see that many migrations off Oracle and onto MySQL in the short term, and neither do I expect many private organisations will be in a hurry to convert all of their old documents to any form of XML, even if they needed to - which they don't of course. That's the nature of legacy, but it's not where the real action is: developing new, innovative technologies and applications that really do enable us to do things that are different than in the past. Whatever happens to Flock in the future, let us take from it this one lesson and let innovation be the goal, wherever it comes from.
Blogged with Flock
Talking storage at Storage Expo
2008-01-18
For anyone that's interested, here's the page for the podcast I did at Storage Expo, talking about what was coming in the world of storage etc. Click here to download it directly. Here's the intro:
"There is no argument that the complexity and volume of business information is increasing exponentially, and hence the rapid evolution of the supporting storage technology. Tools such as data de-duplication, virtualisation and thin provisioning are moving with great speed from the drawing board to mass adoption. In this podcast, Jon Collins, Service Director, Freeform Dynamics, looks at the latest technologies entering the storage market, and predicts trends for the medium term."
Introducing the Buzz: assessing IT vendor mindshare
2008-01-30
In this, most oxymoronic of industries that is the IT industry, founded on the hard logic that characterises computing, procurement decisions shouldn't be about the whims and desires of our all-too-malleable race. But, in the words of an agent in the Matrix, we are "only human" - our judgments are coloured by an ever-shifting set of perspectives taken from our peers, from what we read, from the assumptions we make. And they have to be. Things are changing too fast, as genuine technology advances combine with the leapfrogging tendencies of IT vendors, whisked into a flurry by product marketing and hype. The result: an unending stream of 'new and improved' delivered with the suggestion that last month's investments are now 'old and inferior'.
Within this ongoing process of distillation there is a smoke-filled chasm of grey between genuine perspective and uncorroborated opinion. As technology buyers, we continue to develop business cases and conduct our due diligence, but our views, and indeed the very process we go through, will be affected by what's on offer, and who's providing it. "Nobody ever got sacked for buying IBM" is the old adage, and IT vendors are forever looking to occupy such a number one spot in their particular markets. The ultimate decision on major procurements may be with the CIO, but influence registers all the way down the stack: in certain geographies the views of front line engineers are courted as part of the due diligence process, and elsewhere, IT operatives have the power to block procurement decisions, or render the resulting deployments next to useless.
So, what drives such opinions, and how much of an impact do they have? We call the overall perception 'the buzz', but it's not always the case that to be front of mind is to be well considered. Freeform Dynamics partnered with The Register to produce The Buzz Report, collating readership views in such areas as leadership and culture. So, for example, Microsoft may top the list when it comes to mindshare, but it is VMWare that tops the leadership and culture charts. Meanwhile, as Wilde had it, there is only one thing worse than being talked about: not being talked about. Languishing at the bottom of the pile are companies like CA, Nortel and SAP, either because they are less well thought of, or quite simply, because their products just aren't sexy enough to give them a higher profile.
Such information should be handled with care, of course. While Apple may be the current darling of the desktop, it is unlikely in the extreme that we'll all be taking our laptops out of manila envelopes any time soon. Neither should we read too much into low mindshare, particularly for companies that are doing very well indeed within their respective niches. However, the buzz does offer us some deeper insight into the industry itself: while market share statistics help us understand how successful vendors have been in the past, it is the buzz that gives us an indication of how successful they might become in the future. For technology pundits as well as IT buyers, indeed for the whole industry, nobody wants to be betting on the wrong horse.
Tags: NFIT
February 2008
This section contains posts from February 2008.
On the influence, independence and impact of IT analysts
2008-02-07
I've been reading with interest the whole "influence 2.0" debate, as characterised by Jonny Bentwood and Duncan Brown. For "interest" read "vested interest" of course - as an analyst, I have the dubious tag of forming part of the "influencer community" - which at first glance (to me anyway) doesn't smell too good - for "influencer" one could read "schmoozer", "evangelist" or indeed the (slightly gangster-like) "persuader" - like I would be going and seeing some hapless organisation and convincing them that they should adopt a certain technology "if they know what's good for them..." When influencers of any description are not doing their job as well as they could, they are at best acting like advertising hoardings, and at worst, pimps. Frankly, I'd like to be neither.
Which brings me to a parallel debate - that of "independence". The concept of an independent IT analyst firm is a tenuous one at best. Most often, the term is used to describe smaller, "boutique" analyst companies, to differentiate them from the big guns (Gartner, Forrester and IDC). But surely these companies are independent as well - or aren't we all in a bit of a pickle? More fundamentally, given that we are actively working within the realms of the IT industry, can we really claim to be independent of all technological influences? Of course not. By nature, all IT industry analysts share a common assumption - that information technology can add "value" (a nebulous concept) to organisations and individuals. This assumption is by no means proven - we remain, in historical terms, still on the banks of the primeval technological swamp from which we emerged - but we operate on the basis that there is more goodness to be had, the further we can get up the slope. Or perhaps we are still in the swamp, looking for the bank. But I digress.
To base one's opinions on such an "onward and upward" assumption is very different from saying that technology offers some magical salve, to solve all ills. That elephantine businesses will be able to pirouette, and consumers will surf on electronic waves... this is, of course, pure and unrepentant schlock peddled by those who want to sell their products or consultancy services. The massive difficulty for all involved in IT is caused by a lack of foresight - it is obvious now (for example) how much of a global game-changer the internet has become, or the social effects of mobile phones in both western and developing countries; what's not so obvious is what the next "big things" are. The IT industry can be seen as a multidimensional betting shop on steroids - Californian venture capital companies and Wall Street, startups and gorilla incumbents, businesses of all sizes and in all sectors - and indeed, industry analysts - are placing their chips on whatever technologies they believe will give them the biggest return. Virtualisation, SOA, agility, social networking, compliance, green - these are all squares on a roulette table - roll up, roll up!
The IT industry is rife with agendas, and nobody working in technology today can claim any kind of independence from the core assumption above, that somehow, technology can make things better. This is as true for individuals as for companies - careers can be based on the depth of learning about specific technologies or delivery techniques (from my own experience I found it easier to get a job as a UNIX expert than as an "IT manager and all round good egg"). However, there is a world of difference between "technology can make things better" and "technology will make things better". The can-will distinction has nothing to do with the technology, and everything to do with how it is selected, deployed and operated.
And so, to the third i-word in the title of this post - "impact". Impact is a pretty hard thing to define, but a very easy thing to appreciate, particularly in IT. Most organisations are littered with the detritus of technological failure - applications that were never used or were superseded, servers and storage that were over- or under-sized, network and security devices that failed to be implemented usefully. In a conversation I've mentioned before, a Chief Information Security Officer was berating security vendors who didn't hang around for long enough after achieving a sale to actually ensure the product was configured and operated correctly. Why should they? Sales people are measured on their quarterly "number" of sales achieved, and not on whether their customers appreciated the solution once implemented. The agenda item of, "Don't quit until the product delivers the originally stated benefits," is often sadly lacking in the T's and C's.
Industry analysts, too, can be taken to task when it comes to the follow-through. Historically, analyst companies were set up to help organisations with procurement: to buy products, compare functionality, define a shortlist of suppliers and negotiate contracts. All well, good and useful, but while analysts have moved upwards into IT strategy, the heritage of the analyst industry means its attention also tails off post-deployment. James Governor has talked about Redmonk as being "make-side" rather than "sell-side" or "buy-side" - which is a great perspective in itself; it also illustrates the procurement-oriented focus of most other IT analyst firms.
Trouble is, procurement is not where the real action is in the companies implementing technology. The procurement stage is just one in a series of stages, each of which involves multiple stakeholders (not just the CIO!). The benefits ascribed to a given product area may be correct in principle, but they will be highly dependent on the business and architectural context, and on a whole set of non-technological criteria, assumptions and activities - as a grouping, we can call such things "best practice." Get the best practice right, and there is a far better chance of the technology achieving a positive impact for the organisation. Get it wrong, and the impact may be negative or - and this is just as bad in my book - no impact at all.
So - while it is important to be influential (no point in saying something if nobody is listening), it is equally necessary to have a positive impact on the whole of the IT delivery and operational process, not just the time spent at the shops. Analysts can't escape the fact that much of the IT industry is driven by agendas - and terms such as "market sizing" are squarely aimed at vendors' and their stockholders' agendas rather than those of end-users. Meanwhile however, the big advantage of a focus on best practice is that, in the (often-strained) dialogue between organisations and their IT suppliers, "best practice" is one agenda that both sides can fundamentally agree on - the vendor really does want to see its technology being successfully used even once the salesperson has received that stonking bonus, and the business stakeholder and operator will both confirm they are happier if the technology is actually achieving what it set out to do. Promoting best practice might not be as sexy as evangelising the latest and greatest technologies (and we love gadgetry as much as the next guy) but it is far more important than any transient attention paid to whatever's currently "hot".
When IT industry analysts flutter round the 60-watt bulb of procurement like tropical moths, they will all be clamouring for a share of the light. When the sun comes up the next morning however, it is the people in the ongoing decision making roles that need the most help - and who, also quite frankly, are often a darned sight more interesting to talk to than those who claim to have the ear of the CIO. I'm not going to claim to be technology-independent, but I can wholeheartedly and categorically state my desire to have maximum influence on ensuring the positive impact of IT through adoption of best practice. That may be outside the remit of influence watchers today, but so be it - I can cope if they can.
On police paperwork and technology promise
2008-02-07
As I was driving to the National Motorcycle Museum to give a presentation on archiving this morning, I listened to a constable from a Welsh police force on Radio 4, backing up the soon-to-be-announced (or has it been already?) statement that there is too much bureaucracy in the UK police force. For anyone who has difficulty (as I do) spelling bureaucracy, it is helpful to remember that the first part is "bureau", i.e. "desk" - which is exactly where our local bobbies seem to spend much of their time.
The constable was explaining the overheads involved in stopping a group of individuals and asking whether or not they had any knowledge of an incident (say, a punch-up round the corner on a weekend evening). Each conversation needs to be registered on a form, which takes 10 minutes to fill in - one can only imagine the number of such conversations that take place in central Cardiff on a Saturday night.
Now, of course we need to balance any statements about "overheads" with equal remarks about protecting the rights of the individuals that are stopped and questioned - but that doesn't seem to be the point here. Rather, one is left with the feeling that, if police officers have to talk to the public for anything other than passing the time of day, there will inevitably be more time spent documenting the conversation than holding it in the first place.
This point was reinforced by an IT professional at the archiving event, who happened to be from a city-based police force. I have to ask: is this as far as technology has come, if we still need our police officers to be filling out sheets of paper in triplicate? I had a bit of a chat about the alternatives to form filling - voice recognition, for example. To support the average copper, with the average regional accent, such technologies are still not ready for prime time, I am informed. "Police officers need technology to just work - not to sit through hours of training the software for it to understand them," I was told. This seems such an utterly fair statement given the 20 years of innovation we have seen since I kicked off my career - that technology should just work - so why is it not true?
The answer, I believe, lies in the gap between technology companies and systems integrators. Some of the things that end-users see as fundamental are not baked into the base software, but are left to partners to implement: an example in the voice recognition context would be the level of training required to support it. Surely there are mechanisms that can integrate with how people work today - passive voice recorders, local accent filters, document scanning etc - that could be used to make such technologies "just work" when used by people on the front line of the business?
I'm not totally despondent - but it's situations like this that serve as a constant reminder of, even with all the innovation going on, just how far we have to go with technology before we work out how to close the gap between the wonderfully clever capabilities it offers, and the real needs of its users.
Quick take: P2P document sharing between platforms
2008-02-11
This is certainly not a formal review but I was doing some investigations into how to share files between team members, potentially on different (i.e. Windows and Linux) platforms. Groove is the peer-to-peer standard on Microsoft Windows, but there are free alternatives - having tried a few, Collanos Workplace seems to be the one that has remained on my desktops. It passes the test of "just working" (though I seem to remember it did need some Linux config tweaks), it's not overcomplicated to use, etc. It does suffer a bit from some of the same assumptions as Groove - workspaces lack the ability to sync with local folders for example, which means backups in the traditional sense are difficult (and no, p2p doesn't mean no backups necessary - if you screw up on one peer, you screw them all.) But overall, it's as good a place as any to start for offline remote team co-ordination.
Apologies - I can't find the original list of products I was trialling, but there are plenty of them. I do remember that competing platform Collaber had firewall issues, so no dice.
Incidentally, for Windows document sharing, Foldershare (now acquired by Microsoft and integrated into Windows Live) does appear to be a very useful little tool. It does enable any normal folder to be just "shared", simply and effectively, so it's worth a look. Equally incidentally, the caveat with all such products is they can tend to lack scalability - if you want to put tens of megabytes into the folder at a time, remember how much available bandwidth you have, and how many peers you're sharing with.
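To put that scalability caveat in back-of-envelope terms, here is a minimal sketch of the arithmetic, assuming the naive case where the sharing peer uploads a full copy to every other peer over one upstream link. All the figures (file size, peer count, the 2008-era ADSL upload speed) are illustrative assumptions, not measurements of any particular product.

```python
# Rough sync cost for naive peer-to-peer folder sharing: upload time
# scales linearly with both file size and number of peers, because the
# originating peer pushes a full copy to each of them.

def sync_time_seconds(file_mb: float, peers: int, upload_mbps: float) -> float:
    """Time to push one file to all peers over a shared upstream link."""
    megabits = file_mb * 8  # convert megabytes to megabits
    return megabits * peers / upload_mbps

# e.g. a 50 MB file shared with 5 teammates on a 0.5 Mbit/s ADSL upstream
t = sync_time_seconds(50, 5, 0.5)
print(f"{t / 60:.0f} minutes")  # roughly 67 minutes
```

Which is why "tens of megabytes at a time" hurts: the pain is multiplicative, not additive, as the team grows.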
Sun buys VirtualBox.
2008-02-13
Sheesh. Does this mean I'm a Sun software user now? And all those years I said they didn't "get" software. Kind of ironic, that last night I picked up a copy of StarOffice for 20 quid from Staples. Two strikes, I'm out (but watch this space for a comparison of StarOffice, OpenOffice and Symphony)
Why another blog?
2008-02-14
Just in case anyone actually reads this and thinks - "but isn't he already posting here, and here?" - this blog is an experiment. In finding a voice that flows more naturally than article-oriented blogs; in self indulgence; and in the fact that I liked the URL and wanted to register it. I wanted to have a go at the kind of stream-of-consciousness blogging that some of my esteemed colleagues practice, out of curiosity as much as anything. It will concern itself more with tech issues and my professional life, for what it's worth.
If you found yourself here but don't know how or why, move along, nothing to see.
Drupal 6.0 is out
2008-02-14
Latest release of this popular open source content management system - worth a look for the Freeform Dynamics Web site? Certainly worth testing out. If it's good enough for The Onion, it's good enough for me.
You came all the way down here, just to do this?
2008-02-14
An interesting remark made by one of the attendees at Tuesday night's BCS meeting in Plymouth, where I was presenting on security topics. Yes indeed, I went all the way down there, just for that - but it was well worth it. While we would prefer not to live in a vacuum, sometimes it can be a bit isolating spending all of one's time in a cave number crunching, or indeed, off at vendor events being dunked in the vat of marchitecture. So, I take whatever opportunities I can find to talk to real people about real IT problems. It does more than keep me grounded - it gives me a reason to keep doing it.
For much the same reasons I shall be presenting at Business Continuity Expo and Infosec in the not so distant future.
In praise of voice recognition, with all the caveats.
2008-02-14
I really like voice recognition. There, I've said it. I get a nasty feeling that such sentiments will set me up as either non-independent, or a geeky laughing stock, or worse. But - it works for me: I dictated this post for example. With voice recognition at the moment however, comes a list of caveats as long as your arm:
- the only technology I've found that really cuts the mustard is Dragon Naturally Speaking, from Nuance. This is less a product endorsement, and more a statement on the fact that IBM ViaVoice and the Microsoft offerings just don't seem to measure up - at least they didn't last time I tried.
- you have to train it. No instant gratification with voice recognition
- results are not perfect - I had to edit the post above afterwards for example
- my experiences with voice-enabled phone directories have been, to be fair, mixed to poor.
All the same, I can see it having a place in our technological future - as a feature, not as a product. We're already (with Bluetooth headsets) over the idea of people talking to themselves inanely, so that shouldn't be a barrier. Well, we'll see - I've just ordered an OQO PC (a week ago - the first one had to go back because of a power supply fault) and the plan is to test it out as a voice rec device. As with everything, keep watching this space.
Martin Brampton and Mambo
2008-02-14
Well, I didn't know that. Martin was my old boss (or one of 'em) at Bloor Research - and it transpires that he has also been a major developer on the Mambo open source CMS project. Martin was a very wise and pragmatic analyst, but I didn't know he was an active programmer - which is more than I can say for myself! He's now developing another CMS called Aliro. Hat off to you Martin, I might well throw it into the pot when I take a look at Drupal.
What's all this guff about talent management?
2008-02-14
This is as much a diary entry as any, as I don't have time to write a full post. But tell me - weren't businesses always about hiring, promoting and retaining talented individuals? Or did I miss something?
Incidentally there's a great piece by the FT's Lucy Kellaway here, on management speak. Oh, for simple language in all walks of life.
Avoiding motherhood and apple pie
2008-02-14
One of the dangers in my line of work is to talk about common sense as if it really existed. For example: it is easy to find your car keys: simply decide on a place for them, and make sure you always put them in that place whenever you enter the house. It is indisputable that having a place for your car keys is a good idea. However, as we all know, we are only human, and there is no guarantee that we will always remember the most straightforward of advice.
I am sure it is the same when advising on any topic, though my experience is limited to information technology. Again, it is too easy to offer the most clearly sensible of counsel: know what you have; be engaged with the business; understand the risks. Such principles are absolutely sound, indeed, they are common sense. Once again, however, we are only human: unless I missed something along the way, I believe that most of our organisations are still staffed entirely by human beings, with all of their foibles. It's not that common sense isn't sense; more that it completely fails to take this astoundingly obvious factor into account.
Statements such as these can very quickly turn into platitudes if left unqualified: motherhood and apple pie. So, what to do? All that is required is a slight difference in perspective. A statement such as, "the IT department must engage with the business," is often seen as a pre-requisite for success. In reality however, it is an aspirational goal, to be achieved to a greater or lesser extent. Seen as aspirational goals, such statements appear as they are; we can also see the challenges faced in achieving them - in this case, the political issues, the general lack of trust, and the uncertainty about who exactly in the business IT should be engaging with.
This is no simple rhetorical difference. By seeing the motherhood statements as aspirational and focusing on the challenges, we can also focus attention on finding ways in which the challenges can be overcome - if indeed, the return really does justify the effort involved. Sometimes the status quo, while imperfect, may be the least worst option: perhaps, for example, the coat pocket is the right place to look for the car keys. To finish with yet another trite platitude, if it ain't broke, don't fix it.
How was your week?
2008-02-15
For information, mine was pretty good. Had some great conversations - particularly with David Rossiter, Ian Murphy and Michael Hyatt - did some good business and generally got on top of things. I've been thinking a lot about security, power over Ethernet and IT-business alignment, as well as IT analyst independence and the general state of the industry. Somehow, it all ties together.
The OQO is here #2. Phooey.
2008-02-15
A week or so ago I received an OQO, bought to test voice recognition a bit more. The power supply was bust... so I sent it back. On this one, the battery appears to be duff. Cursory googling reveals I'm not the first with power issues. Please, Expansys, don't make me send the whole thing back again or I may not want to get the replacement. Well, I will, but patience is wearing thin.
Looks nice though.
Declarative living and disclosure
2008-02-15
There's been a lot of chat on the AR back-channels about IT analyst independence. One thing I have wondered however, is how difficult it is to keep up appearances when using such microblogging facilities as twitter. A bit like, sure, you can fudge your musical preferences with last.fm, but is it really worth it? Similarly, you'd have to be pretty sad to use a made-up persona with Twitter, particularly if (as I am) you use it to connect with people who know you.
Robin's must-read AR series
2008-02-15
Just read 'em. You know it makes sense. Despite little of the analyst "industry" having much of that.
How To Deal With Analysts: #1 The Pre-Briefing
How To Deal With Analysts: #2 AR or PR?
How To Deal With Analysts: #3 Know The Analysts
How To Deal With Analysts: #4 What Do Analysts Do?
How To Deal With Analysts: #5 Analyst Research
How To Deal With Analysts: #6 The Gartner Gorilla
YouTube fame from the Cisco analyst event
2008-02-16
Look Mum, I'm on telly again... rather unnerving how the YouTube site says "Jon Collins of Freeform Dynamics offers his ass..."
Typed from the OQO
2008-02-17
phew - the advice given about kick-starting the OQO battery worked. Now installing: just about everything...
... actually, as an update I have only installed Windows Live Writer, Skype and Dragon Naturally Speaking. Will no doubt get round to Office soon.
This is a dictated posting
2008-02-17
Goodness gracious me, I am trying to use this computer without pressing the key is. It still needs to learn my voice but not a bad start
I am going to trying to publish now.
Can Power over Ethernet make networks greener?
2008-02-18
It is always dangerous to speculate, particularly in this industry, which sometimes seems to be more founded on speculation than practical reality. Consider, for example, Power over Ethernet (PoE) - essentially offering a way of delivering power through an Ethernet cable. Today, there is a multitude of different devices that can be attached to a network - WiFi repeaters, video cameras and so on - whose location may not be near a power socket. It makes sense, therefore, that the wire used to connect the device to the network is also the wire supplying the power.
Where PoE has really come into its own, is with VoIP phones – telephone handsets that use a network-based infrastructure rather than a traditional PBX. Voice over IP handsets are exactly the kind of devices that can benefit from power over the net, just as old-fashioned analogue handsets are powered by the PBX. The alternative is to have a transformer next to every phone, which occupies a socket and is one more thing to go wrong.
The downside of PoE is, of course, in the “P”. I've written before about how hard it is for hardware vendors in general, and networking vendors in particular, to claim any sort of green credentials for their equipment. The fact that PoE is delivering power, makes it a bit of an anathema to green – particularly as the latest iteration of the standard enables more power, not less, to be delivered over the Ethernet ports. According to the marketing, such power increases are required to support the increasing complexity of VoIP handsets. Colour screens, bigger processors, more memory – all of these things will take their toll and become more of a draw on the corporate power supply. That's all very well, but it's not very green, is it?
On the surface, then, Power over Ethernet can hardly be held up as a poster child for green IT. That's not necessarily the end of the story, however. Let's consider some of the plans, and likely developments in the PoE space: not least that it may well become built into switches by default, rather than as an exception. From a systems architecture (and indeed, from a manufacturing) perspective, there is little difference between powered and unpowered Ethernet ports. One of the larger network vendors told me that the chances were most of their switches would build in PoE to all ports, at some point in the future.
In principle, that's still not very green – but there's more. There are no concrete examples yet, but vendors are also talking about incorporating power regulation directly into network switches: put simply, enabling the switch to regulate supply according to demand. It is not beyond the realms of possibility to imagine the automatic power-down of devices outside certain hours, or indeed, when no data signals were detected (pretty obvious for IP phones, for example). To take this one step further, it is within the realms of possibility to produce handsets that require only a trickle current when in standby mode – and which could signal their requirements to the switch.
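As a thought experiment, the demand-based regulation described above could be sketched as a simple port policy. To be clear, no vendor has shipped this; the class power levels below roughly follow IEEE 802.3af (about 4 W, 7 W and 15.4 W at the port), but the function names, thresholds and office-hours policy are all invented for illustration.

```python
# Hypothetical PoE port power policy: supply full class power when a
# device is active during office hours, and cut power outside hours or
# after a sustained idle period (no data signals detected).

POE_CLASS_WATTS = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4}  # approx. 802.3af levels

def port_power(poe_class: int, hour: int, idle_minutes: int,
               office_hours=range(7, 20), idle_cutoff=30) -> float:
    """Watts the switch should supply to one port under this policy."""
    if hour not in office_hours or idle_minutes >= idle_cutoff:
        return 0.0  # power the port down entirely
    return POE_CLASS_WATTS[poe_class]

# A class-2 IP phone at 10am, recently active: full 7 W.
print(port_power(2, 10, idle_minutes=5))   # 7.0
# The same phone at 2am: nothing drawn from the corporate supply.
print(port_power(2, 2, idle_minutes=5))    # 0.0
```

Even this crude policy would reclaim roughly half of each phone's daily consumption; trickle-current standby, as suggested above, would do better still.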
Taking such thoughts to their logical extreme, would it be possible to furnish an entire building with a highly regulated, low-voltage, direct current power circuit based on flood wiring (that is, the networking sockets on the wall)? In principle yes – though indeed, there are a number of hoops to be jumped through first. Not only are there the technological hurdles such as the ones above, but also some basic truths, such as the fact that most network wall sockets are not actually enabled: they may connect to a patch panel somewhere, but this will not necessarily be connected to a switch.
All the same, while it may not yet be possible, there is certainly potential. Such a circuit might, for example, be able to replace the currently obligatory raft of telephone and PDA chargers that litter our offices – indeed, I discussed such a thing with one of the senior guys at network wiring specialist CommScope (who brought up the “not-all-ports-are-wired” issue – thanks Ian). Perhaps it might never happen, but it is often only in hindsight that we understand how technologies are to be used: in this case there has already been a precedent set with the charging potential of USB. Why not the same with the network? Such an infrastructure would be able to support a broader range of devices, far more straightforwardly than relying on the mains: as my colleague Tony Lock has pointed out, consider the efforts of the thin client vendors such as Wyse, who are bringing out devices with power requirements small enough to be powered by PoE alone.
Indeed, it can be dangerous to speculate. But equally, just as many technologies also have a downside, so there may be some upsides of PoE we are yet to experience. Just perhaps, and even taking into account the cost of manufacture, Power over Ethernet might just offer an opportunity for networking to demonstrate its green credentials at last.
Should we be using computers to heat our own houses?
2008-02-20
A random thought, prompted by a discussion with APC a few years ago. I was surprised to discover (having clearly been a poor student in O Level Physics) that the amount of heat output by a rack of processors, storage etc was exactly equivalent to the amount of power that went in. I know, it's so obvious it hurts. More recently, there are plenty of stories of office blocks being heated using computer equipment. The question - as I sit in a relatively warm room, no doubt due to the two computers pumping out hot air right now - is whether such a strategy could also be adopted by the "connected home"?
Which begs the next question - which is the more efficient heating device - the computer or the oil-fired radiator - and why? It would be funny if, at some point in the future, processor cycles were seen as a knock-on benefit of our silicon-based wall heaters...
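The question above can be answered with a line of arithmetic: electrical equipment is effectively a 100%-efficient heater (all the power in comes out as heat), so the comparison hinges on the price of the energy, not the conversion. The prices and boiler efficiency below are illustrative assumptions, not quoted tariffs.

```python
# Cost of one kWh of delivered heat, given fuel price and conversion
# efficiency. A computer converts essentially all of its electrical
# input to room heat, so its "heating efficiency" is ~100%.

def heat_cost_per_kwh(fuel_price_per_kwh: float, efficiency: float) -> float:
    """Price of delivered heat in the same units as the fuel price."""
    return fuel_price_per_kwh / efficiency

electric = heat_cost_per_kwh(12.0, 1.00)  # electricity at an assumed 12p/kWh
oil      = heat_cost_per_kwh(4.0, 0.85)   # oil at ~4p/kWh, 85% boiler efficiency
print(f"computer heat: {electric:.1f}p/kWh, oil boiler: {oil:.1f}p/kWh")
```

On these assumed numbers the oil-fired radiator wins on price per unit of heat; the computer's case rests on the processor cycles it delivers as a by-product.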
Twits and Faces
2008-02-20
Thanks to Flock I've just about got on top of Facebook, and I'm appreciating Twitter more. I find the latter useful and fun, which is more than I can say for the former (on either count). Proof, if any was needed, that different people need different tools. Well, duh.
On press releases and ambulance chasing
2008-02-21
A while back, I remember seeing a sketch by Eddie Izzard. The detail eludes me but roughly speaking it covered the cyclic nature of being cool. One could progress from totally uncool, to slightly cool, to cool, to - put one matchstick in the corner of the mouth - very cool, to - put another matchstick in the other corner - totally uncool again.
So it is with technology-related PR, and nowhere is this more starkly illustrated than in the press releases associated with IT security. I have written about how hard it can be to incite a sometimes apathetic audience into action about very real threats; equally, many IT managers will agree how difficult it can be to get funding for security-related purchases. IT security companies have a vested interest in both of these issues: they are obviously not working altruistically. However, in my experience the majority nonetheless do want to deliver value to their clients.
Such desires may be reflected in IT security PR, which often needs not only to explain what a company does, but also why it matters. Frankly, when a "bad thing" is reported in the media it can be a gift for any company that offers products in that area - but what to do when there is no bad news to piggyback on? The answer is to put out awareness-raising press releases, to augment the more standard 'customer win', 'expands in Europe', 'new partnership' fodder. It is here, just as with Eddie Izzard's sketch, that we find the line which should probably not be crossed.
What are the different kinds of press releases? I would grade them into four categories:
· Best practice activity. A vendor may have put together a set of guidelines explaining how to deal with an issue. While it is a fair assumption that it may reference their product or service, it may also contain some sound advice. Press releases saying that a vendor has documented some best practice are little more than treading water in PR terms, but they are innocuous enough.
· Publicising research findings. A security vendor may conduct a study to highlight the scale of a given problem. This is useful when although the area is known about, there is general complacency that the issue has already been dealt with, or that it only happens to other people. Indeed, this is often the kind of activity that we get called in to help with – anonymous surveys may be the best way to talk about an issue that nobody is supposed to have.
· General awareness raising. These tend to be more educational, to highlight that a problem or threat really does exist. A good example of this would be PR surrounding man-in-the-middle attacks, which are a valid candidate for awareness raising. The only downside is that sometimes such press releases assume the audience knows what is being talked about, which is more than a little counterproductive.
· Publicising specific examples of where things have gone wrong. This is probably the worst kind of awareness raising press release. At best, it draws attention to an example of where the threat has been realised, or malpractice has been found in that, “I told you so,” kind of way. At worst, it can only be construed as ambulance chasing, using some unfortunate soul who has found themselves wanting, and attempting to bask in the reflected publicity.
Don’t get me wrong. In general, I like receiving press releases. I may not read all of them, end to end, but I am not embarrassed to admit that I cannot keep on top of everything that is going on, all the time. So, if I am told about a threat that I did not know existed, or indeed about a product which can in some way resolve that threat, I can add this to my catalogue of knowledge. Equally, however, I make no bones about the fact that I detest 'ambulance chasing' press releases. While I concede that it can be useful to use such incidents as examples, they should be no more than a passing mention in support of any of the other kinds of awareness raising. Consider the difference between the following two statements:
· “The HMFE were foolish, and should get their act together,” said Charlie Farley, vice president of security firm Ultrasecurix. “By using technologies such as ours, it would never have happened in the first place.”
· Ultrasecurix would like to announce the latest iteration of its product. “It has been redesigned from the ground up to deal with the latest generation of threats,” says Charlie Farley. The many features include… which enable comprehensive protection. “Situations such as those highlighted at the HMFE only serve to highlight how things are changing and the need to stay vigilant.”
OK, the latter requires the company to have actually done something, which should maybe be the prerequisite in the first place. If, however, you feel the need to put out awareness raising press releases, remember the first three kinds before settling on the fourth. The bottom line is, if you can't be constructive and add value in the first few paragraphs, then please don't bother at all.
Quick take - Microsoft releases binary document formats under OSP
2008-02-21
Phew - and about time too!
Xobni, like Flock, also rocks
2008-02-22
Xobni is the Nelson Email Organizer of the social generation. Wonder what happened to NEO? Still a great product, but fails the compelling test once desktop search is installed.
The great keyboard bake-off – Bluetooth versus Infrared
2008-02-24
Mobile devices are growing up, to the extent that the latest breed of XDAs and iMates are, essentially, small computers with integrated phone functionality. Despite this rapid evolution in capability however, if you want to type anything significant you’ll still need a good, old fashioned keyboard.
Here we look at two keyboards for Personal Digital Assistants (PDAs) and smartphones. One is the Freedom Keyboard from freedomkeyboards.com, which uses Bluetooth; the other is an infrared keyboard from ThinkOutside. These were tested with a Dell Axim X30 (Windows Mobile 2003) and a QTek 9100 (Windows Mobile Version 5.0) – behaviour may be different for Symbian and other operating system users.
So, here goes.
Out of the box experience
Both keyboards come with the same contents in the box: the keyboard itself, a leatherette case, a couple of triple-A batteries to power the keyboard and a CD-Rom containing the drivers. I confess I couldn’t see the point of the leatherette case, which cleverly increases the bulk beyond that which is portable in a standard jacket pocket. Perhaps it is useful for those types of people who like zipping things up before they put them in their briefcases.
It always seems a bit strange to me that drivers for a PDA device are supplied on a CD-Rom, as it sort of assumes that the purchaser will be synching with a computer. This may not be the case if, say, they are travelling, or even if (perish the thought) they just don’t have a computer! The alternative is to download the drivers from the Web, but when I went to the Freedom keyboard web site and selected the drivers page, I was told, “This option will not work correctly with frames.” Checking the box, I realise I was at the wrong site – I should have gone to www.frekey.com. Now I know, but a link from the main site would have been handy.
The two keyboards are roughly the same size when closed; indeed each can fit in the other’s leatherette case, to the extent that I can’t remember which goes in which. I think the ThinkOutside keyboard is a fraction thinner, but not enough to make any difference (unless you’re trying to squeeze it into the back pocket of your jeans). Looking externally, the ThinkOutside keyboard is in strong injection-moulded plastic, whereas the Freedom has rubberised, protective edging.
The ThinkOutside has a clasp that doubles as the PDA/phone support, and the Freedom has a simple latch. Neither is perfect on closure – the latch doesn’t seem to shut properly, and the clasp can end up in the wrong position; I could imagine snapping it off when trying to force it shut. Speaking of snapping, there's a piece of plastic on the Freedom (next to the PDA stand) that looks like it will snap off at some point, but it doesn't affect the function so it's not that big a deal.
Maybe, I muse, I should just be more careful. I open both keyboards, insert the supplied batteries and sit them on the desk in front of me. The ThinkOutside just opens in the right position, with an infrared “arm” that can be rotated to line it up with the infrared transceiver on the device. Less obvious was the Freedom, which required me to consult the manual (quelle horreur!) to find out I needed to get the stand out of its slot. No big deal; more importantly, I find that the Freedom doesn’t quite sit properly on the desk. The central hinge is not letting the keyboard open flat, with the result that one end or the other is hovering a millimetre or two above the desk. I wonder whether this will mean I can't type properly, which, of course, would be a disaster. More on this later.
Finally, I look at the layout. The ThinkOutside keyboard has three lines of keys, whereas the Freedom has four. Advantage Freedom: separate keys for the numbers. In general the Freedom's layout is much, much “cleaner” than the ThinkOutside's, which has two function keys and various key over-printings in blue and green. On the positive side, the ThinkOutside has larger keys, so it’s a trade-off between clutter and key size. Again, more on this later. First let’s look at how the keyboards connect to my test PDAs.
Connecting to the device
Recall that Freedom uses Bluetooth, and ThinkOutside uses Infrared. I’m always a bit dubious about Bluetooth – too many bad experiences with poor software stacks and incompatibility between transmitters and receivers. Trying the Axim first, I confess thinking “here we go again” when I installed what I thought was the most obvious driver for Windows Mobile 2003, direct from the “Frekey” Web site, and it failed to connect to the keyboard. Freedom Input does offer an 8-meg download which contains all the drivers, past and present, and once I’d installed the one that was specifically for the Axim X30, I was up and running. I didn’t repeat the “Frekey” saga for the 9100, going straight for the Wizard.
Perhaps it serves me right for trying to be clever in the first place. No points deducted, but an extra house point goes to ThinkOutside. On linking to the Web site from either PDA, I was automatically prompted with a “compatible” driver. These downloaded easily, and I was typing before I knew it. With infrared, it was a simple case of enabling the connection. Not so good with the ThinkOutside was the assumption that the device would be less than a certain thickness. This was fine for the Axim, but not so fine for the rather chubby Qtek. The latter had to be either balanced on a metal bar, or opened out, which risked pressing the QTek’s own thumb-keyboard.
The Freedom keyboard was much more versatile – not only could the stand support both devices, but the stand could also be detached, so the device itself could be positioned on a surface while the keyboard was put wherever made it easiest to type. Being able to separate the device from the keyboard was a distinct and unexpected advantage.
Both keyboards and both devices responded well to an “off-on” test, retaining the connection. The drivers cohabited quite happily as well, to the extent that I could type on one keyboard, then the other… exactly why I would need to do this is beyond me, but it was nice to know. Equally useless but interesting was the fact that I could switch devices on the ThinkOutside keyboard, a feature not possible with the point-to-point connection of Bluetooth, which had to be reset every time I switched. Note there is an auto-reconnect feature when connecting the Freedom to the same device, which is going to be the normal mode of operation for the vast majority of users.
Overall, both keyboards offered wide ranging compatibility, offering the latest drivers as well as being backwards compatible right back to the first PDAs. If I was buying a keyboard en route, and had no PC with me I would be wary of the driver availability from Freedom, but this is a small point (advice: if you’re thinking of going for the Freedom, download the drivers before you leave for the airport!).
More important, however, is how suited the keyboards are to their core function: typing. Let's look at this.
Usability – the finger and knee tests
Typing is as typing does, as Forrest Gump once said off-camera when nobody else was listening. With keyboards, when all’s said and done, typing capability is the thing that matters most – and there’s very little between the ThinkOutside and the Freedom on this count. Both have a set of conventional, laptop-type keys, so there are no worries about the “dead cat” feel of rubberised keys.
It's worth mentioning that there is a learning curve, in terms of being able to hit the right key at the right time – this is going to be true for any small keyboard. I found it quite easy to use both the more standard-sized keys of the ThinkOutside and the smaller keys of the Freedom, though I do have quite small fingers. Potential users with hands like bunches of bananas may well be better off with the ThinkOutside.
As mentioned, the ThinkOutside keyboard has many more key assignments than the Freedom. The extra key assignments are helpful when they're needed, but they can be exceedingly unhelpful when they're not, as their blues and greens distract from the white printing of the letters themselves. While the Freedom is far less cluttered, the disadvantage is a lack of obvious features - the Freedom does not show "Page Down" for example, though it is easy to work out that it’s "Function-Arrow Down".
Keyboards of this type always seem to suffer from a small amount of character-dropping, so you need to keep an eye on the screen. Both keyboards suffered from this, particularly with the space bars – something to do with the positioning of the microswitches. I say “space bars” as each keyboard has two, one on each side of the hinge.
Speaking of hinges, let's go back to the issue with the Freedom – that it doesn’t sit properly on a flat surface. The good news is that this didn’t prevent me from typing, which would have been a write-off. I did find it slightly irritating, but it was nothing that couldn’t be solved with a well-placed blob of Blu-tak. I do wonder whether this is an issue for all Freedom keyboards, or perhaps the hinge was stiff on the one I was sent; I didn’t try to force the hinge at all, in case I snapped it in two.
Of course, such keyboards are not always going to have the luxury of flat surfaces to sit on. When it comes to balancing keyboards on one’s lap, the Freedom keyboard wins hands down. When opened out it only has one fold, so it is much more stable as a knee-top. In addition, unlike the ThinkOutside, the Freedom doesn’t need the PDA itself to be balanced precariously on top. I found I could sit a PDA on my steering wheel (not while I was driving, of course!) and quite happily use the Freedom keyboard on my lap; this was not really possible, or particularly comfortable, with the ThinkOutside. Note however that both offer better balance than the concertina-type keyboards from a number of suppliers. These open out larger and look good, but they do require a suitable surface.
Final thoughts
Overall then, which keyboard would I go for? There is no hard and fast answer, as both can handle the basics. If I were travelling in a way that couldn’t guarantee flat surfaces, for example road tripping, trekking or camping, I would probably choose the Freedom keyboard. If I wanted to take multiple devices with me and switch from one to the other with minimal fuss, I would probably take the ThinkOutside keyboard. Whichever I took, I would be able to get the job done.
While the Freedom wins on gadget value (watch your kids’ faces light up when you type a message and it appears magically on the device – okay, I’m being a sad Dad now), I admit to a personal preference for the ThinkOutside. Now I have got used to the cluttered keys I find I can work faster with it; I also wonder about such issues as Bluetooth interference with Wireless Ethernet, Bluetooth security and potentially battery life, though I have no evidence to prove whether the latter is valid: the batteries in both keyboards have been working fine for several months now. I’m not sure about the rules around Bluetooth on planes, which seem to vary between airlines – another plus point for the ThinkOutside.
To conclude, then. Not so many years ago, infrared was seen as a technology past its sell-by date, to be replaced by “new and improved” Bluetooth. Here we are in 2006 and infrared is still holding its own: it remains a viable alternative and will no doubt continue to do so, as long as PDAs and other such devices support infrared connectivity.
There is a question whether such keyboards will be necessary at all, given the continued lowering of the entry price of laptops. All the same, there is a growing interest in keyboardless devices such as the Oqo and other Ultra Mobile PCs (UMPCs). These devices often come with an integrated keypad, which is insufficient for any serious typing. For now, keyboards such as the ThinkOutside and the Freedom Keyboard offer a relatively cheap way of turning a handheld PDA or smartphone into a portable word processing device. Not everybody is going to want to stare at such a small screen for too long, but at a tenth of the cost of the average laptop, there are plenty of reasons to invest in one.
Geeking out: testing portable keyboards
2008-02-24
I wrote this review of Bluetooth and infrared keyboards a while back, and then promptly forgot to do anything about it, so here it is. A word of warning - I have had issues with the (increasingly locked down) drivers for the Freedom Keyboard. Still, while I'm loving my OQO (review to follow), I can still see a place for these things. I hope it's useful!
Goodness gracious!
2008-02-28
It's been a while. But I'm back. And there will be books. Lots of books.
Incidentally, this made me laugh - you mean, music reviewers don't always listen to the whole thing? Shurely you can't be sherioush ;)
Presenting on Governance in Virtual Worlds
2008-02-29
For anyone who's interested in either topic, I'm going to be presenting on the role and impact of business governance in relation to virtual worlds, in a few weeks at the ISGIG conference in Pisa. What an irresistible topic - here's my outline so far:
There is (currently anecdotal) evidence that immersive environments such as Second Life are losing their mainstream popularity, as potentially are such social networking sites as Facebook. All the same, together with such technologies as telepresence, the potential for such collaborative technologies is great, in terms of how they enable stronger relationships to develop, with the subsequent impact on productivity; virtual worlds also offer the opportunity to interact physically and collaboratively, for example to demonstrate a product prototype. But there are plenty of downsides – not least the potential for abuse, which is leading many corporations to ignore, if not avoid such technologies. This presentation considers the benefits and challenges of socially enabled virtual worlds, and gives examples of where organizations are using them for corporate benefit while minimizing the governance risks and operational challenges they cause. Where are the boundaries between real and virtual worlds, and how do they interface with social technologies? What are the problems of doing business in a virtual world, and how is that affected by real world business and regulations? Also, if Second Life is indeed losing its sheen, what’s Third Life going to be like?
Unfortunately Second Life doesn't run on the OQO 01+ but if anyone's interested, you can contact Nathan Neumann; I'll be in there sporadically.
Blogging straight in at the long tail
2008-02-29
I was looking at my work blog stats just now. Hmm. Only 5 hits yesterday. "Is it really worth it?" I asked myself, as I scrolled down the page - to see that since I started it a couple of months ago, there have been over 3000 reads of the site. Not in a gush, but a continuous stream - which makes it all worthwhile really. That's 3000 more fragments of conversation that I would never have had, augmenting the many other tweets, IMs and other online communications that themselves augment RL.
And the trend is up, always up.
10 things I like about the OQO Ultra Mobile PC (and a few I don't)
2008-02-29
I've been road testing my new acquisition - the OQO Model 01+ UMPC running Windows Tablet. I've been hankering after one of these for a while, but it is only recently that price has dropped to a justifiable level (340 quid + VAT from Expansys). So, what's so good about it?
1. It really is a real Windows computer. Not a PDA, or some other device running Symbian or Linux, but a fully fledged Windows PC. This isn't some Microsoft-hugging statement, more a simple question of broad application support, specifically for voice recognition (see 3) and mind mapping. Bluntly, the things I want to do with this device, I can.
2. I can get it out on the Tube. Indeed, I can get the OQO out just about anywhere. It is all very well checking a map on a laptop, but it is a bit of a drag having to walk the streets with a 15 inch computer screen open in front of you. Much of the challenge is logistical (see 8), but equally, the London Underground is not seen as a place for laptops - journeys are shorter, and the potential for theft is reputedly higher (see 7).
3. It really does work as a voice recognition Dictaphone. This was the main reason for justifying the purchase of the OQO, as a proof of concept: I am very surprised that such a capability has not been tested publicly before. It's not perfect, but it does indeed work: I shall be writing more about this in a future post.
4. It is a tablet PC. If XP Tablet edition is installed, the benefits that apply to tablet PCs also apply to the OQO. This includes quite reasonable handwriting recognition: some people prefer to write than type, and indeed it is a lot friendlier in meetings to have someone scribbling on a tablet than tapping behind a laptop.
5. It really is very small. This may sound like stating the obvious, but it is true. The advantage of size is that it can be taken places where a normal computer could not go: it can fit, for example, in a jacket pocket. Yes, you absolutely know it's there, but it's not half as obtrusive as a full-size laptop. So if, like me, you sometimes find yourself with that dilemma of whether to take a computer or not, for example to a meeting - then you still can, taking all your files with you.
6. It can be taken on holiday. Yes, yes, I know, it shouldn't be necessary to take computers on holiday. However, those working in smaller companies don't always have the luxury of choice; equally there are plenty of uses of a computer that have nothing to do with work. The convenience of the OQO means that it can be put into the bottom of the case and forgotten until it should be needed.
7. It is more surreptitious than a laptop. Because of (5) it is easier, nay possible, to put an OQO into the glove compartment of the car, and it is less of a theft-magnet in general than a fully fledged laptop. From a near distance it looks like some obscure games console.
8. It can be used standing up, or while walking. My train ride home yesterday involved an hour's standing in a tightly packed carriage, but I was still able to finish off the day's affairs by completing a report and closing down my email. It does require two hands to use the keyboard or pen, however. As another example, a pretty standard thing for me to do on a flight is to get back up to date with my e-mail. With the OQO on Tuesday, I was able to upload my e-mail as soon as my plane had landed and the seatbelt light had gone off, which for me was a real boon as I could then go straight to my car in the knowledge that all those pesky messages had been sent.
9. It can be powered by a portable battery. A couple of years ago I bought a 12V extension battery from Brookstone in the US, for the express purpose of acting as a backup power supply for my gadgets when I was out and about. The extension battery is completely inadequate for laptop use, but it can power the OQO via the latter's own 12V adaptor input. Together with (6), this makes the OQO a much more suitable device for camping trips etc, when access to mains power may be sporadic.
10. It looks good. This is very much "last but not least" - but I did get a buzz when the usually dour security staff at Gatwick struck up a conversation about it. Having technology as a talking point doesn't have to be limited to Mac fanboys, you know!
What's not to like? Well. I wouldn't suggest the OQO as a desktop replacement - with the caveat that I have bought what is now an old model, the OQO is underpowered compared to what multicore desktops can do. Having said that, my virtualisation experiences have led me to believe in the model of smaller computers that are scaled to suit the workload, and the OQO 01+ is an adequate base for office and email use, running on XP. Even so, the screen size is a decidedly limiting factor when it comes to usability - I have found myself frowning when starting to use it, as though some part of my brain is trying to work out whether the OQO is just a normal sized computer, but a little too far away.
A second issue is around power. The first OQO I was shipped had a faulty power supply, which I understand is a common fault; the battery, when fully charged, can power the device for up to 2 hours only, though there is a double-capacity battery available (Expansys was shipping spare batteries for 20 quid each, so I bought two of these instead). Finally, a battery "feature" is that, if fully discharged, they sometimes need to be plugged in for up to 24-48 hours before they will trip back into charging mode. Nice.
Having said all of that, as a proof of concept (to me) it is keeping its end up admirably. I would love to see an OQO-sized brick that could be inserted into a laptop or desktop form factor like a hard drive, and I am surprised, given its clear usefulness, that we do not see a wider audience for the OQO - I would speculate that this is because few have the luxury of two computers. From the research we conducted last year it was clear that PDAs wouldn't be replacing PCs any time soon - as costs continue to tumble I expect to see the UMPC form factor to reach a much wider market, not to replace the laptop, but to extend the web of mobile computing still further.
March 2008
This section contains posts from March 2008.
Going back to a dual boot PC
2008-03-03
D'y know, I feel a bit gutted about this but the screen freezes have got the better of me - and I just want to be able to use a computer without ... all ... the ... interminable ... pauses. So, I'm reinstalling XP on my primary machine, and sticking with Ubuntu Linux on there as well.
Essentially, if I want to use virtualisation in Linux, it's got some kind of conflict on my machine with the graphics controlling software (not sure which bit).
No, I'm not going for Vista right now.
Computers that just work...
2008-03-04
... and do we need faster computers anymore?
Themes I'm thinking about. Interestingly, the former points towards Microsoft and Apple, and the latter points towards open source. I think. More to follow.
Will Hillary divorce Bill soon?
2008-03-04
A thought occurred to me as I read about Hillary Clinton's "last stand" against Mr Obama. If she lost the nomination, would she then - after a suitable time - give Bill the boot for being so generally unreliable as a husband in the past?
I know, thinking out loud. But I wouldn't be surprised.
Who has time to blog?
2008-03-04
... was just tweeted by Carter Lusher. Dunno - can't imagine how it gets done.
Pitching at briefings, and the evolving analyst space
2008-03-09
Here's a story. When I first became an analyst for Bloor Research in 1999, I remember going to a briefing with IBM at Bedfont Lakes near Heathrow, with none other than Steve Mills. I knew he was important, but frankly, I had little idea why - I had come straight from an IT consulting background, and vendor politics were as familiar to me as fly fishing on the moon.
Equally, I had little idea what to do in the briefing. 'Ask intelligent questions' was about all I could work out - but I did speak to some of the analysts from other firms, a couple of whom I knew from my first ever analyst trip a few weeks before - and one of whom, it would be fair to say in hindsight, knew the game inside out and was treating it with the cynicism it no doubt deserved.
"Briefings are for selling to vendors," said this analyst, himself from quite a large analyst company. Sadly, this left me none the wiser - I'd never sold anything in my life, and I was rather more confident of my fly fishing skills. All the same, like any rookie analyst, I took the advice on board.
It was when I joined one firm as an associate on the basis that the money I earned would be proportionate to the business I brought in, that I started to think quite earnestly about pitching analyst services during briefings. From my standpoint, it was what everybody did, if they didn't have full time sales teams.
However, when I actually tried to do it, it was horrible - and I very quickly learned a number of lessons that have changed my world view for good (as I said on Twitter, the whole experience was both 'unsavoury and unproductive'). It was then I took an active decision never again to try pitching at briefings, which went down the same plug hole as writing follow-up emails happening to remark about services we offered. Since then, I have broadened my understanding still further of what it means to be an analyst - as a result, I have thrown away all such preconceptions and learned behaviours, and decided for myself what role I want to occupy in this, somewhat dubious corner of the IT industry. I'm not alone in this: Freeform Dynamics was set up on the premise that "there must be a better way" - and we have an internal mantra of "no pitching at briefings"; note however, if someone asks what services we offer, we will tell them.
Nobody's perfect - neither do I think we should be striving for absolute perfection. However, it should go without saying that all industry analyst firms should operate ethically and transparently. Equally, should analysts be making public the times that vendors have asked them to promote a product for cash? I'm not sure about this - because I think the whole industry has been on quite a sharp learning curve since 1999, for two reasons: first, that the dot-com bust derailed the gravy train in which such activities were (apparently) the norm, and second, that the collaborative nature of today's Web makes it very difficult for dodgy behaviour to remain "under the table". I'd suggest we give this interesting period a chance to play out, particularly as there remain some clear difficulties, even when individual analyst firms are operating to a high standard of ethics. Consider:
- a vendor asks an objective analyst whether the vendor can quote what the analyst has written on their blog, about the foolishness of organisations that have failed to implement a certain technological mechanism. The quote is independently made, but the vendor's intention is to sell the mechanism.
- a vendor's PR company pulls a finding out of an analyst study which demonstrates why a specific technology or area would be of wide benefit.
- a body of analysts who happen to agree, individually, that a new technological area is of interest succeed, as a body, in hyping up the area - we can see this happening right now in Software as a Service.
- an analyst is asked to speak at a conference about a certain topic, and does so, objectively and independently. However, the conference as a whole is sponsored by a vendor, and is geared around the markets that the vendor wishes to play in.
Phew - how do you steer through that kind of morass? Our answer, and fundamental belief at Freeform Dynamics, is in identifying the win-win between vendor and user of technology, but that's for another post. Meanwhile, let's relate this back to briefings. At Freeform we recognise that in this day and age, everything we do and say is exceedingly public and rarely forgotten. So, in briefings for example, we would not give a piece of advice that contradicts what is said in one of our reports - the latter are freely available, so the advice could easily be checked. In fact, we do the opposite - as we learn from research, we strive to improve our knowledge, through reports, blogs, presentations and face to face meetings. This requires an underlying level of trust about our knowledge goals and aspirations, which would only be undermined if we took the view that briefings (for example) were a sales opportunity. Consider: "Great conversation about virtualisation we had - yeah, let's keep talking - and I'll be sure to keep you reminded about our sponsored seminar series!" - it just doesn't work.
This transparency is at the heart of many of the new breed of analyst firms - ourselves, MWD, Redmonk, Disruptive Analysis to name but a few. It would be blooming difficult to be both transparent and unethical, and the good news is that we're quite happy not putting in the additional effort it would require. There's far too much else to be done.
Just registered for Vollee
2008-03-10
Apparently my phone passes the test, so a link is on its way. Second Life on mobile phones, in case you're wondering.
April 2008
This section contains posts from April 2008.
Microsoft Bloat, Green and the Vista opportunity
2008-04-04
Microsoft’s always going to have a hard time presenting a convincing green story for desktop computing. It’s not that the story itself is unsound: power-saving features are useful as far as they go, and Microsoft as a company is keen to be a good corporate citizen. The elephant in the room, however, may be summed up in a single, horrible word – bloat.
Microsoft’s story has been a fascinating one, one of the great success stories of the IT industry. There have been several key bets made along the way, which Messrs Gates and Ballmer have stuck to doggedly. This is not the place for a full précis of the Microsoft story, but it’s worth highlighting one of the bets: Moore’s Law, the principle (to paraphrase) that processor capabilities would continue to double ad infinitum.
In practice, this has been characterised by the long-standing truth well known by anyone who has spent the past couple of decades in the industry: that if you want to take advantage of the latest Microsoft software, you’ll have to upgrade your machine. The conversation has repeated with the same regularity as Moore’s Law itself – the bemoaning of how slow everything is running, and the wry nod from those who have seen it before.
Of course, this self-fulfilling prophecy has been of huge benefit to both Microsoft and its hardware partners – companies such as Intel. I very much doubt whether the Wintel alliance was deliberately stuffing software into the operating system just in order to shift more processor units, but one thing’s for sure – neither side was calling ‘stop’. We have also lived through the office bloatware wars, where Microsoft, Lotus and WordPerfect duked it out to see who could out-bloat the competition. (Microsoft won, as we all know)
The attitude throughout from Microsoft – and I know this very well, having asked them on various occasions – has been, “If you want to take advantage of the latest innovations, you’ll need to use the latest technology.” I remember a very public debate I had with Martin Taylor, Microsoft’s ill-fated “Get the Facts” General Manager where he told me that most desktop users wanted far more than just email and word processing. It wasn’t true then, and it isn’t true now.
And so, to Green. While Microsoft might not have been underhand in promoting the “new and improved” – it’s a technology company, after all – neither can the company claim to be particularly green. Fundamental to this is the fact that the power consumption of a device is only a small percentage of its overall carbon footprint. Bottom line: replacing or upgrading a machine undermines any benefits that can be had from ‘new’ power saving features.
What can Microsoft do about it? Well, perhaps that operating system that has been derided as the most bloated of the lot – Windows Vista – could hold the key. At the heart of Windows Vista lies a perfectly sound operating system. There are two issues however – the first is in disk space taken up by installed, never to be used apps; and the second is in the memory requirements for unnecessary run-time services. It should not be beyond the ken of the bright sparks in Redmond to bring out their own tools to monitor what’s really necessary, and strip out anything that isn’t.
Sounds simple, doesn’t it? Trouble is, it goes right to the heart of Microsoft’s core philosophy, and fear – that people might stop buying its software if there is insufficient “new and improved” about it. That’s a fair worry – but it’s happening anyway, as we see Microsoft having to extend support (yet again, with hastily invented acronyms no less) for Windows XP. The same principles could be applied to Microsoft Office – which has already seen a usability overhaul with 2007, now, how about a performance boost? What additional benefits can be achieved offloading tasks to Windows Live services? Etc, etc, the list goes on.
It’s a changing world we are in. While Moore’s Law may continue to apply, many organisations are finding they have more than enough processor power on their desktops to do their day to day work. If Microsoft is really serious about greening the desktop, it has an opportunity to use its position to drive some fundamental changes. The question is, does it have the strength of character to do so? The alternative may be business as usual for Microsoft, but it certainly won’t be green.
May 2008
This section contains posts from May 2008.
Farewell, Sean Body
2008-05-08
I have one of Sean's books beside me, 'Long Time Gone', the autobiography of David Crosby. It's funny - when he lent it to me it was to give me an idea of what a really good music biography could be like, one which stood out from the usual album-tour-album pack. As he was my first publisher, I thought they might all be like that - helping new authors on, spotting non-mainstream potential and working closely to ensure everything could be as good as possible.
Having been around the block a few times since, I know that Sean was pretty unique in his capacity as caring publisher/editor - a hark back to a different era, in some ways. Perhaps because his first driver wasn't commercial (though he made an economic success of Helter Skelter Publishing, to be sure), he cared mostly about getting the good stories out there. These days, as I have all-too-often been informed, this heart has largely gone out of the publishing industry: too much of it is about achieving the quick peak of sales, getting the TV promotional slots, benefiting from the craze of celebrity that seems to pervade every aspect of modern life.
Most of all, Sean was prepared to give something a shot. Not least, me: he took a bet on whether I could write about Marillion, and it was he who convinced me to write about Rush ("What's that - difficult second book syndrome?"). And of course, when Mike Oldfield called Helter Skelter and asked whether Sean knew anyone suitable to help him write his autobiography, wonderfully Sean put me in the frame - what a shame that, due partially to the onset of his leukemia, he never got to publish what he saw as a breakthrough opportunity.
Sean was a meticulous editor - it is only in hindsight that one can see his attention was already starting to waver, as we worked through the editorial process for Chemistry. Naturally gutted by the announcement of his illness in December 2005, he spent the two and a half years that followed going in and out of hospital, all the while trying to get himself back to work. Perhaps he should have canned it all and looked after himself, but again, hindsight is a wonderfully convenient tool. Throughout the whole process I remained convinced that he'd pull through - he was a fighter and a triathlete, and not the sort of person not to get his way. But he didn't, this time.
It was with rum pleasure that I saw Sean merited an obituary in the Guardian. He was one of those people who never asked for credit or fame, quietly looking to achieve his own goals. It was particularly sad that in the last period I found it difficult (though not impossible) to contact him: from the writer's perspective I didn't want to give him any extra hassle, though as a friend I wanted to be around. But still, I see from others that he had some lovely people around him, which ultimately, is all any of us could ever hope for.
I remember a conversation with Sean, from when I would occasionally pop in to the book shop off Charing Cross Road, or when we'd go over to the Jazz Cafe at Foyles. We were talking about all that modern technology, and how it meant you could work anywhere in the world. "I quite fancy just taking off to an island," he said, "I could run Helter Skelter from there, it's just a case of being able to communicate and exchange documents and PDFs." Sean, I see you sitting on a sun lounger sipping some gloriously colourful cocktail, overflowing with fruit and paraphernalia. And I raise my glass to you.
June 2008
This section contains posts from June 2008.
IIAR Analyst Survey and other musings
2008-06-02
Delighted to see the results of the IIAR survey giving Freeform Dynamics such a big mention. Chuffed also to see Redmonk, MWD and David Mitchell chart highly, good folks all. Unsurprised at Duncan's immediate, ill-considered riposte (anyone would think he hadn't been involved enough in the process...). Fascinated by Alan Pelz-Sharpe's unconnected but sound piece on the IIAR site, very good. Determined to blog more, here and elsewhere. Feeling generally pretty good.
Onward and upward.
Ben's Big Madagascan Adventure
2008-06-02
Don't they grow up fast - I don't know whether to be more stunned by the fact that (son) Ben is off to Madagascar on a Scouts trip or the fact he has to be 16 to do it. Only slightly less stunning is the amount of money he needs to raise so he can go - it's £2,500 all-in, and Ben has launched into fund raising with abandon. Car boot sales, quiz nights and sponsored walks abound.
If anyone's interested in sponsoring Ben as he takes part in a 26 mile walk around Cheltenham, or indeed otherwise donating toward his costs please contact me directly, leave a comment or indeed, click the button.

Thanks!
IP Address Management - a latent need, not a market bandwagon
2008-06-08
It always seems quite ironic to me when I read how industry analysts are accused of 'bigging up' vendor offerings, when I and my peers seem to spend so much of our time resetting the expectations of over-optimistic marketeers. Indeed, without such a position, we would offer a far less useful service - on occasion I have been positively surprised that certain companies have wanted to work with us at all, given the utter trouncing we have given their products or how they are taking them, like Beanstalk Jack and his cow, 'to market'. I should perhaps apologise (and I frequently do) for being so direct - we want you to get the best out of your technology, we really do, so we'd rather be straight with you.
As such, it can be quite a relief when something comes along that is so clearly, obviously useful to so many organisations. Like Internet Protocol (IP) address management, for example. I can't claim to know the whole space in technical detail, but here's the skinny from my perspective. It is a well-known fact that the number of devices that need an IP address to connect to the enterprise network, or indeed the Internet, has rapidly outstripped the original numbering standard of 32-bit addresses, which enables a potential four thousand million addressable devices. Such things as Network Address Translation (where a local router/address server allocates IP addresses on an as-needed basis using a local subnet, and then translates between local addresses and a reduced subset of externally-visible addresses) have helped reduce the burden somewhat; as of course has the arrival of IPv6, which extends the number of addressable devices to 2^128 (a very big number).
However, a remaining issue is how to manage said pool of addresses. These days the number of required devices has increased dramatically, notably with the arrival of Voice over IP (VoIP) handsets, which are replacing traditional, analogue telephones. From an address management perspective, the Dynamic Host Configuration Protocol (DHCP) is the standard for allocating addresses within specific subnets, with the Domain Name System (DNS) mapping names onto them - but some organisations are ending up with a large number of DHCP and DNS servers, which themselves have to be managed. The original protocols were never conceived to manage the address allocation, deallocation and reallocation process on such a scale - and don't facilitate the cataloguing of which address belongs to which department (Microsoft Excel is a more commonly used, but still inadequate, tool). Theoretically, organisations could of course allocate addresses statically, once and for all - but all it takes is an office move (requiring a number of devices to move from one subnet to another) and all hell breaks loose.
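To put some numbers on the scale of the problem, here's a minimal sketch using Python's standard `ipaddress` module - the address range and department names are invented for illustration, but the arithmetic is exactly the kind of bookkeeping an IP address management tool automates:

```python
import ipaddress

# The 32-bit IPv4 space really is about four thousand million addresses,
# while IPv6 extends this to 2^128
ipv4_space = 2 ** 32    # 4,294,967,296
ipv6_space = 2 ** 128

# Carving a corporate range into per-department subnets and tracking
# who owns what - trivial here, unmanageable by spreadsheet at scale
corporate = ipaddress.ip_network("10.0.0.0/16")
subnets = list(corporate.subnets(new_prefix=24))  # 256 subnets, 254 usable hosts each
allocations = {
    "finance": subnets[0],        # 10.0.0.0/24
    "voip_handsets": subnets[1],  # 10.0.1.0/24
}
print(allocations["voip_handsets"])
```

An office move then becomes a matter of updating the catalogue and reissuing leases, rather than hand-editing spreadsheets and hoping.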
So - IP addresses need managing, and existing mechanisms aren't cutting the mustard. This is the breach into which are stepping organisations like BlueCat Networks (who I have spoken to), and Alcatel-Lucent, BT-DiamondIP and Crypton Computers (who I haven't - but these chaps have) - essentially delivering management tools and distribution mechanisms that really can cope with such huge numbers of addresses and offer quite some respite to those managing the IP network. It is notable that, when I asked BlueCat whether I could speak to a customer, they jumped at the chance and before long I was speaking with Investor AB, a Swedish organisation.
On the call I learned little that was unexpected: yes, the problem existed and was real; yes, it was for the reasons I understood; and yes, the deployment of BlueCat's address management solution had been a great help. What's not to like, I said as we finished. And yet, I was left feeling a little puzzled - notably, wondering whether, by agreeing with the problem and solution, I was in some way implicated in yet another attempt to foist unnecessary technology on an unsuspecting public. Particularly in this case, where the solution itself resolves an indisputably technical problem.
But however we might like things to look, the problem does exist and so does the solution. Just as the invention of carpets required the subsequent creation of carpet cleaners, so can today's overstretched networks benefit from address management. This won't be a panacea for all ills - it never is, and it should go without saying that technology can never be more than a crutch to poor operational processes or bad managers. I could add a string of caveats at this point but I won't - rather, I will acknowledge the fact that most network managers do have their heads screwed on pretty well, and defer to their ability to decide whether this would be an appropriate technology for them.
"That's not a product, that's a business strategy"
2008-06-09
I can't remember who said that to me a couple of weeks ago, but it's one of my favourite phrases at the moment - it applies so well to so many things we're dealing with right now: SOA, Identity Management, Information Management, BPM. Give it a go, and see where it sticks.
Anyone know what this is about?
2008-06-09
You know, I'm flattered and all... but I presume this isn't me :-)
Master Process Management? Now, there's a thought.
2008-06-10
One of the most fun debates I have had in recent times was with a couple of execs from IBM and Cognos, and with a senior analyst from IDC, at last week's Information On Demand event in The Hague. The question was innocuous enough - "what do you think is the addressable market for Business Process Management?" "That's tricky," I replied, "as I don't believe that there exists a BPM market, as such."
There followed a great deal of debate, at the end of which I remained unconvinced that there was such a thing right now as a BPM market. That's not to say that BPM doesn't exist - far from it, it is an essential facet of many technical capabilities. However, it is exactly this ubiquity that makes it very difficult to define BPM as a market.
Where can we see BPM? The ability to capture business activities and use them as a template for service delivery exists in many technologies, as indeed it has done for some time. We have for example:
- Within content management and collaboration tools, BPM is becoming an accepted term for that element of the software that manages the flow of content across different business roles.
- Within enterprise application integration, there is a clear need to understand how applications and dataflows map onto business activities. Initial releases of Microsoft Biztalk, for example, came unstuck until they built in this element.
- Within software development, a logical extension of modelling business processes as part of requirements capture is to use the models to support solution delivery.
- Enterprise applications such as SAP have long been delivered through customisable workflows, which require a level of management from a business process perspective.
- Also, BPM works hand in hand with IT and business service management from an operational perspective, such that service delivery can be monitored and supported appropriately post-deployment.
BPM is clearly a highly valued element of IT. But can it really be considered a market - and if so, how should it be defined? Should it consist of the superset of all of the above, or just the BPM element, if indeed it can be separated out in any useful way? Or should it consist only of the "pure play" BPM vendors, those which have a heritage in one of the areas above but are positioning themselves by leading with BPM?
My debating stance, which happens to still match with my opinion, is, "none of the above." We discussed the comparison with the transport industry - throwing a car, a tank and a plane into a (figurative) bucket just because they all require an engine does not mean we can define the "engine" market. So, as with BPM, there isn't a coherent enough boundary to frame a market space.
The debate did not result in any shared epiphanies between the participants (though it's always nice when analysts from different firms agree). For me, the thought processes didn't stop there: a few nights' sleep allowed things to ferment, alongside all those other things I picked up at IoD, not least a slightly flummoxed acknowledgement about the value of Master Data Management. Absolutely nothing wrong with MDM per se, but I still find it quite surprising that it took the industry so long to work out it would be useful to have a single, shared, defining structure for structured information assets.
To wit: following other conversations, it occurred to me that the real pain with BPM remained how views of business activities could be shared across tools. IBM claims export/import capabilities between tools such as InfoSphere and Lotus, for example. But what is lacking is knowledge of which is the "master" view - an issue exacerbated when we consider how such information is distributed to (and worse, locked into) applications and software tools across the organisation.
Perhaps what we need, as with MDM, is Master Process Management - tools that enable representations of business activities to be catalogued independently of any application, and then translated between one application and another. The sign-off of a form in the content management system may also signal the acceptance of a new customer in the CRM system, but such information is stored in the heads of those using the tools, and delivered as a point-to-point linkage. How useful it would be (and I'm speaking from experience, having been involved in several such activities) to capture such information and relationships once, and use the "single view of business processes" to feed all of the above applications, if not more.
But let's be clear. Apart from a couple of obscure vendors (pipe up, if you're out there), such capabilities do not currently exist. There's no MPM Gartner Magic Quadrant or Forrester Wave, and even if there were, few if any vendors would appear at all. All the same, if we did have MPM tools, I have no doubt that plenty of end-user organisations out there would be much better off. And indeed, we would have an addressable market.
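Since such tools don't exist yet, here is a purely hypothetical sketch of the idea - a master catalogue of process steps held independently of any application, with translators feeding each application its own view. Every name in it is invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    triggers: list = field(default_factory=list)  # steps this one signals in other systems

@dataclass
class MasterProcess:
    """A single, application-independent 'master' view of a business process."""
    name: str
    steps: dict = field(default_factory=dict)

    def add_step(self, step: ProcessStep):
        self.steps[step.name] = step

# The form sign-off in the content management system signals acceptance
# of a new customer in the CRM - captured once, centrally, rather than
# living in users' heads as a point-to-point linkage
onboarding = MasterProcess("customer_onboarding")
onboarding.add_step(ProcessStep("form_signoff", triggers=["crm_accept_customer"]))
onboarding.add_step(ProcessStep("crm_accept_customer"))

def to_crm_events(process: MasterProcess) -> list:
    """Translate the master view into one application's native terms."""
    return [f"event:{t}" for step in process.steps.values() for t in step.triggers]

print(to_crm_events(onboarding))
```

The point being that each application's workflow becomes a derived view of the one master definition, rather than the other way round.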
IBM: SOA Far, SOA Good?
2008-06-13
It’s been a couple of months since I jumped on a plane back from IBM’s SOA Impact conference in Las Vegas. For those in the know, this conference is a renamed evolution of IBM’s WebSphere conference series, and for those less so, WebSphere is IBM’s brand associated with its enterprise scale application server software and assorted gubbins to support transaction management, asynchronous messaging and so on – the “platform” elements of corporate applications. The 5,000 attendees comprised a fair whack of WebSphere developers and architects, and suitably impressive numbers of CIOs and other IT executives.
Why the conference is called “SOA Impact” and not anything to do with WebSphere is of course as much to do with marketing as anything. IBM has been quite forthright in pushing its SOA message, and even the very name of the conference offers one such opportunity to put SOA into the spotlight. The million dollar question is, is it right for IBM to do so? Before arriving at the answer (which, for all you skim-readers is yes, absolutely), it is probably worth a bit of background in SOA.
In this hype-ridden industry, the real trends can often be hidden underneath the layers of hype. One such trend has been the nature of software applications. For the 20-odd years I have been working in this business as a developer, quality manager, IT director, consultant and (only latterly) industry analyst software has been moving inexorably towards a distributed model. In such a model, chunks of software (note the avoidance of buzzwords here) do what they do well, and communicate with other chunks of software as, when and however necessary.
This is a very familiar scenario to anybody that has been involved in IT over the past couple of decades. For years, organisations have been integrating their legacy systems, packaged applications, databases, rules engines, process control systems and so on using various forms of integration software. And it all works, after a fashion.
Not all of it is how we would like it to be, had we the opportunity to start afresh. And this is where the evolution comes in. All those years of deploying and integrating applications have resulted in a pretty good understanding of how things should best be done. The first, most important and blatantly obvious aspect is that things should be planned out in advance, as an architecture rather than piecemeal components. That’s not rocket science, it’s common sense.
The second aspect concerns how the software elements – the applications and packages, or other discrete units of functionality – should communicate. In this there is an element of rocket science, in that many different mechanisms have been tried over the years. The branch that has developed most strongly has its origins in “object oriented” software design techniques, which begat component based development. It doesn’t matter so much what these things are; more important is the fact that they have evolved over many years before revealing the fundamental truth at their heart: that the best way to consider the interfaces between the software elements is in terms of the services offered by each element.
And so, we have an architecture which is service oriented: SOA. Do you see what I did there? At the risk of sounding like a broken record, this is not some invention of software vendors’ marketing departments, but a consequence of how things have happened in the past.
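To make "interfaces in terms of services" concrete, here's a minimal sketch (the service and operation names are invented for the example): the consuming element depends only on a service contract, never on the system behind it.

```python
from abc import ABC, abstractmethod

class CustomerLookupService(ABC):
    """The service contract: all a consumer needs to know about this element."""
    @abstractmethod
    def lookup(self, customer_id: str) -> dict: ...

class CrmBackedLookup(CustomerLookupService):
    """One possible implementation - swappable without touching any consumer."""
    def __init__(self, records: dict):
        self._records = records

    def lookup(self, customer_id: str) -> dict:
        return self._records[customer_id]

# A consumer codes against the service, not the legacy system, package
# or database that happens to provide it today
service: CustomerLookupService = CrmBackedLookup({"c1": {"name": "Acme Ltd"}})
print(service.lookup("c1"))
```

Replace the CRM with a packaged application or a mainframe transaction tomorrow, and the consumer never notices - which is the whole architectural point.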
All of which has been seen as a bit of a conundrum in this fashion industry we call IT. One of the biggest issues IT vendors have had with SOA is that it is an approach, not a product: in other words, there’s nothing to sell. SOA has been confused with Web Services, and has suffered the indignity of being nothing more than a ploy to sell Enterprise Service Bus (ESB) software. But still, SOA is an inevitability – as sure as buildings need architecture, so does software.
Back at SOA Impact, then, IBM is indeed correct in promoting the architecture rather than any particular product line. However, IBM has possibly created a bit of a hurdle for itself. Were this any other industry, SOA would be taken as timeless wisdom and we would all be free to get on with more current things. But this is the industry of the new – over-hyped promises of something far better than what went before are what sell technology. Can IBM really afford to stick with SOA in the long term?
The answer is, I blooming hope so.
In marketing and PR departments and among some IT journalists, I have heard more than once the idea that SOA is in some way defunct and we need to move on (indeed, we saw this with SOA 2.0 a couple of years ago, which prompted my esteemed partner at MWD Neil Ward-Dutton to implore that we stopped the madness). Such a standpoint is doomed however, as we are dealing with evolution not revolution: while the detail of SOA may not yet be ironed out, the concept is as sound as anything honed over the decades can be.
It does leave IBM with a bit of a conundrum, however. The company currently has its work cut out, and plenty of services upside, from promoting the SOA message. However, at its heart SOA is not particularly sexy and indeed, the better received it is, the less sexy it becomes as it ends up as part of the fabric. That doesn’t leave marketing much to play with.
All the same, the worst thing IBM could do is look for an alternative to SOA. Build on it by all means – with the caveat that it is early days, so it is important not to outpace what is still a fledgling audience. Perhaps one day SOA will become so well accepted that we shall stop talking about it. In the meantime however, of all the concepts that have been touted by this industry, SOA deserves to remain in the spotlight for some time yet.
Presentations and events update
2008-06-25
I was recently asked for some examples of events I have spoken at, so for the record this is what I've participated in so far this year:
Taking back control of IT, Webinar, 28 February 2008 (video stream - registration required)
Improving business productivity through effective content management, Webinar, 4 March 2008 (video stream - registration required)
Governance in virtual worlds, Pisa, Italy, 13-14 March 2008 (slides)
Which is more Important – Compliance, Security or Operability? (Panel Chair) - Infosec Europe, London, UK, 22-24 April 2008 (podcast)
Progressive IT, Sourcing and Architecture, Microsoft Architect Insight Conference - Windsor, UK, 28-29 April 2008 (slides/video stream - requires Windows Media Player)
How to sell virtualisation (Panel Chair), Channel Expo, Birmingham, UK, 22 May 2008
IBM Optim Internal Data Threat event, London, UK, 29 May 2008 (slides)
If you need any more information please do get in touch.
July 2008
This section contains posts from July 2008.
Three's a crowd, so what's four?
2008-07-17
This must be desktop operating system geek heaven - but even as I say that I realise I'm missing out on a whole bunch of 'em. To the point, I have recently come into the possession of a MacBook Pro, which is running OSX 10.4. With that, I've got XP running in a (donated, thanks) VMware Fusion virtual machine - which runs like it's native. Meanwhile, on my old Samsung laptop I've gone for a dual boot with Ubuntu Hardy Heron on one partition, and (also donated, thanks too) Windows Vista on the other. What of Solaris or indeed OS/2, I hear you cry.
It's an interesting set-up. A key question is interoperability - which I define as, "Being able to do whatever I want on any platform, without seeing the joins." I think that's a bit different to the interoperability Microsoft keeps banging on about, which sometimes seems more about keeping the more evangelical chatterati at bay (incidentally, my suggestion was to ask the silent majority what they thought - I believe there's far less anti-Microsoft sentiment out there than some bloggers might imply). But the world of Mac interoperability is questionable - iTunes will only recognise iPods, for example. Is it a problem? I honestly don't know - the slickness that the fanboys love so much is a consequence of a tighter control over hardware, and no doubt software specs. Balancing such usability with interoperability is an issue we see in the large in corporate IT shops, and it is no coincidence that CIOs often talk in terms of "One throat to choke." Thinking out loud: would 'proprietary' be such a bad thing, if it just worked?
But I digress. Just one last thing to do is to re-install Lilo, then I'm done.
RSA Panel session confirmed
2008-07-17
Just got an email through from those nice folks at RSA Conference Europe. Here's the skinny:
Session Track: Business of Security
Session ID: BUS-207
Scheduled Date: Tuesday 28th October
Scheduled Time: 16:05 - 17:05 hrs
Session Title: Software and Security as a Service: the risks and the rewards
Session Classification: Strategic
Session Abstract: There is much buzz in the IT industry at present around Software as a Service (SaaS). As with any new trend in IT, there are a number of potential risks which need to be considered when looking at SaaS solutions – but things don’t stop there. At the same time, certain security services can also be delivered using the “as-a-service” model. This panel of security vendors and consultants considers both the risks and rewards of SaaS and security as a service, and delivers practical advice on what organizations should be thinking about today.
Moderator(s):
Jon Collins, Analyst, Freeform Dynamics
Panelist(s):
Gerhard Eschelbeck, CTO, Webroot
Eldar Tuvey, CEO, ScanSafe
David Stanley, MD EMEA, Proofpoint
Why I'm interested in Open Source
2008-07-17
Because it's having a distorting effect on the rest of the industry. I'm afraid I don't buy the argument (nor would I have to, but you get what I mean) that all software should be free, as Richard Stallman would so dearly like. Any more than I would agree that all music should be free, or indeed that my plumber should pop round tomorrow and fix the dripping bath tap. It's a laudable goal of course, as is world peace and the nirvanic state where everyone just gets on. But it's just not going to happen, because various elements of human nature - good and bad - won't let it.
To me, and unlike what the "try the latest distro" Linux User cover disk would suggest, open source is far more about commoditisation than diversification. I find it hard to believe that there is a place for new operating systems which try to compete on features - as long as we build systems around the Von Neumann architecture, the operating system constructs to support them have been around since 1969 (that's Unix, folks) and indeed before. I don't want to ignore z/OS on the mainframe - but let's remember it's precisely because the engineers of a few decades ago got so much right that they are so full of themselves now. Windows is also fine - it's an OS which cuts the mustard, both on the desktop and on the server. But half the reason I believe that Vista tripped up was that it did not offer anything sufficiently compelling to the majority, even if its security and manageability features far surpassed those of Windows XP. Didn't anybody tell Microsoft how hard it is to make a business case for security and manageability?
So, open source offers a commoditisation route: if something is algorithmically straightforward now, and it's a question of evolving it in line with the hardware, then open source offers the answer. No point in paying for something that is already done. There are several advantages: the source is openly readable, which makes it potentially more future-safe than anything proprietary. Development continues in an evolutionary manner, and is funded and resourced across the community, which also provides a proactive support base. It's a model which gives us the LAMP stack - that's Linux, Apache, MySQL and whichever programming language you can think of that starts with P. And there is money to be made - but out of services, not so much the software licensing.
And here's the kicker. When it was realised that the real money was to be made out of services, that's what had the biggest impact on the rest of the industry. Red Hat started to rake it in due to the fact that corporations wanted to know they had the same levels of support as with their proprietary application base - a fact which triggered Microsoft's ill-advised "Get the Facts" campaign. IBM started to recognise the role of F/OSS (free and open source software) as on-ramps onto what were at the time more enterprise-ready platforms - Linux to AIX, MySQL to DB2 and so on. And Oracle just started to buy everybody it could get away with, as it always does.
Meanwhile we have Sun, which came surprisingly late to the party. Sun's going through an open source epiphany at the moment, which is just dandy - though I've been spending a lot of time thinking about just how successful they will be. Sun's heritage with software has been dodgy to say the least - it had a good start with the Catalyst catalogue and a pretty healthy software channel back in the Eighties, but that was in the days when the hardware manufacturers called the shots. Things started going a bit ropey in the early Nineties, when a number of big software plays (developer tools and network management) started to wither on the vine. Java came and should have been Sun's big success, but the Internet came next and took all the attention away. While Sun was being the dot in dot-com, it forgot to be anything else.
It could be argued, quite successfully I am sure (though I will not try to here), that Sun has turned to Open Source for two reasons. First, it had no other choice, as it was no longer seen as a credible player in the world of proprietary software and it had burned its bridges with the flat-rate licensing deals brought in earlier this millennium. Second, one place Sun does have a growing reputation among its own customers is in services. Today's open source models are all about building a services revenue stream, and I wish Sun success in that.
One way or another, then, Sun, IBM and indeed Oracle have embraced open source and integrated it into their business models. There's one last area, of course, in which open source can be used, and that's as a competitive weapon - the only major company yet to embrace open source in the same way is Microsoft, which still prefers to approach open source from the point of view of interoperability rather than as an integral part of its software platform. Personally I think this is a mistake, but - let's be frank - what the bloody hell do I know. Microsoft's ultimate responsibility is to maximise its shareholder value, just like the rest of the majors. I have no doubt that it has done the maths, just as the others will have done.
Which comes back to the first point. If all software should be free then that's great, but I don't see IBM, SAP, Oracle or HP open sourcing any of their core moneymaking platforms. With good reason, from their perspective - it's not in their commercial interests to do so. However, it is in the commercial interests of some players to knock the competition for being proprietary, even while being quite happy to retain a significant proportion of the proprietary software market for themselves. It's a dangerous strategy - ask any of the bigger companies how they see the impact of open source on their own software base in a few years' time, and they'd be hard pushed to give a straight answer. Fortunately for them, this industry has a very short memory - nobody will notice when they change their minds.
It's all good fun, isn't it? Perhaps that's the biggest reason why I'm interested in open source: it's not the software itself, though that appeals to my geeky side; nor is it particularly the community-driven development process, though that is a phenomenon in itself and worthy of attention. Nah - it's watching the big guys duke it out in what is in fact a global game of paintball, with all the ducking and diving, short-lived alliances and backstabbings, and where the nature of the code matters little more than the colour of the paint.
Totally non work related - running for charity
2008-07-19
For anyone who might be interested (and for those who aren't :-) ) I'm running a half marathon on October 12th. It's not my first - I did that a couple of months ago, baulking at the idea of sponsorship in case I didn't finish. Which I did. So, this one is the Royal Parks Run in London. I've decided to run for UNICEF - what a fine bunch of people, and I quote, "working for children and their rights." I would very much appreciate any sponsorship, as of course would they - I want to raise a thousand quid, so the way I see it, that's only 200 kindly souls donating a fiver. How hard can that be?
So, if you do feel like splashing out five pounds (but of course, don't feel limited by that!), you can donate here. Thanks for all your support, and for reading!
Running round San Jose
2008-07-24
Getting ready for the Royal Parks Run... things went a bit dodgy at the end!
October 2008
This section contains posts from October 2008.
Royal Parks Run - done!
2008-10-20
Well! The Royal Parks Half turned out to be more fun than expected - it was a beautiful day and a lovely opportunity to see bits of London in a whole new way (through sweat-scrunched eyes mainly :-) ). My time was a gentlemanly 2 hours 12 minutes, which roughly equates to 10-minute miles; I was chuffed to bits with that.
Thanks very very much to all those who sponsored me (and anyone who didn't, there's still time!) - I have raised just over £500 for Unicef, which is fantastic!
You can see (and even buy, though that would be wrong) pictures here. No sleep till Bath in March!
Dome run
2008-10-28
Lovely - three miles and dome views with sunrise coulis...
Testing out the podcast thang
2008-10-28
Here's a podcast to go with this article. It's a bit quiet. Spot the infinite loop.
Social existentialism, and the Yourdon-Fry effect
2008-10-30
"I'm no stranger to celebrity myself," said the man. And in a way it was true - he had spent many years working closely with such types. Who was he? It doesn't really matter - he could have been a journalist, a PR guy, a lawyer, a waiter, a taxi driver or indeed, even a biographer. Each role provides no more than a context within which people can relate.
And then, something like social networking comes along and throws any such context out of the window. I confess - and now it's my turn - that while I am (indeed) "no stranger to celebrity", I still did get a certain buzz when I saw that Ed Yourdon was following me on the microblogging tool, Twitter. This guy is as close to a celebrity as a software developer can get - back in the Seventies he was among the luminaries of the time, talking about modularity, cohesion and coupling alongside Barry Boehm and Fred Moore, Tom De Marco and Larry Constantine. These fellows were way before my time - when I arrived on the scene in 1987 they had already entered into the collective consciousness, for all I knew having left the empty husks of their physical being behind them.
Which begged the question: when I 'followed' him, in Twitter parlance, was I up to something a little less salubrious than just wanting to 'join the conversation'? I'm now sufficiently advanced in years to have moved up the stack a little, and I have on extremely rare occasions seen signs of what it might be like to have a following. But - when engaging with the great Mr Ed, was I incorrect to have felt a little rush that maybe such a great man (still great, I should add, despite having been wrong on Y2K) might have noticed me touching the hem of his virtual coat?
From my own music experiences and others', I'm pretty comfortable with the general idea of being a 'fan'. Despite it admittedly being short for 'fanatic', some of the best artists and producers in the world have confessed (if that's the right word) to holding certain of their peers in awe - Alex Lifeson of Rush, for example, remarked he found it difficult to know what to say when he actually met his guitar hero, Jimmy Page. A little aspiration goes a long way in this short life we all have, and no doubt many a little league sportsman only got to the international stage through wanting to be like their idol. It's a human trait, which we all deal with admirably, for the most part.
The downside is two-fold. Before pondering too carefully the grammatical accuracy of the last sentence (not to mention this one), let me spell it out: first, the nature of modern celebrity can create idolatry where none should exist, through a complete absence of merit. Rare is the human who is immune to participating in such a thing: we watch Jade Goody as she succeeds and fails, all the while commenting how she shouldn't have been filmed in the first place. What hypocrites we are. Like watching a poorly concocted film which is designed to pull on the heartstrings but which still makes you cry, we are all victims of our own humanity, as malleable and ductile as a rare metal when it comes to being influenced by the press.
The second difficulty is caused by our penchant for hierarchy. "Why can't we all just get along," says the pacifist - but even if we could on the surface, the very nature of our aspirationally conditioned being lies just below. We could blame evolutionary drive and the survival of the fittest; or the hang-overs of feudal society; or indeed the dark side of our meritocratic society. The best rise to the top, that's the theory, and we all aspire to be there as well. Sometimes people even do reach such heady heights through their financial acumen, their innate skill and artistry, or their abilities as an orator. Other times, they get there through sheer determination and hard work, while others get lucky, or simply know where to stick the knife in. It was ever thus.
Which all makes the world-is-flat, peer-to-peer nature of social networking somewhat confusing. Strip back the layers of course, and for many it is not in the least about being social: that loose category of "famous people" are not, in general, trying to join conversations or make new friends with the masses. No - online tools are a marketing tool, and a very good one to boot. Communities can indeed be built, and harnessed to great effect - as proven by a number of artists (including Marillion, which is well known to be at the vanguard).
What's perhaps very interesting about it all is that, while there is an obvious imbalance between those forming communities and those participating in them, each side does have to give a bit more of themselves. For an artist, a broadcaster or an industry guru wanting to engage with a community for reasons altruistic or otherwise, this does require a certain level of two-way interaction. For some, this comes easier than others - without knowing the chap directly, I suspect from his previous form that this is the case for Ed Yourdon, who does appear to be engaging because he really wants to.
As too does one Stephen Fry - our very own, quintessential conundrum of an Englishman, himself checking all the boxes of what it means to work as a polymathic, and no doubt workaholic artist within this meritocracy. He's also (if a polymath can be 'also' anything) a dyed-in-the-wool technofreak, which means he has adopted such social networking technologies as blogging, podcasting and Twitter with gusto. But where does this leave the assembled masses, who in the past may have been sated by a quick dose of Fry on a Tuesday evening? It's a tough one - today, anyone that has heard of the man can not only link to him in some virtual way, but also send him a direct message in the knowledge that it may, not many instants later, turn up on his ever-present iPhone (and indeed, his Web page).
It's sorely tempting to see this 'opportunity to interact' as a real opportunity to interact, and I confess to have succumbed on a couple of occasions. There's the rub: I'm probably as human as the next guy, and indeed, perhaps more so. Even as I write this, I feel just a little of the uncertainty a certain, hypothetical cat may have felt, sitting in a certain, sealed box next to a certain, ill-placed vial of prussic acid. Thus spake Schroedinger: even the act of measuring, or in this case commenting, can have an impact on the situation itself. I rightly question my motives - am I just writing this very piece in the hope Messrs. Yourdon and Fry might read it, decide I'm an all-round good chap and immediately engage in correspondence? Do I think I might be raising my standing among those who agree with me, or am I indeed looking to chide those who are a little too blatant in their social activity? Is this really just a resentful backlash against those who have surfed the social waves to become famous in their own right, or am I deviously looking to achieve the same for myself?
Here's the truth of the matter: if this ramble is to illustrate anything, it is the inherent imbalances that must exist in this new set of contexts, in which most of us will only ever be a bit player. As it is, the imbalances are legion. There's not just the fact that those standing on top of their own meritocratic mountains will find it difficult to take in all the many messages they receive from those further down, or indeed elsewhere in the range. For some it has proved impossible to respond to everything - as both Ringo Starr and Neal Stephenson have pointed out. A second point however (also made by Mr Stephenson) is that some people are generally far too busy actually doing the things that make one popular to reap all the rewards of said popularity. Success is perhaps most of all a combination of talent and hard work: however much one has of the former, all can fail if there is an absence of the latter.
Perhaps in a few decades' time, we shall look back in hindsight and make sense of all that is happening now. Realistically, what we are experiencing is only the tip of the iceberg - while some of us are already videoing and posting our every move, all but a small fraction of the world's population is still going about its business without recourse to any such tools. Warholian fame still requires TV - but the highly integrated, digital age looks set to supersede such old-fashioned concepts, as our Bebo-centred youth culture is already illustrating. We are likely to see the online world in the same way our forebears saw metalled road surfaces - the elder generations may have used them to their advantage, but the youth have never known any different. In the short term, we can all be thoroughly indulgent, testing the waters and pushing the boundaries of social networking; no doubt as things evolve, so will a number of new contexts, within which we will all learn once again to relate.
November 2008
This section contains posts from November 2008.
A bit of a re-org
2008-11-01
I’ve reorganised my blogs, and this will be the last post you see on this blog. For day-job analysis I shall be posting on Freeform Comment. Meanwhile, you will find background reference to all things tech and the skinny on things analytic at IT Industry Outsider - I've migrated all posts and comments from here to there. Over at joncollins.net will be my non-technical alter-ego.
Hope that makes sense!
That's that done, then
2008-11-01
I've reorganised my blogs. Here you will find background reference to all things tech and the skinny on things analytic. For day-job analysis I shall be posting on Freeform Comment. Over at joncollins.net will be my non-technical alter-ego. Hope that makes sense!
Closing in
2008-11-01
Just testing... moBlog. Weather cold, fire burning.
Lack of problem solving at schools? I think not.
2008-11-05
Here's the thing. On my travels I have spent a reasonable amount of time with academics and industry leaders, in the course of which, on various occasions, they have bemoaned certain faults in our education system. It has been worrying, to say the least, to consider that the kids coming out of schools today lack the skills they need to play a full part in our society in general, and to act in technological and scientific roles in particular. Being a parent as well, I have felt somewhat implicated - my kids are in their teens, which puts them at the centre of the debate.
Quite deliberately, then, I have been asking their teachers and other parties what is the truth of the matter. To work back from the answer: industry and academia may have had a point. But there was never a golden age of education either - as one person put it, "I don't think the education system of the past was ever designed in any way other than to get people through exams." It may be - and this is an area I haven't yet fully investigated, so consider this uncorroborated - that it was the elitism of universities in the past that served to minimise the impact of what was an education system for the academic few. Who knows.
But to bring things up to date, you will notice I said, "may have had a point." From my dealings with schools as a parent, and more recently as a governor, I have seen a very different picture. The teachers I speak to in general are using a language and teaching style which is totally at odds with the idea that school is exclusively about passing exams - they describe different methods of learning, the importance of investigation, looking for alternative solutions and so on. Furthermore, they do so across the curriculum - from English and Design, to Maths and Geography.
So, where's the truth? When I put the question, bluntly, "Are we failing our children, and potentially our society?", I have been informed that such practices are really quite recent. Young teachers - those only two years into a career, for example - confess that the way they teach now is very different to how things were even when they were at school. All the same, however, the way they go about their business does seem to be infused with what can only be described as the teaching of problem-solving skills.
There are still challenges. Yes - and there remains unanimous agreement on this - our schools are still struggling when it comes to serving up budding scientists. Elsewhere, the side effect of all this educational positivity seems to be a veritable flood of jargon: a consequence, perhaps, of quite rapid and probably workshop-driven change is that for the outsider, it can be difficult to engage without first knowing what is being talked about. This may seem a trivial point but it is important - our school system may be undermining its own credibility with the average parent, or indeed industrialist, by cloaking itself in terminology.
All the same, there is positive news to be had. Let nobody be unconvinced that there is a struggle taking place to improve the educational lot of our kids - but it is one that the new educational approaches do look in danger of winning. And indeed, the recent demise of KS3 SATs (there's some jargon for you) may have unfortunate side-effects in the short term, but may well act as a further catalyst for progress. We may be yet to see the benefits of what's already been achieved, but the future looks bright.
Today, I shall mostly be...
2008-11-06
... At Zycko's partner event, (In)Spire. It's an interesting conference for an analyst to be at, because it represents where the rubber hits the road for many technologies - in the channel. Bottom line: it's all very well to have a great product or a world-beating position, but often it's down to a wide variety of other companies to deliver on the promise. I'm currently sitting in a session from Neil M's old company Zeus, with an audience made up of value-added resellers working with organisations of various sizes.
Bottom line: it's important to give people something they can really work with, and not just a good story. Sounds obvious, but often forgotten.
And don't ask me why the (In) is (in) brackets.
Leaving Las Vegas... and not for the last time
2008-11-19
I'm sure there is a good bit of Las Vegas. As I sit here at Newark Airport, half way home, I am racking my brains to think what it might be. The past couple of days I've had the delight to attend what turned out to be a rather enjoyable conference (really), and I've spoken with some great people and had a pretty good time. If only I could say that Las Vegas had anything to do with the more pleasurable parts of the trip, but I just can't.
It took me a while to realise what was wrong; then one morning, when I was out for a half hour jog, it dawned on me. I ran past a bunch of twenty-somethings standing outside a casino, and one of them laughed, heartily and out loud. I suddenly thought what a rare phenomenon that was - there are beaming smiles up on the hoardings, and lots of faux-bonhomie around the tables, but genuine, friendly laughter is a rarity.
What is it about that place, that man-made rat's nest of gaudy and overblown structures, that saps the soul? I genuinely don't know. But there is something unnatural about the whole place. One only has to traverse the length of the canal system in the Venetian, a place where it's never night and never day, to get an idea of this. When I first went, I decided that the canal system would give a pretty fair impression of what US hell would look like - a nothingness that somehow resembles familiar places and ideas, and which never, ever changes. I have since been told that the inside of the Excalibur would be the UK hell, but I haven't yet had that pleasure.
Oh boy, I can't wait to go back.
On Jacuzzis and Peer Programming
2008-11-19
"Wow, when I was a kid we had to fart in the bath," said Eddie Murphy's character Billy Ray Valentine, when told he would have the use of a jacuzzi. They don't make films like that anymore, do they? Well, actually they do but a little nostalgia never hurt anyone.
So it might be with rose tinted spectacles that I remember my first programming job. Straight off the best green screens and teletypes that University could offer, I was presented with all the complexity and delight of programming on an Apollo workstation, running a weird hybrid of UNIX and Apollo Aegis. Or at least, we were - given the expense of such devices, I had to share the workstation with another rookie, Mike.
Mike and I used to work together on various things - we were trained together, set tasks together, solved problems together, wrote and debugged code together. One slight issue was that Mike and I were just a little competitive. We played off each other, pushing ourselves just a little in the process. I would like to say we knew we were doing it, but we didn't - after all, we'd never had any other work experience than this.
It was only after a couple of months that we started to get what we were doing. I don't want to over-blow this, we were no supermen but we certainly picked up a reputation of getting stuff done. I couldn't even say that we were better than anyone - it just so happened that we were working in the kind of place that Tom De Marco might cite as an ideal working environment for developers. So we certainly didn't stand out from the crowd; on the contrary we fitted right in.
Which brings me, in a rather round about way, to the shared experience of jacuzzis and Billy Ray's nostalgia. These days we refer to peer programming as a somewhat new phenomenon - at least to the uninitiated. There are two comments I can make: first that it is not a new phenomenon, and second that speaking from experience, it works.
There are a lot of threads to tie up here. One, as already mentioned, is the danger of being over-nostalgic. But like many agile practices, peer programming is founded on common sense. While it will not be appropriate for every situation, neither should it be rejected just because it is being pimped as something new.
December 2008
This section contains posts from December 2008.
News from the world of work
2008-12-01
This should go some way towards explaining why I've been so busy recently...
Interesting times, indeed. But exciting!
Thinking Outside the Bowl - Storage Expo 2005
2008-12-11
Hello and welcome to the second day of Storage Expo. Was anyone here today that was also here yesterday? Yes? As you know, industry analysts need to categorise things, and that puts you in the category of people who need to get out more. Come on, this is storage!
But seriously (OK, I was being serious…), I’m not sure it’s particularly my place to stand up here and preach. The industry analyst's job is to understand what is going on in technology, and to help end user companies make informed decisions about how to use such technologies. I’ll be absolutely frank with you – I don’t believe that we’ve done that good a job of this. We used to – when it was down to things like databases or office automation packages or hard disk arrays, and it was quite straightforward to compare and contrast the features, or determine who was buying what.
Even as technology started to get more complicated, we analysts continued to provide a useful service – to cut through the complexity and explain things as they really were. A few years ago however we started getting a little too wowed by the marketing – everyone should have enterprise applications, we said, and all the enterprises bought them. Wow, that was clever, we thought, so we did it all over again with other things, like SANs. OK, it wasn't just us saying it; then the Internet arrived and all hell broke loose. "E-business or no business" – that was the mantra. Sounded great, but the trouble was it was just plain wrong. Sorry. There was a technological storm, where wave after wave of new concepts, gadgets, types of software, appliances and so on struck the shores of end user organisations, which were trying to take it all on while coping with the frustrations that resulted.
Rather than trying to help people cope, or attempting to slow things down in any way, the industry analyst positively encouraged this situation. Marketing models were developed for IT vendors to make the most of the situation – nothing wrong with that in principle, but let's face it, the crossing the chasm model was designed specifically to help IT vendor companies sell more stuff. End users didn't help – I'll never forget a CIO showing me a copy of “Management Today”, open at a one-page article about the latest and greatest technology, with a circle of highlighter pen and the words “When can we get one of these?” I think it was CRM, but I'm not too sure.
The crossing the chasm model – does everyone know what it is? Starts here with the early adopters, before dropping into the “chasm” and emerging as a mainstream technology. Over here … we have the laggards.
Trouble is – and I hate to break it to you guys – this curve is open to abuse. I know, I know, that’s terrible – but it’s true. Marketing departments the world over have not only been trying to second guess the curve but… would you credit this… they’ve been hyping up technologies to force them into the mainstream!
Impossible to believe, I know. Until suddenly, quite suddenly, a couple of years ago the end user community stopped listening. I think the last technology trying to get through the door as it slammed was location-based services, and weren’t they going to be great? The people I feel the most sorry for are the pizza companies – we were going to be able to find the nearest pizza place, anywhere in the world, their sales would have gone through the roof…
Since then we've had the bubble bursting, the downturn (never call it a recession) in our industry, the rapid deflation of technology share prices and frantic scrambling for some companies to keep in business. Those enterprise applications haven't turned mediocre businesses into global success stories, we’ve discovered what we knew in our hearts all along – technology alone can’t solve the problem. While technology can be part of the solution, there’s no such thing as a “technology solution”.
With all of this in mind, where are we now, now that the dust has settled? While there’s certainly a lot more positivity in the industry, the dynamic is different. We’re seeing a lot of consolidation work, many companies are re-organising their server and storage infrastructures and using them as a common basis for their applications. We’re seeing companies looking to extend their existing applications, for example to add collaboration facilities or better integrate them together. We’re seeing certain types of organisation in Europe building compliance features into their architectures – notably in the financial sector, or companies with strong linkages to the US. Indeed, the majority of today’s activity in larger companies involves improving what is already there, either by replacing it or extending it.
What does all this mean? We can see that the bulk of current IT activity is architectural in nature. Companies largely have the applications and services they need, and now they just want them to work better: surely, this is not too much to ask! Trouble is, current buying behaviour does not match any particular chasm model – as companies are trying to improve their lot rather than attempt some new way of doing things, it could be said that the chasm is already crossed for both infrastructure and for major applications, which let’s face it, together make up the majority of most organisation’s IT. We’re 80% there – now we’re just trying to work out the 20% to get things working together as well as possible.
Indeed, there are plenty of technologies that have well and truly crossed the chasm. ASPs, for example – that's application service providers – were another casualty of the bubble bursting. They've been quietly getting on with the job however, and are now well and truly mainstream. Consider this picture for example – can anyone tell me what it is? Now, if anyone can tell me where it is, then that deserves a prize! This is World of Warcraft, a massively multiplayer online role playing game, or MMORPG. You may think that showing this is no more than a thinly veiled attempt by me to turn a computer game into a taxable expense, and you'd be… no, I wouldn't be so shallow. (nod) (shake) (nod) etc
World of Warcraft is also an ASP. Not only is it hugely popular, but it’s also virtually impossible to pirate, as it follows a subscription based model. And it works – extremely well. Many of the reasons cited for ASP failure, which were largely around service quality, have had to be solved for this to become a reality. Not only is it a growing phenomenon, it is proof if we need it that the chasm has already been crossed for infrastructure. The role playing genre may well be inside a tornado of its own, but that’s another story! Did you know there are now currency markets for virtual gold?
But we digress. While we're on the subject of virtual things, this is possibly an appropriate moment to mention storage virtualisation. What do we mean by storage virtualisation? Essentially, virtualisation offers a layer, through software or through a hardware appliance, so that we can manage and provision our storage resources as though they were one big, “virtual” pool. Virtualisation can exist in a number of places in the infrastructure, and I'm not going to go into the technical detail here. There is a hall full of people out there who have been girding their loins to talk to you about exactly that for a number of weeks now. What I will tell you – and this will probably spoil their pitch – is that virtualisation is not a product, not as such. There may be vendor companies that sell it, but no end user company will ever use it in isolation.
Of course, I don’t really need to tell you this, as you already know. However, someone does need to tell some of the vendors. Virtualisation cannot cross the chasm because it is an architectural construct. We already employ virtualisation techniques in a variety of ways on computers and around the network, we have virtual memory, virtual LANs and so on. The ability to virtualise storage is no more or less than the storage vendors catching up, and implementing open mechanisms so that their storage hardware can be accessed transparently. Rather than greeting virtualisation with delight, many end user companies are reacting with relief – “at last, you’re giving is what we needed all along.”
These architectural mechanisms cannot cross chasms; instead, they are steps along the way to a better managed, more efficient infrastructure. I call this the “Arathi” model, if anyone’s interested – you heard it here first – as you can see, we are on the way up the mountain, but there is no single peak. If we were honest with ourselves, we would see technologies such as virtualisation as way points on our journey towards better storage, but instead we insist on hyping them up, treating them as goals in themselves. As a result, they appear as a series of false summits. Virtualisation is one of these – it may have certain benefits, but it is a way point, not the peak. Nothing is more exhausting or debilitating to a climber than a false summit.
OK, that’s virtualisation. What of ILM in all of this? Information Lifecycle Management – ILM – another term that is bandied about. What’s that all about then? Let’s get one thing straight. You can’t buy ILM. It’s absolutely not a product; you’ll never see it in a catalogue or printed on a CD-ROM. You’ll never install it, or patch it, or debug it. So what exactly is it?
Let me put it this way. If virtualisation is a false summit, then for storage, ILM is the mountain. It’s not just about hierarchical storage management and tiered storage, or efficient archiving, or full-index search mechanisms – each of these is just a facet of a highly efficient, well-managed storage architecture. You can’t have one without the other – the high efficiency and the good management.
While so far we have been looking at mountains, chasms and all that, from an end user company perspective I believe the model should look more like waves. This, so you know, is a series of waves hitting the Great Barrier Reef.
There are essentially three distinct phases that lead us towards the goals of ILM; none can happen in isolation, and they need to be followed in order. Or rather, they will be followed in order, whatever we – or our IT suppliers – would like to think, for the simple reason that the dependencies between them go in one direction only.
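The one-way dependency argument can be expressed almost mechanically: if each phase depends only on the one before it, there is exactly one valid ordering. A toy sketch, using the phase names from the talk:

```python
# Toy sketch: the three phases and their one-directional dependencies.
# Because each phase depends only on its predecessor, resolving the
# dependencies yields exactly one possible order - you cannot do them
# backwards, whatever anyone would like to think.

deps = {
    "infrastructure consolidation": [],
    "resource management": ["infrastructure consolidation"],
    "service management": ["resource management"],
}

def ordering(deps):
    done, order = set(), []
    while len(order) < len(deps):
        for phase, needs in deps.items():
            # A phase can only start once everything it needs is done.
            if phase not in done and all(n in done for n in needs):
                done.add(phase)
                order.append(phase)
    return order

print(ordering(deps))
# ['infrastructure consolidation', 'resource management', 'service management']
```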
What are these phases? Here we go. We have infrastructure consolidation, followed by resource management, and then service management. Infrastructure consolidation comes first because, after all, you need something to manage. Of course it could be argued that one should just get on and manage the existing, convoluted legacy environments, but you seem to have made the decision despite what us “experts” might say – you want to consolidate first, thank you very much.
Second comes resource management, or making the most of what you’ve got. Virtualisation fits here – it’s software designed to make hardware use more efficient – but hang on, haven’t we a name for that already? We have – it’s called “the operating system”. That’s all we’re doing here – reinventing the wheel. Sure, it’s a much bigger, globally distributed wheel, but it’s a wheel nonetheless. At the moment, all this clever technology for virtualisation, for hardware orchestration, for intelligent archiving and so on exists as isolated packages, but they’re currently being integrated together. In five years’ time, it’s not just that you won’t recognise the packages – you won’t even see them!
Thirdly we have service management, which depends on the ability to manage resources. This is about delivering a service to the business – in other words, when real end users need something, they get it, and they are held accountable for it. This is about resource allocation, SLA management and all those terms; essentially, it is outward facing, towards the business applications and their users.
Another way of looking at this is shown here. Did you see what I did there? From the business perspective, first, we need to get the basic hardware platform in place, then we need to deliver on how we operate these platforms, for our own benefit as IT people running an efficient operation – we may be able to automate certain aspects of these processes, using tools such as virtualisation software. Only then are we really going to be able to deliver an optimal service to our end users – the business – the people. In storage terms, achievement of all three equates to achievement of something resembling ILM, by any definition of the term. Should it include content management? Of course! Indeed, ILM without content management isn’t really worth having, as it lacks the linkages into business information flows, which, let’s face it, are pretty fundamental. You can’t know what to do with your data unless you know what it’s for.
OK so far? Right, what I wanted to do was pick up on some proof points for what I’m saying, from our research. There is a stack of studies I could have drawn on. I would stress, if you have any questions at all or if you want to debate anything, please do get in touch – my email is at the end of this presentation. I should also mention that you can sign up for our reports for free, no obligation – just mail me or check the Quocirca web site if you’re interested.
Let’s put some meat on the bones, then. First, infrastructure consolidation: as we mentioned before, this is well and truly underway, as shown here by some research we conducted over the summer for EMC. Here it is – you can pick up a copy from the EMC stand. We conducted another survey for Oracle recently, on grid and virtualisation, and we had similar results. That’s about 90% that plan to consolidate, and two thirds that have projects underway – fair to say there’s a trend there.
When we look at software technologies to make the most of such consolidated infrastructures, we don’t get the impression that things are up and running just yet. A third of companies see virtualisation as of major importance, and most of the rest give it some credit, but it’s hardly a glowing endorsement, is it? Other research we’ve done backs this up – 60% of companies see virtualisation as an option, and 40% are doing something with it, but these numbers are growing. It looks like a second wave trend to me.
As we said before though, all this clever stuff is only really worth doing if we have the right processes in place to take advantage of it. We haven’t – given the fact that a third of companies aren’t even doing backups correctly, if at all, what chance do we stand with higher level processes such as managing a virtualised storage environment? To go back to the mountaineering analogy, it’s like putting on crampons without wearing any shoes. The only results will be sore feet and blisters.
This is an important point. I was a bit harsh on vendors earlier perhaps, suggesting that they are just opportunist sales folks, taking advantage of an unsuspecting public. Surely not… but the truth is, they wouldn’t do so if we didn’t let ‘em. Let’s face it, we’d all love to believe that the marketing was true, even today, that our problems really would all be solved. We also all know that if we put our own houses in order, we’d be better able to serve our own companies. There are always reasons a-plenty – conflicting priorities, shortage of time and so on, all of which are valid, but the point remains – nobody should expect to be able to run a super-efficient storage infrastructure without putting the operational basics in place. It stands to reason.
To move on to the third circle, that of service management, I have a silly question for you: should we want to do this stuff anyway? As a question, it’s a trifle unfair – which CIO is going to say he or she wants to run an inefficient shop, totally ignoring business need while doing just what they want? Sure enough, when we ask this kind of question, we get a unanimous, positive response. So, we don’t ask it very often. What we do ask is how important certain types of technology are to the business – email is the rising star, as a study we performed for Dell in July demonstrated.
Email was very important, sure, as you can see – 90% of respondents thought so. When we asked how many sales actually were conducted via email, the response came back as 25% – that’s a heck of a lot. Indeed, it’s a bit scary – I’m not sure Exchange or Notes were designed as the ultra-resilient platform for 25% of the world’s sales transactions! And that’s without considering whether the underlying servers and storage are up to the job.
It’s just one indication of how important it is to get IT right for the business. Another example, based on our more recent research, came when we asked questions about what people would like to see improved in their existing IT. All of the questions were answered yes, as you might expect, but most interestingly, “Finding information” was top of the list of all issues. If that’s not a business issue, I don’t know what is – and we know this, we know it through the time we waste ourselves, hunting for that email or file, matching up one customer record with another, rifling through our pockets for a business card and so on. This is not a problem to be solved with some clever technology; rather, we need to be better organised about what we are storing, where and why. In short, this is a people problem.
So, we cannot succeed through technology alone. This is the bit where I start wrapping up – three more slides and we’re done. The way we see this is as the law of diminishing returns – it gets harder as we move through the phases. It is no wonder that we concentrate on buying new technology, because that’s something we know how to do – I used to run a network, I was that soldier, I know. Far harder is to understand how to improve how we do things, and to build better relationships with our internal corporate customers, particularly if there is no impetus from board level. This guy – Thomas Malthus – was the first to understand that there are only so many resources to go around, and that the more you dig into the resources, the harder it becomes to support the problem space as it grows, be it people, complexity, relationships, data quantities or whatever.
Given all of this, we’ve had a fair crack at trying to establish where exactly we are along the road to achieving these goals. Or should I say, how high up the beach? We asked a whole set of questions to cover each of these phases, and used the responses to generate some kind of index. Fascinatingly, as you can see, while the individual questions and answers were totally different for each of these sub-indices, the results have come out awfully similar – overall then, in the UK and Ireland, we can say that similar levels of progress have been made across all three phases. These are marks out of ten by the way – looks like we’re just over half way, but let’s not forget the law of diminishing returns!
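For anyone curious how such an index might be constructed, here is one plausible sketch. It is entirely hypothetical – the report’s actual methodology and scoring scales are not described here – but it shows the general shape: normalise each question’s response, average within a phase, and express the result as marks out of ten.

```python
# Hypothetical sketch of turning per-question survey responses into a
# per-phase index out of ten. Assumes each answer is on a 1-5 scale;
# the real study's questions and scales may well differ.

def phase_index(answers, scale_max=5):
    # Normalise each answer to 0..1, average, then express out of 10.
    normalised = [(a - 1) / (scale_max - 1) for a in answers]
    return round(10 * sum(normalised) / len(normalised), 1)

# Illustrative (made-up) responses for the three phases.
consolidation = phase_index([4, 3, 4, 3, 3])
resources     = phase_index([3, 3, 4, 3, 4])
service       = phase_index([3, 4, 3, 3, 4])
print(consolidation, resources, service)  # just over half way in each case
```

Note that quite different answer patterns can still yield similar index values, which is consistent with the observation above that the three sub-indices came out awfully similar.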
Of course the index values by themselves mean very little; however, they do give us a starting point for comparisons. Here’s one example, by industry – the only point I want to draw out is that “other public sector”, by which we mean non-healthcare – that’s government departments, councils and so on – is definitely falling behind, whereas utility companies are edging ahead. We can drill down and see why this is; I’m not suggesting we do this now, but if you want further information you can pick up a summary of the report from the EMC stand.
So, what was all that about “thinking outside the bowl”? We’ve had chasms and bridges, mountains and false summits, journeys and way points. The point is, we’re not going to get there by thinking about technology alone. The IT departments of many companies realise this already, and the ones that use their business requirements to drive how they implement their IT, and who implement comprehensive operational processes to boot, will stand more chance of “getting there” than those that rely on technology alone. We do not need vendors, or anybody else for that matter, to tell us what technologies we should be considering, be they dressed up in terms such as virtualisation, ILM or whatever. What we do need are companies that can work with us to understand the needs of our own businesses, and help us to define and deploy technologies that work for us, not for them.
With that in mind, it remains for me to thank you for listening, and to say I hope you enjoy the rest of the conference.
That was painless-ish
2008-12-29
Just upgraded to WordPress 2.7, following an "issue" with the site which turned out to be me needing to top up my data transfer quota. Thanks Fraser :)
2009
This section contains posts from 2009.
January 2009
This section contains posts from January 2009.
Crossing the Agile Development experts
2009-01-26
Youch! Just before Christmas I posted an article on Agile Development on CIO Online. There's nothing like constructive feedback, they say, and this was indeed nothing like constructive feedback... so anyway. I collated a set of responses and found I had written too much for the CIO Online comments field, so I have posted my responses here.
Here we go.
My goal in this article was to distil the content of a research report in a way that would be usable at CIO level. For your information, I was trained in DSDM and worked as a software development consultant for a number of years before becoming an analyst. Indeed, given the amount of time I have spent trying to convince more traditional developers of the merits of non-structured approaches, I am very interested to experience what happens when one dares to suggest that Agile might not be the ultimate answer. I hope that helps set the context.
Thank you for your comments – let’s work through them. As a first point however, I hope nobody has an issue with the main thrust of the article, as stated in the conclusion: that while there is a place for Agile approaches, they should not be attempted without due care. To try to suggest to CIOs (or anyone else) that things are otherwise would be irresponsible in the extreme. Do feel free to try to catch me out on the rhetoric, but please let’s not obscure this most important point.
@Mike – I am delighted for you. There is no doubt that Agile can add a great deal of value if things are done right. I would be interested to know what level of consulting, mentoring etc was required, and how the 11 years panned out - was it forming, storming, norming, performing for you? Apologies for the sprints/scrums slip of the pen. But I don’t “fail to see” anything – rather, we have been advised by people who have had difficulties on Agile projects, that the shorter timescales can make things harder, as documented in the report.
@Agile Dude – I’m using Agile as a word for the same reason this does. I have spent many years watching in-fighting between methodologists, indeed, I would think it would be fair to say that the U in UML, once derived, meant that people needed something new to fight about. I mention some experience above which, while not as comprehensive as many people I know, and plenty I don’t, puts me in a better position than many of my analyst peers. Your mileage may differ but I suggest that you talk to me first before drawing any conclusions about competence or otherwise from a single article.
Finally, and sadly to your point, we don’t offer any services in this space. I’m not worried about a backlash as much as I am worried about “experts” trying to suggest that Agile is an easy ride.
@Ilja Preuss – good point. But it requires strong managers, and not the types who think rebaselining is a project management technique :). As stated in the article, the key factor is to impose a suitable level of structure – whether agile or otherwise, this will win over (let’s call them) anarchic approaches. With regard to communications, indeed, this will be a factor but the continuous development/integration involved in Agile requires this communication more actively than does the gating/reviewing involved in (say) waterfall. It’s not me saying this but the audience we researched.
@Dave Rooney – It’s a fair cop, isn’t it! Indeed, when I was running the software development environment for an out-of-control software project (which led me to write the article Craft or Science? Software Engineering Evolution back in 1994) I was led to think that if only I could have taken a dozen of the bright sparks involved in the project and locked them in a side room somewhere, they might have been able to deliver something faster than the hundreds involved in the ‘main’ project. Your point is “divide and conquer”, which I think is totally valid.
Less valid is the remark “you would have done your research” unfortunately. I could answer, “If you had done yours you would have discovered bla bla bla...” but I will resist the cheap shot :-) Seriously, I’m very happy that there are good experiences of Agile out there. I don’t say Agile isn’t suitable for large projects, the people we researched did say it gets harder to get right for larger projects. This might be obvious to you, but that’s another way of saying it’s fundamental, and this won’t be obvious to everyone.
@Mark Levison – did you read the report itself, or take a look at the materials from which it was derived? If so I’m not sure I understand your use of the term “back yard”.
@Alistair Cockburn – shame on you for suggesting I twisted the results of my own research report, particularly given that the citations you draw from the report are indeed summarised in the article. I suggest you re-read the article and the report, and tell the CIO Online audience where exactly the article denies that Agile approaches are beneficial, and where exactly the research report disagrees that Agile should be treated with due care. I am embarrassed by your myopia, but equally, I do understand that it can be difficult to see beyond something you have been espousing so long, and generally so well. Frankly, if you had spoken directly to me we could have had a quiet discussion and most likely avoided this faux pas.
@Anil Oberai – Thanks for your considered response. I think key to what you are saying is that it is important to choose project stages and documents which will work best for your organisation. I have no problem with hybrid approaches, nor with the recognition that wholesale adoption is not always the wisest approach off the bat.
@George Dinwiddie – yeah, I’ll cop that one :-) but all the same, in my experience this one bears out – that people want to stick with the monolithic, or reach out towards a ‘brave new world’. There doesn’t seem to be much middle ground – but I believe there should be!
@Agile/XP Coach – God forbid I should ever take advice from you. No wonder you keep anonymous.
@Grant (PG) Rule – There was some good information garnered in the research about what metrics are seen as appropriate – and you are absolutely right that few saw resource consumption as a valid metric. I’m a bit wary of metrics for a number of reasons – not least that very few places I have seen have tended to implement them in a way that would satisfy their advocates. Which leaves me with two questions – are such metrics a valid pre-requisite to project success, and if so, what is the relationship between having the right metrics implemented, and the delivery of project value? When I was a real developer, the metrics that seemed to make the most sense were those that were outcome-based, for example the number of problems fixed; however, artefact-based metrics (number of use cases, test scripts etc) have not always been quite so successful. This is not an area we have researched significantly, so I would be very interested in any steers you might have.
February 2009
This section contains posts from February 2009.
Recipe for improving operational IT – take two-thirds best practice, one-third tools
2009-02-05
At the end of last year we ran a number of polls and surveys in conjunction with The Register, covering aspects of IT from architecture to operations. The results make interesting reading, but for us the whole exercise was a lesson in keeping things real. It can be very easy in this bizarre business to think one of two things – first, that everything is different from this time last year (SOA is dead, sure, you just keep telling yourself that), and second, that nothing’s really changed in the last four decades.
While there may be a grain of truth in both, the answers lie somewhere in the middle – which is why it is quite a privilege to be able to ask real people, in real jobs at the front line of IT, what is really going on. We know from larger exercises how most activity is incremental to what is already there – however the misplaced perspective from some quarters is that ‘keeping the lights on’ is in some way a bad idea, as if we should chuck out the lights and replace them with some ‘lighting management system’. I happen to think that lights work pretty well as it happens, and while we may want to switch the bulbs, I wouldn’t advise rooting around in the fuse boxes without good reason.
Behind the name ‘Freeform Dynamics’ is the store we set by the freeform text comments we receive in studies. In one of the aforementioned polls (which garnered 180 respondents), we asked the question, “What is the single biggest thing that you would change in your organisation to improve IT operational performance?” Confirming the lights-on theme mentioned above, technical responses tended to be around adapting or improving existing facilities to the changing needs of the business, for example:
- “Better integration of diverse applications on different Operating Systems, Vista/XP/2000.”
- “Heritage processes/tools that no longer model the way the organisation is structured.”
- “Introduction of to-disk offsite backups would free up 3 man days a week from tape backup duty.”
- “Improve the testing platform - it often gets neglected at the expense of production.”
But here’s the punchline – those who proffered technical improvements equated to only 22% of respondents. A further 40% or so gave us non-technical feedback which – to be frank – is exactly the reason we find ourselves so often trying to steer conversations towards policy, training, finance and the like. Trouble is with that stuff, it’s all too easy to fall into the motherhood trap – but ignoring it means missing out on the topics that will see IT succeed or fail.
It’s interesting to see how that 40% breaks down. 8% of them wanted more money – and don’t we all – but equally thought it was important to link IT cash to business needs. This links to another theme coming out of the comments – of actually communicating between IT and business. While blindingly obvious to many pundits, the fact that it is seen as an area for improvement is indicative that many organisations are still finding IT-business relations a struggle.
The feedback from the remaining 25% focused on the human side of IT – management and staff, and getting the right strategies in place. One thing is clear from the comments – that IT departments are still occupied by human beings, in all their varying levels of glory. Choice feedback included:
- “Replace all the idiots that make strategic decisions with either trained technologists or intoxicated lemurs (it's still an improvement).”
- “Fire dead-wood, hire skilled employees. Sadly skilled employees are hard to find, and budgets are tight to prohibit growing the team to compensate.”
- “Replace the senior management over IT with qualified people who understand IT operations and budget.”
- “Reduce number of managers hence reduce political infighting.”
- “Management must trust IT people more.”
What the comments tell us is that IT operations cannot be improved through technology alone. Even as the cry of, “Oh no, is that correct, Sherlock,” fades into the distance (or words to that effect) I feel obliged to remind anyone who might still be listening that, unfortunately, that is not how things are presented. Even while straw polls and surveys alike might suggest that good practice is far more important than technology, the former is just too dull to fill the column inches. Far more sexy are the latest gadgets, packages and other fads, but even as these are debated, scant attention is being paid to how exactly practice can be improved.
Let the final word come from one of our freeform commenters: “Positive effects on IT only come from good decisions, not tech.” Motherhood maybe, but until we get the balance right between practice and tools, it is unlikely to be dealt with any time soon.
His mother's genes
2009-02-06
If your son says, "I'm going to make a snowman like in Calvin and Hobbes," be afraid.
Snow Day 2: Igloo crazy!
2009-02-07
The snow here has been absolutely fantastic - real winter wonderland, memory making stuff. There have been village snowball fights, sledging and a phenomenal team effort to build an igloo. Take a butchers at this. More due tomorrow apparently.
Shorts: the Zune as a cautionary tale for Microsoft interoperability
2009-02-24
The Microsoft Zune.
Nice hardware. Well thought through.
Works with Xbox.
However.
Requires own sync software.
Doesn't work with Windows Media Player.
Can't interact with Windows Mobile.
Two indexing mechanisms, on top of Windows indexing.
Online social tools still vapourware.
No integration with Mesh.
And, for the record, no non-Windows client. Ouch.
March 2009
This section contains posts from March 2009.
Cloud computing - myth or reality?
2009-03-07
If you're interested in cloud computing and whether there is any substance behind it, you may wish to take a look at this presentation. It was compiled following discussions across the Freeform Dynamics team, and is undoubtedly a work in progress - we shall update our understanding as this area of IT evolves and solidifies.
[slideshare id=1108269&doc=cloudcomputing0-4a-090305165524-phpapp02]
P.S. This is as much a test of embedding a Slideshare presentation into a blog post as it is a link to the content! Turns out it’s all very straightforward.
Pizza Burn
2009-03-09
Burn, baby, burn burn burn. Also known as: a teenager's guide to cookery. Step One: watch the oven.
Anyone know what this is?
2009-03-31
We found this fist-sized blobby thing in the pond over the weekend. If anyone has any idea what it is, please do share!
A bigger version is on Flickr.
April 2009
This section contains posts from April 2009.
Why Sun Microsystems makes me angry
2009-04-16
It's not often in this job that I feel genuinely cross about an industry situation, but I find that's the case with Sun Microsystems. Before I start I should declare my hand - I used to run a Sun environment of a few tens of servers and a few hundred workstations. When I say "I used to run", to be fair I had a team of people doing most of the work, including Oracle DBAs, UNIX administrators, software tools people and the like, all of whom were pretty good at their jobs - but I did still get my hands dirty, mostly on the sysadmin side.
Sun was one of the very first companies I ever knew the tagline for - "the network is the computer" - something I first learned when I visited their offices to run some benchmarks on a cross-compiler available from the Catalyst catalogue. To be frank, twenty years on I'm still not sure what the tagline means if I think about it too hard - but it still sounds good. Perhaps Cisco has finally cracked it with its, "no, no, we're not a server company," Unified Computing fabric, but time will tell on that one! Something for another post perhaps, I'm here to talk about Sun.
Another declaration I should make is that I haven't got a monopoly on the facts. But I have, in one way or another, been following Sun's activities for the past couple of decades. For better or worse: while the hardware has frequently been impressive, and while the level of innovation has been fantastic at points, equally, I have suffered the consequences of when 'open systems' meant 'anybody can get in unless you know how to fill the holes', the battles between BSD and System V, the dangers of having the wrong support contract, and so on and so forth. These are no rose-tinted spectacles, I assure you.
But today we see the company failing to agree a price for its own demise, having failed to convince the world that open source is 'the one way', and having failed to monopolise its clear advantages in development software, and yes, it makes me angry. Not because of these failures in particular, but because even while Sun has been under-performing in each of these, they all ignore what has always been the company's core strength - that of building world-class data centres and surrounding infrastructures. Sun's reputation and brand is built around IT architecture with all its 'ilities' attached - scalability, availability and the like. In this, Sun has not so much been giving away its crown jewels, more that it has left them to crumble into dust.
When and how did this happen? It's difficult to argue other than that the wind went out of Sun's hardware-centric sails five years ago, when the company became "the dot in dot-bomb". Too much inventory of second hand stock, e-customers literally vanishing left, right and centre, massive commoditisation of the market (cf HP/Compaq), proprietary vs 'industry standard' - all played their part. Scott McNealy may not have retired at that point, but a little part of the spark died, and the company was late to many parties after that (the decision to adopt AMD chips for example).
I could talk about Java, and my agreement with a Sun exec a few years ago that it wasn't so much that Sun was in the wrong ball park, it didn't even get that there was a game on. But software has always been peripheral to Sun, stuff that runs on the boxes. This isn't what makes me angry however. What does is that a few people at the top of a company can be in denial about what great things it could be doing for its customers, and can set a strategy which not only ignores what people want from Sun Microsystems, it also ignores what the majority of customer organizations want from their IT. Here's a clue: it's not wall-to-wall open source, as the open source model is a means, not an end. Ultimately, as an ex-customer, I feel let down as the values I thought I shared with the company have been eroded to a point of irrelevance.
What should, or indeed can be done at this stage? If I knew I could be very rich of course, but that would also assume anyone at Sun Microsystems would listen - the company hasn't been famous for this in the past. I would start squarely in the mindset of IT architecture, and building efficient platforms that can deliver appropriate service levels to enterprise and mid-market customers. Isn't that what Sun does? I only wish I knew - it certainly isn't what it seems to spend its time talking about. I would drop the argument about open source vs proprietary - it's laughable, particularly given Sun and its own proprietary history. Sadly a Damascene conversion to open source and a few (arguably sound) acquisitions doesn't make an open source company in practice. Its customers don't want it from Sun, and Sun can't deliver it.
Not everything that Sun is doing is necessarily bad - the cloud-for-developers approach also seems sound for example. But any goodness is being lost in the noise of delusion. I genuinely hope that Sun Microsystems, once proud, returns to its former $200 billion status, for the right reasons - notably that it is delivering what customers want. And on this latter point then, finally, if I were Sun I would stop talking about how great everything is (when the whole of the rest of the world knows it isn't), knuckle down and start delivering.
How hard can it be?
Virtualisation - The State of Play
2009-04-16
I presented this at an IBM-hosted event a couple of weeks ago. Enjoy.
[slideshare id=1225834&doc=virtualisation090-5a-090331021532-phpapp01]
Reading The Big Switch #1 - the economist's world view
2009-04-16
Recently I've got into the habit of picking up (largely from the Library) a number of books of a certain genre, and ploughing my way through them - in general, it is the ones that are most compelling that will drive me past the first few pages. And so, I currently have, on the bedside table, amongst others:
- The World Is Flat by Thomas Friedman
- The Long Tail by Chris Anderson
- The Big Switch by Nicholas Carr
An advantage of this approach is that such books can be seen as a set, as well as individually. Fantasy books, for example, often seem to start a short period after a massively destructive event, following the perpetrator when he/she is still small, as they pick their way through calamity; meanwhile, popular tech-business-economics books like to mingle anecdote with theorising (to the extent that I have likened them to guys in a bar in the past).
To the point - which is the book that has reached the top of the pile - Nicholas Carr's Big Switch. Highly eloquent, but I can't help myself wanting to pick holes in it - not through pedantry or resentment I should add, more because in places it just appears plain wrong. "Computing is turning into a utility," it says on the back cover. "No, it isn't," I say. "Cloud computing is a revolution," it says. "Nope," I find myself responding - in the knowledge that in the many debates I have with my colleagues Dale, Tony and Martin on the subject, these are points on which we unanimously agree.
Don't get me wrong, I'm all for provocative titles that get people thinking, yadda yadda. But it's not just the conclusions I'm finding flawed, it's the justifications as well. Which wouldn't be such a problem if the chap wasn't so widely read - kudos of course, but what if people actually acted on what is proposed without thinking things through first? Given that by the time I have finished the book I know my flighty mind will already be moving onto even fresher fields, I know I will have missed the opportunity to compose a suitably well argued riposte (if that is what is required). So instead, here is the first in a series of posts about The Big Switch, which offer thoughts as I go along. I promise less introduction, more meat next time.
A. Technology shapes economics shapes society
This appears to be a bit of a theme in the early sections of the book, stated explicitly on page 22. Perhaps it's because I'm not an economist (so what do I know) but I don't believe the relationship is either linear or in the right order. "Technology shapes society shapes economics" would be more accurate, though society does drive technology (particularly in terms of military demand). Economics, from the layman's perspective, is the mathematics of society - the former may be able to model the latter, and even enable certain predictions to be made, but economics is the mirror, not the thing.
The reason I bring this up is not to illustrate my knock-kneed incompetence when it comes to economics (and to be sure, I would no doubt be left for dead should I meet a bunch of economists in a dark alley and attempt to argue my way out), but because it illustrates the framing of the debate when it comes to The Big Switch. Mr Carr is fundamentally one of a clan who believes economics offers the route to explaining most things societal (and equally clearly, I am not) - with the result that arguments will have an undoubted economic slant. Unfortunately there are many perspectives that need to be taken, of which economics is only one in this case. IT architecture is another (given which, I believe a better parallel for IT might be the evolution of the transport system rather than the harnessing of electricity), and business readiness is a third.
A point I will come back to no doubt (as it is the central premise here) is that while economic drivers for full-fat utility or cloud computing may be compelling, there are a whole bunch of other reasons that make it impossible today. Or indeed tomorrow, or in 10 years' time. We are currently struggling with the data privacy issues that are caused by single companies or government departments building increasingly complex information stores. Such challenges exist to be ironed out, it could be argued - or equally perhaps they are a demonstration of why we shall never fully consign our personal and corporate lives to the cloud. It's a similar debate about whether there should be a global super-government - of course it would be more efficient if it could ever work (which is moot), but would it be desirable?
While not knowin' much about economics, I do understand it isn't all about money - which is a good job. But to see economics at the centre of the debate - the fulcrum if you will - is as unfortunate as seeing architecture as the centre, or business processes or whatever (I remember having a conversation with someone at a certain large software company, who said everything was about 'management' because there was management everywhere). As Anais Nin was reputed to say, "We don't see things as they are, we see them as we are." Or indeed, as our vested interests drive us - it was Mr Carr who pointed out McKinsey's vested interests, but in his own debating stance he has been very clear that he has decided a certain stance to be true, and his own interest will inevitably be to defend his position.
From here I shall be moving onto more solid ground. The next blog in this series will cover the book's misunderstanding of client-server, or more importantly, what is really the cause of data centre inefficiency today. Stay tuned.
Dear Mr/Mme Car Park Manager
2009-04-23
Thank you very much for the parking fine I found on my car as I departed from Kemble Station this Monday evening. I would like to explain the circumstances around this fine, in the hope that you might reconsider.
As is quite common for me, I was planning to catch the 09:19 train to London Paddington. I left home with plenty of time, so much in fact that I arrived at the station shortly after Nine O'Clock. As I saw I had time on my hands, I chose to buy my parking ticket at the same time as my train ticket - in this way, I thought, I would save having to file two expense forms.
So, I drew up outside the station and popped inside to queue. The line was quite short, to the extent that I was not in the slightest bit worried. At first anyway - until I realised that the person at the front of the queue was buying an old person's railcard.
Starting to fret slightly, I sighed with relief as that person was dealt with - and then the next, who was also very slow. My nerves were further agitated at the sight of a car park attendant out of the window.
The next person in line was about to be served when I heard the announcement for the 0907 to Cheltenham. To my horror, at this point the ticket officer gave his apologies and left the booth to deal with the train. The car park attendant was still outside, and I did start to go and speak with him, but before long (though it seemed an age) the ticket officer returned.
I calmed down somewhat as the next person was dealt with and the attendant (having changed into a summer jacket) drove away. There were then one or two people in the queue, who were dealt with before me - but by now the time was approaching 0915 and the speakers were announcing the arrival of the London train.
Arriving at the counter I asked for a return ticket to London but I was so flustered I completely overlooked asking for car parking as well. Having got my ticket I ran out, got in my car, drove to an appropriate parking space (fortunately there was one nearby) and then ran back to catch the train. Admittedly by this point the parking charge was the last thing on my mind.
I hope you understand that it was certainly not in my interests to park without a ticket, nor would I ever intend to. However I also hope you will take these mitigating circumstances into consideration.
All the best, Jon Collins
May 2009
This section contains posts from May 2009.
Quick take: Will Borland find a good home with Microfocus?
2009-05-08
Having written quite recently on Borland, I felt obliged to comment on its planned acquisition by Microfocus. While the press release talks about 'market opportunity' and the like, it's also important to think in terms of what it means for software tools customers. A couple of starters-for-10:
1. Borland, having sold off its developer tools division to Embarcadero and kept its application lifecycle tools, has just been picked up by Microfocus
2. Microfocus, having moved from being 'the COBOL company' to being 'the application modernisation company', has also just bought some testing tools from Compuware
Putting both points together, Microfocus is starting to have quite a comprehensive portfolio when it comes to software development management. But there is a deeper question here. Borland, for all its problems, was doing pretty well in terms of reaching out to the agile development community. Meanwhile Microfocus' core audience remains organisations that have a larger pool of legacy applications, COBOL or otherwise. Undoubtedly, there is an overlap between the two communities - but it may not be that great: research we have done in agile development draws out one audience, while traditional development and maintenance continues on a somewhat different track.
Where's the hinge? I believe it lies in SOA - or at least, in the drivers towards an IT environment in which the principles of SOA make sense. This convoluted statement is somewhat deliberate, given our consistent finding that many organisations may be doing SOA-like activities; they just don't use (or even dislike) the term.
A story. A few years ago I did some work with a large, traditional organisation - my task was to train up staff on modern development techniques. Trouble was, I arrived to be told that the CIO had blocked any new application development - any 'developer spend' needed to be focused on enhancing existing systems to deliver against new business requirements. While the CIO later relented, it certainly focused how I was to approach what I had been tasked to do.
After a series of workshops, the result was not awfully different from what might be considered a legacy modernisation exercise - understanding the new requirements, mapping them against services provided by the legacy applications, and (through gap analysis) prioritising development activities. Essentially, the first part of the exercise enabled us to identify what needed to be done, at which point 'development' could take place in a way that made the most sense.
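The gap-analysis step described above can be sketched in a few lines: compare the services the new requirements need against those the legacy estate already provides, then prioritise whatever is missing. All service names and priorities here are invented for illustration, not drawn from the engagement described.

```python
# Hypothetical sketch of requirements-vs-legacy gap analysis.
# Service names and priorities are illustrative only.

legacy_services = {"customer-lookup", "order-entry", "invoicing"}

requirements = {            # required service -> business priority (1 = highest)
    "customer-lookup": 2,
    "order-tracking": 1,
    "invoicing": 3,
    "self-service-portal": 1,
}

# The gap: required services the legacy applications do not already provide.
gaps = {svc: prio for svc, prio in requirements.items()
        if svc not in legacy_services}

# Development backlog: missing services, highest business priority first.
backlog = sorted(gaps, key=gaps.get)
print(backlog)  # ['order-tracking', 'self-service-portal']
```

The point of the exercise is that 'development' only begins once this list exists - everything already covered by a legacy service is an integration task, not a build task.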
And what was that way? Certainly, waterfall approaches would have been a poor fit given that there was already a great deal of functionality available - while some (re-)development was necessary, as much focus needed to be placed on a more integration-led approach to software delivery - something which today we might call an enterprise mash-up. Offering a far better fit at the time were timebox-led methods such as DSDM, which have of course led to what we call agile approaches today.
From a Microfocus/Borland perspective, then, there is the potential for a good fit between traditional and modern, legacy and agile. Taking into account questions of scalability, availability, security and the like, application modernisation can lead to a platform of functionality to be used across the organisation, which can then be built upon using agile techniques to respond to specific business requirements. I've not talked much about the (Compuware) testing piece here - though this is clearly an important element of any integration-led development story. But this is a good start. If Microfocus/Borland looks to help its customers in such a way as to balance old with new, then the acquisition will have my vote.
June 2009
This section contains posts from June 2009.
Virtualisation and security - the two-edged sword
2009-06-01
All innovations in IT are a double-edged sword - with the benefits come challenges and unintended consequences. Server virtualisation is no exception, though it does have a number of security advantages over running software directly on servers. While it's worth considering these, it's also worth weighing them up against the challenges, particularly given the relative immaturity of the technology.
To be fair, virtualisation has been around ever since the dawn of computing - what is an electronic computer other than a virtual environment? I did get into trouble a few years back for crying foul when Microsoft claimed, "We've been doing virtualisation for many years," but to an extent they were right - as soon as there is layering or abstraction in a computer system, we have something that could be termed 'virtualisation'. So, we have virtual memory, virtual disks, and indeed virtual machines.
It's this latter version of virtualisation that's garnering most interest currently, and to be more specific still, virtualisation when applied to X86 (i.e. commodity) servers. Until this side of the millennium, server computers didn't really have the horsepower to run multiple, virtual machines (mainframes did of course, but were still a bit pricey - a factor which is notably changing). Now, with multi-core processors that build in virtualisation hooks (essentially, enabling instructions to be run by the virtual machines in a fashion which makes them pretty much as fast as running on physical machines), server virtualisation has crossed into the mainstream.
From a security perspective, virtualisation has a number of advantages. The first, almost a by-product, is how virtualisation adds to the fundamental security principle of 'defence in depth'. The virtualisation layer provides an additional level of abstraction which needs to be cracked if the core application is to be reached. In this way it's a bit like Network Address Translation (NAT) in that it keeps core applications one step further away from the bad guys.
Virtualisation also offers what's referred to as a 'separation of concerns'. That is, different workloads (i.e. applications) can be run within their own virtual machines, such that if there is a problem with one, then others should not be affected. Building on top of both of these concepts, security features can be built into the virtualisation layer - in principle (see below).
However, virtualisation does have its security downsides. I've already mentioned the additional virtualisation layer - this can either exist as a hypervisor (for example that from VMware or Microsoft) or as an extension to an operating system kernel (for example using KVM in Linux). For the additional layer to be effective, it needs to be secure - in some ways more secure than the operating systems and applications it hosts, given that if it gets hacked or goes down, they all go down.
Without dwelling too long on specific vulnerabilities (there's a handy summary of some here), suffice it to say that the presence of an additional layer adds to the security burden rather than reducing it. Not only is it necessary to secure the hypervisor, but also the management tools that go with it (which may be, for example, susceptible to brute force attacks to attempt a login). There are a number of ways of mitigating these risks, both in terms of patching against specific vulnerabilities, and in building security into the virtual architecture with appropriate use of firewalls and other protective measures. Baking such capabilities into the virtualisation layer is still, admittedly, a work in progress as illustrated by recent announcements such as VMsafe.
There are some additional risks resulting from the increased flexibility that virtualisation brings. For example, a virtual machine may be moved from one highly protected server to another, far less protected one, without it being absolutely clear that anything untoward has happened. This scenario becomes even more likely if there are insufficient controls over the provisioning and/or management of virtual machines. A virtual machine could even be moved off-site, onto a third party server (at a hosting site or 'in the cloud', to coin a phrase).
Perhaps one of the biggest security risks at the moment is that organisations are deploying virtualisation without always considering the security implications. At a panel I hosted at Infosecurity Europe a few weeks ago, one security pro in the audience explained that in his organisation virtualisation was being brought in primarily for cost reasons (nothing wrong there), but also that the rush towards savings was made without taking security into account (e.g. by costing it into the business case). Security comes at a cost, and like fault tolerance and other risk management approaches, it never works quite so well when it is retrofitted; the knock-on effects of rushing towards virtualisation may also include the aforementioned proliferation of virtual machines, resulting in a more complex (and therefore riskier) environment.
This factor is borne out when we consider recent Freeform Dynamics research suggesting that less than a quarter of organisations feel they are operating at 'expert level' when it comes to virtualisation - the impact is that knowledge of security best practice for virtualisation will still be lacking for many.
In conclusion, then, it is important to remember that these are still early days. Virtualisation undoubtedly has its benefits, not least from a security perspective. However organisations adopting virtualisation today would do well to ensure they do not increase the level of security risk they face. A simple risk assessment at the start of any virtualisation deployment, together with an appropriate level of vendor and product due diligence from a security perspective, could be the stitches in time that save a lot of heartache later.
Around the lake
2009-06-01
Continuing in the 'exotic places I have jogged' series, here's three laps round the lake at Disney World in Florida. Glad I went out at 5.45AM as it's warming up already (or was that just me!) 21'49", roughly 8.6 minute miles.
September 2009
This section contains posts from September 2009.
European analyst of the year
2009-09-09
Well I certainly wasn't expecting that! A couple of weeks ago, Freeform Dynamics and yours truly were announced as award winners in the annual IIAR survey of analyst relations professionals. Here are the stickers:
From an EMEA perspective, Freeform Dynamics was ranked as number 2 after Gartner. I was selected as EMEA analyst of the year, and number 2 (after the illustrious Ray Wang) globally.
I know there's all sorts of ways of measuring these things - and I also share a number of views with those who say there's a wider picture of influence (though Vinnie, there's more to technology decisions than procurement decisions!). That being said, the results are not to be sniffed at - and it's a distinct pleasure to be held in such esteem, so thank you everyone who voted!
October 2009
This section contains posts from October 2009.
The Bathtub Curve and other over-simplistic ideas
2009-10-06
Every now and then, a certain model or other seems to have direct relevance to a whole series of challenges. It's funny how it happens - at one stage (for me anyway) it was Eli Goldratt's Critical Chain theories for project management (and the whole world started to look like projects), at another it was Rich Dad, Poor Dad by Robert Kiyosaki (and the whole world started to look like a series of investments), and elsewhere it has been Charles Handy's Sigmoid Curve (which makes the whole world look like a series of false starts). Right now, as companies tussle with balancing capital expenditure (capex) with operational expenditure (opex), it's the bathtub curve.
The bathtub curve? Yes, the bathtub curve. It does seem worth an explanation, since I've had to explain it every time I've mentioned it. "You know," I say, "When you first implement something there are lots of faults - like snagging in a new house - and over time things settle down... but then after a while the number of faults starts to rise again?" And of course, everyone agrees, because it's a very familiar picture for anyone working in IT.
I was first introduced to the bathtub curve when an old colleague of mine was working on the railways (or at least for the railway companies as a consultant) to try to help them save money. His findings weren't pleasant reading. As UK railway companies (and before that British Rail) had tried to sweat the assets - rolling stock, track and the like - to the maximum, things had inevitably ended up right up the wrong end of the bathtub. Maintenance costs were huge, downtimes and delays frequent and so on, as indeed they still are. Trouble was, investment was only ever in one area - "We've funded another thousand miles of new track," someone would say. But that would be without fixing the rolling stock, which would wear out the track more quickly, and so the cycle would continue.
There's been plenty more written about the bathtub curve, so I won't dwell - but I did want to relate the challenge with maintenance in general, to IT in particular: that is, when to make investments and upgrades? There will never be an absolute answer to this: while some recent data suggests that a three-year rolling cycle may be appropriate for desktop PCs for example, for a mainframe that may be the amount of time required between reboots. Ultimately, the bathtub curve gives us, for any 'closed system' (in IT terms, set of infrastructure assets that need to operate in harmony) a relationship between capex and opex. Capital expenditure will be required on a discrete basis, to replace older parts, upgrade software, and so on, whereas opex covers the costs of more general servicing, support calls, diagnostics and so on.
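The shape of the curve is easy to sketch numerically: a decreasing 'infant mortality' term, a constant random-failure term, and a wear-out term that grows with age. The parameter values below are purely illustrative, not fitted to any real asset class.

```python
# A minimal sketch of a bathtub hazard curve. The three terms model
# early-life faults, steady-state random failures, and end-of-life wear-out.
# All coefficients are made up for illustration.

def failure_rate(t, infant=5.0, random_rate=1.0, wearout=0.002):
    """Expected failures per unit time at asset age t (t >= 0)."""
    return infant / (t + 1) + random_rate + wearout * t * t

rates = [failure_rate(t) for t in range(0, 61)]

# Early life: high and falling; mid-life: a flat bottom; end of life: rising.
bottom = min(rates)
assert rates[0] > bottom and rates[-1] > bottom
```

The capex/opex question then becomes: where on this curve does the asset sit today, and how long before the wear-out term dominates the cost of keeping it running?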
Why is this important right now? We understand from our conversations with all but a few IT decision makers that "attention is turning to opex," which roughly translates as "we haven't got any cash for capital investments, but we do need to keep the engine running." Which is fine, of course - but only if an organisation's IT is still running along the bottom of the bathtub, as measured by SLA criteria and the costs of service delivery. Perhaps the most important question to be able to answer is, "How long have we got before things start getting expensive?" A simplistic question perhaps, but inevitable if things are not treated before time runs out for them. As another adage goes, "A crisis is a problem with no time left to solve it."
Chemistry in paperback
2009-10-12
After a long delay, due to factors beyond anyone's control, a paperback version of Rush-Chemistry is in the works. Please note that this is not a full update, it still ends when it ends (at the end of the R30 tour in October 2004). All reported errors and typos should now be fixed, as listed here: RC Addendum v0.3b.doc
Update: this is now available for pre-order at Amazon UK.
Losing a device - what's the big deal?
2009-10-19
(This talk was originally given at a CBR dinner club event in October 2009)
It shouldn’t come as a surprise to anybody that mobility brings with it a business advantage. The last time we asked that question, a couple of years ago, two thirds of companies of all sizes consistently said ‘yes’ – and we can be reasonably confident that the figure will have risen since then. In the same study, a staggering 80% of companies permitted access to corporate systems from company-owned laptops and/or PDAs. And meanwhile, from the same research, we found that a quarter of companies saw the risk of exposure through device theft or loss as ‘high’ (and a further quarter as ‘medium’).
From our own experience, laptop theft is like car accidents – everyone knows someone who has been involved, but it always happens to someone else. As for phone loss – is there anyone who hasn’t lost a device at some point? I can think of hotel bar, roof of car, casino bar… is there a theme developing here? Actually, we do know from some more recent research – about 15 percent of respondents had personally suffered accidental loss or theft of ‘bag, keys, wallet/purse, phone, laptop’ in the six months that preceded the study. (And incidentally, 25% of organisations said they had suffered theft of corporate equipment in the same period – so it all adds up.)
But really, the loss of a device doesn’t matter so much. Sure, there is the capital cost – you wouldn’t want to do it too often. From a corporate perspective, we know consistently from our research studies that the biggest IT-related risk involves the loss of business-critical information.
The problem only starts to surface when the two worlds collide. When I lost my Motorola flip-phone in the Fairmont San Jose, the most annoying thing was that I still had a whole bunch of connectors, chargers etc that immediately became redundant. When I left another device in the Venetian, it also contained my entire contacts, task list, recent email conversations and various other files. Fortunately I had put a PIN on the device – or rather, our hosted Exchange service had enforced a policy which set a PIN for me. Of this, more later.
But for now we have to face the fact that mobile devices are astonishingly capable of storing quite huge quantities of data. And it’s not just mobile phones either. When I was speaking to an IT Manager at IP’09 last week about disaster recovery policy, I asked him how much data was involved – expecting the answer to be in the Terabytes. “A few hundred Gig,” he said… that is, the amount of data that could quite comfortably be stored on an iPod.
It was so much easier in the mainframe world – have you ever tried to lose a mainframe? But these days, we have the combined effects of the gadgetry becoming increasingly easy to lose, coupled with the fact that devices can store exponentially increasing quantities of data.
The ‘so what’ becomes clear when we define what we mean by ‘business critical’. Private (and increasingly, public) organisations care about one thing the most: money. So, information loss matters for one of two reasons: either it’s going to prevent the business from making so much money, or it’s going to cost the business money to deal with the impact. There are mitigations for the former – organisations have a plethora of data protection mechanisms available to them (and if you want to know more, let me know and I’ll send you a copy of our next book when it comes out).
But the latter is harder to mitigate against. If a company ‘secret’ is released into the wild, it can be copied; if customer data is released, there is a compliance cost as well as reputational damage. Having said that, it does astonish me how blasé people are with their information – to be fair, me included. I believe TK Maxx actually increased sales after losing all those credit card details.
So, what to do about it? Well, there are a variety of technological measures that can be brought to bear – either to lock down devices, prevent information from being released without authority, audit where it has gone, remote-destruct the data and/or device (with just a whiff of Mission Impossible) and so on.
We know that this area is underserved – only roughly half of the organisations we surveyed felt well protected against the kinds of inadvertent breaches that result from theft or loss, compared to external attacks. While we know data leakage tools and technologies are not implemented to the same level as malware protection, this is also an indication that technology alone is not the answer. When we looked into this we found little to choose between the challenges of dealing with the existing security infrastructure (i.e. plugging the holes), implementing appropriate policies/processes, or indeed making the necessary cultural changes to get the right tools and mindsets in place. These areas are all hard, though not insurmountable.
What the research also highlighted, however, were the places to start. The absolute ground zero is understanding risk – we could paraphrase “knowing the price of everything and the value of nothing” as “seeing dangers everywhere, but risks elsewhere”. In this context, the only important risks are business risks – that is, those that can impact an organisation and its dependents.
The second lesson we have learned is around policy. Draconian rules don’t work – they are tough to implement, difficult to enforce and impossible to keep up to date. So put away the superglue. We advocate a ‘minimum necessary’ approach – for the simple reason that if you can’t implement that, you don’t stand a cat’s chance in hell of implementing the maximum possible. Simple example: PIN numbers on PDAs (I said I’d come back to that!).
Finally, we look for cause and effect in our research, and one factor which surprised us in its effectiveness was awareness raising – in some cases breaches were being reduced by an order of magnitude in organisations that had some kind of awareness programme in place.
Security tools are just that – tools of the trade. All for it – but organisations who think that tools alone will solve the problem, are cruising in the fast lane with a blindfold on. When the crash happens, it won’t matter how good your airbags are.
November 2009
This section contains posts from November 2009.
Towards dynamic systems management methodologies
2009-11-11
I do take my hat off to the people who first put together the IT Infrastructure Library, which (like the end of the cold war) celebrates its twenty-year anniversary this year. It’s one thing to learn, both on the job and with little support, the ‘golden rules’ and best practice principles of any discipline. It’s quite another to have both the gumption and skill to document them in a way that makes them usable by others. And I should know – the time I spent working in the methodology group at a large corporate was a fair illustration of how tough this can be.
So, when the books that made up what we now refer to as ITIL were first released, they must really have hit the nail on the head. First adopted by public organisations in the UK, they have since become one of the de facto standards for large-scale systems management. Their authors can feel rightly proud, as indeed can the authors of other best practice frameworks that have, through force of adoption, been proven to hit the spot.
However, there could be a fly in the ointment, and its name is dynamic IT – the latest term being applied to more automated approaches for managing the resources offered by our data centres. I know, I know, this is one of those things that people have been banging on about for years – indeed, for at least as long as ITIL has been around, if not longer. So, what’s different this time around?
There are a number of answers, the first of which is virtualisation. While it is early days for this technology area (particularly around storage, desktops and non-x86 server environments), it does look set to become rather pervasive. As much as anything, the ‘game changer’ is the principle of virtualisation – the general idea of an abstraction layer between physical and logical IT does indeed open the door to be more flexible about how IT is delivered, as many of our recent studies have illustrated.
The second answer has to be the delivery of software functionality using a hosted model ('software-as-a-service', or SaaS for short). No, we don’t believe that everything is going to move into the cloud. However, it is clear that for certain workloads, an organisation can get up and running with hosted applications faster than it could have done by building them from scratch.
I’m not going to make any predictions, but if we are to believe at least some of the rhetoric about where technology is going right now, as well as looking at some early adopter experiences, the suggestion is that such things as virtualisation and SaaS might indeed give us the basis for more flexible allocation, delivery and management of IT. We are told how overheads will be slashed, allocation times will be reduced to a fraction, and the amount of wasted resource will tend to zero.
We all know that reality is often a long way from the hype. If it is even partly true however, the result could be that the way we constitute and deliver IT service becomes much slicker. IT could therefore become more responsive to change – that is, deal with more requests within the time available. In these cash-strapped times, this has to be seen as something worth batting for.
But according to the adage, the blessing might also be a curse, which brings us back to the best practice frameworks such as ITIL and what is seen as its main competitor, COBIT. In the ‘old world’, systems development and deployment used to take years (and in some cases, still does) – and it is against this background that such frameworks were devised.
My concern is how well they will cope should the rate of change increase beyond a certain point. Let’s be honest: few organisations today can claim to have mastered best practice and arrived at an optimal level of maturity when it comes to systems management. Repeatedly when we ask, we find that ‘knowing what’s out there’ remains a huge challenge, as do disciplines around configuration management, fault management and the like. But in general, things function well enough - IT delivery is not broken.
The issue however is that as the rate of change goes up, our ability to stick to the standards will go down. Change management – i.e. everything that ITIL, COBIT and so on help us with – has an overhead associated with each change. As the time taken to change decreases, if the overhead stays the same, it will become more of a burden – or worse, it might be less likely to happen, increasing the risks to service delivery.
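To put some illustrative numbers on that (a minimal sketch with made-up figures – a notional fixed two-day process overhead per change, not anything drawn from ITIL or our research):

```python
# Sketch: a fixed per-change process overhead, expressed as a share of
# total change effort. All numbers are illustrative assumptions.

def overhead_share(change_days: float, overhead_days: float = 2.0) -> float:
    """Fraction of total effort consumed by change-management overhead."""
    return overhead_days / (change_days + overhead_days)

for change_days in (60, 20, 5, 1):
    share = overhead_share(change_days)
    print(f"{change_days:>3} day change -> {share:.0%} overhead")
```

The point of the sketch is simply that as changes shrink from months to days, a process overhead that was once negligible comes to dominate – which is exactly when people stop following it.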
To be fair, methodologies aren’t standing still either – indeed, ITIL V3 now builds on the principle of the service management lifecycle. But my concern about the level of overhead remains the same: ITIL for example remains a monolithic set of practices (and yes, I know, nobody should be trying to implement all of them at once!). There’s a part of the framework called ITIL Lite, designed for smaller organisations, but to be clear, the ‘gap’ is for an "ITIL Dynamic" for all sizes of company. In methodological terms, the difference would be similar to that between DSDM and its offspring, and SSADM in the software development world - fundamentally it's the difference between top-down centralisation and bottom-up enablement.
Perhaps the pundits will be proved wrong, and we’ve still got a good decade or so before we really start getting good at IT service delivery. But if not, the question I have is: how exactly should we be re-thinking systems management to deal with the impending dynamism? We could always wait for the inevitable crises that would result, should the dynamic IT evangelists be proved right this time around. But perhaps it's time for the best practice experts to once again put quills to a clean sheet of paper, and document how IT resources should be managed in the face of reducing service lifetimes. If you know of any efforts in this area, I'd love to hear about them.
Infrastructure convergence - the two sides of the coin
2009-11-18
Let's be fair - IT isn't the only industry fraught with jargon, but it can certainly hold its head up high among the leaders of the field in terms of gobbledygook. The minefield of acronyms we all have to suffer is worsened by the astonishingly bad practice of overloading individual, sometimes quite innocuous words and combining them with new ones, which in turn are subjected to unnecessary and distracting debate. And so we are subjected to hearing such things as, "That's not a business process," or "Adaptive virtualisation through best of breed solutions." Members of the Plain English Campaign must be constantly shaking their heads in desperation.
Realistically however, it's nobody's fault. I put it down to the fact that we're working in such a new sphere of human development that existing language just isn't sufficient to support the dialogues we need to do our jobs. It doesn't help either that the industry is stuffed full of geeks (I'm one of these) and armchair philosophers (and these) in equal proportion, but that too is a symptom of the times. Take away the people that are inventing all the convoluted phraseology, and you'd take away the innovation as well.
And so to convergence. There's a word. It may have existed before the IT revolution - "The massed forces of Napoleon's armies converged on the plain," for example - but we've taken it and made it our own. Convergence means different things to different people, and given that it looks like it is becoming a very important word indeed, it is worth exploring a couple of these meanings.
IT is all about convergence. Convergence pressure comes from the top down, as a counter to complexity. My dubious understanding of evolutionary theory tells me that evolution is as much about diversification as survival of the fittest – and IT is no different. Innovation is another word for the relentless drive by vendors to release new products, and providers to release new services, in the hope that some of them will become as popular as Windows, Google or the iPhone. Deep in the infrastructure as well, plenty of new-and-improved technologies deliver all kinds of clever benefits, but only add to the complexity of the infrastructure. Understandably, then, IT environments start to hit issues of fragmentation, of complexity management, of interoperability.
We're seeing it right now with virtualisation for example - lots of benefits, cost savings etc but we're only starting to see some of the issues (e.g. virtual server sprawl, back-end bottlenecks) that ensue when virtualisation moves out of the pilot and into production.
Meanwhile, convergence also comes from the bottom up. New technological advances tend to get subsumed into the infrastructure or application architecture - which is why we see waves of merger and acquisition activity throughout the history of IT. But it's not just about making different things work together - it's also recognition that certain technologies, which may start independently to solve separate problems, eventually need to come together in some ways. And so in the telecoms world we have that wonderfully obscure acronym, FMC, which stands for fixed-mobile convergence - bringing together traditional telephone infrastructures with mobile infrastructures. We're also seeing the convergence phenomenon in the data centre - or more importantly, in how the different devices in the data centre communicate with each other, that is, storage, servers and communications devices. IT has always been about processing information and moving it around - and historically, the three types of device have evolved along their own, discrete-yet-interoperable paths. But right now the industry is coming to terms with the fact that there can be only one data movement standard that all devices share - without getting into the fuzzy words too much, this is called 10 gigabit Ethernet. The timing for the convergence of data centre technologies couldn't be better, given what we're seeing with virtualisation. Note that it's not just about everyone saying, "let's all use Ethernet," rather, the 10GBASE-T standard has had to be defined to support a wide variety of requirements imposed by the data communications, application latency and storage throughput needs of modern IT environments.
In other words, the data centre convergence we're seeing is not only an inevitable step given the evolution of the underlying technologies, but it is responding to a real need caused by the fragmentation of today's IT. It's important to see both together - as there have been all kinds of technology convergence that have come at the wrong time - i.e. not responding to a significant enough need - and that have fallen by the wayside. Examples include policy-based management of security, and perhaps even FMC, which will remain a slow-burn until it becomes a necessity. But for data centre convergence, the time could well be right.
[Written at Fujitsu VISIT 09 conference, during a keynote by Dan Warmenhoven, Chairman of NetApp - who famously said "Never bet against Ethernet!"]
December 2009
This section contains posts from December 2009.
Looking back... and forward
2009-12-21
This year has not been easy for many. By all accounts it has been a polarised recession: some of the largest companies have fallen, while for others, business has been booming. The stock markets plummeted only to rebound; organisations have offered themselves up for sale and then decided, actually, they could go it alone; redundancy packages have been announced and then withdrawn; in the UK, house prices are once again higher than they no doubt should be - with the net effect of leaving individuals and companies alike with that best-just-get-on-with-it feeling.
In IT, perhaps, we have seen it all before. This is my tenth year as an analyst - and I seemed to spend most of the 'middle years' introducing every article and paper with the words, "as we emerge from the downturn...". Now here we are again: budgets are being squeezed, Opex is generally the focus over Capex, and vendors and end-user organisations are having to demonstrate value in everything they propose. But ROI is no longer being punted as yet another "next big thing" - it is merely something that needs to be taken into account because it would be foolish not to.
Speaking personally, one thing is for sure, this certainly was an interesting (in the Chinese sense of "May you live in interesting times") year to be taking charge of a company. Freeform Dynamics may be small but - just like any other company - we still have P&L to consider, bills to pay, mouths to feed. Looking back, I believe 2009 will be the year that we grew up as a business - across the team we learned to be more disciplined about what we do and how we do it. We remembered what it was we were set up to achieve, and I can now explain to family members what it is I do without them looking politely puzzled, or simply glazing over.
It's been a hard year, but equally, it has been a good year for Freeform Dynamics. As we've grown we've learned more about how to engage with the audiences that matter to us most, that is, the community of mainstream IT decision makers and professionals. At the same time our writing skills have been developing, both through sheer force of having to do lots of it and with the help and support of David Tebbutt, who brought a much-needed level of editorial discipline to the company in the two years he was with us. Another high point was undoubtedly being voted second analyst firm in Europe by those people who manage analyst relationships. Strip everything else away and all it meant was, "The people we like to talk to." Which, of course, means a great deal.
What of next year? When it comes to predictions, I'm afraid I can't share some of our competitors' opinions that Cloud Computing will be "The big thing" to happen in 2010. It occurred to me quite recently where the confusion lies - that in this wonderfully innovative and yet charmingly backward industry, changes that happen to IT vendors are often misinterpreted (transference, maybe?) as changes to how organisations do business. If we turn the telescope around, yes, sure, I have no doubt that Cloud Computing is already having quite an impact on the vendor space and will continue to do so. But while changes in the engine might be profound, we are a long way off achieving what end-users might consider to be a smoother drive. Back in 2002 I called the journey towards the Universal Service Provider "the revolution that never was" - and while I remain confident of the destination, it will take more than a few articles in Management Today, a hastily-organised conference and a Twitter hashtag to get us there.
My money is on virtualisation - an area of IT where the reality and the vendor hype do appear to be achieving some level of parity. Most organisations are still, in general terms, only trialling virtualisation. We can say both that the best is yet to come, and the fan has yet to meet the fecal matter. This will be a big year for infrastructure - getting the hardware layer working as a layer, moving data and execution capability around, building in the qualities required to underpin acceptable levels of service delivery, from data protection to risk mitigation. Very boring, very necessary, and undoubtedly where much of the action will be.
I'd also spare some thought for Green IT. We shall continue to see the debate broaden beyond power and cooling, and come April of course, "Green" will become a compliance issue. On the radio the pundits are talking about the failure of Copenhagen, but at the same time, individual countries have now moved a lot closer to deciding their own strategies. IT undoubtedly has a major part to play, not just in sorting itself out but particularly in how innovative use of technology can help reduce the overall footprint, and we shall be continuing to watch this area with interest.
Meanwhile, service provision in general is, as implied by Cloud, going to continue in its rapid evolution (that's one for the punctuated equilibrium fans). The machinations of vendors are quite fascinating - companies with their heritage(s) in hosting, telecommunications, software, hardware, internet search and so on are all deciding, rightly or wrongly, that they are running after the same ball. It is like watching a flotilla of tankers trying to turn around and head in the same direction, towards a goal that remains, thus far, a mirage - some parts of the model are proven (managed services, outsourcing, certain contractual vehicles and so on) but others (Platform as a Service, anyone?) remain highly speculative.
Then what is this goal? IT is not an end in itself, it's a set of mechanisms that help individuals collaborate better, and companies deliver services to their customers. Much depends on integration - at both a hardware and software level. It is all too easy to work out potential answers, but far harder to establish what will really make a difference given the mish-mash of technological slurry that already exists. At Freeform Dynamics we shall continue to peel back the layers of understanding, as seen through the optic of what's really going to make a difference to mainstream IT - the centre of the bell curve. Our core business remains research, from which we shall continue to deliver down-to-earth, practical guidance aimed at real people, not theorists. It's a simple enough vision: if you can't see the wood for the trees, who would you rather meet - an elf with a magic map, or a woodsman?
From our perspective, we'll be starting the new year with a team at full strength - we're all delighted to welcome Dale back, now he has finally been able to say goodbye to what at times was a truly debilitating illness. We have some announcements up our sleeves, and we shall continue to grow our team of analysts, both to strengthen our core areas of coverage and grow beyond them. My heartfelt thanks go out to everyone in the team, for sharing the passion about what we are trying to achieve and working so hard to deliver on it.
A huge thank you also to everyone we have interacted with. It's a complex web, isn't it - the touch points between technology producers, consumers, commentators and advisors will no doubt continue to blur. 2010 will be "interesting times" for many, and we have learned enough over the past few years to not take anything for granted. But there remains plenty to look forward to.
Adobe Bridge CS3 not responding?
2009-12-30
I was having an issue with Bridge CS3 on Mac OS X Snow Leopard - it was using half the CPU (i.e. 100% of one core) but didn't appear to be doing much else - before eventually crashing. Having browsed around a bit last night and this morning to little avail, I thought I'd share the solution I eventually stumbled upon - to delete the cache. The official link is here, which includes information for both Windows and OS X users.
Thanks also to Planet Neil for the tip-off.
2010
This section contains posts from 2010.
January 2010
This section contains posts from January 2010.
Quick take: top ten worst passwords
2010-01-21
This makes interesting reading - it's a report, sponsored by those good people at Imperva, about password worst practices. It's got some sage advice too, for anyone who wants to know how to get a bit better at setting passwords.
Here's the top 10 - personally I'm surprised that swear words don't figure, but that's small consolation.
1. 123456
2. 12345
3. 123456789
4. Password
5. iloveyou
6. princess
7. rockyou
8. 1234567
9. 12345678
10. abc123
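For what it's worth, screening against a list like this takes only a few lines. A hedged sketch: the set below is hard-coded from the top ten above, and the extra length and character-class checks are generic common-sense rules of mine, not anything from the Imperva report.

```python
# Screen a candidate password against the worst-practice list above.
# The additional checks are generic rules of thumb, not from the report.

WORST = {"123456", "12345", "123456789", "password", "iloveyou",
         "princess", "rockyou", "1234567", "12345678", "abc123"}

def is_weak(candidate: str) -> bool:
    if candidate.lower() in WORST:
        return True                        # on the worst-practice list
    if len(candidate) < 8:
        return True                        # too short to resist guessing
    if candidate.isdigit() or candidate.isalpha():
        return True                        # single character class only
    return False

print(is_weak("iloveyou"))          # True
print(is_weak("Tr1cky-Horse-42"))   # False
```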
Take a look at the full report here.
P.S. There may be a longer take on whether Imperva *should* have conducted the analysis, but perhaps that one's for history to decide!
Brighton Marathon
2010-01-28
Well I never. In a moment of spring madness, I agreed to join Marillion's Mark Kelly to run the Brighton Marathon in April. The chosen charity is Water Aid, which has a pretty straightforward remit - access to clean, safe water should be a right, not a privilege. If you feel like sponsoring us pop over to our Justgiving page.
It's also worth highlighting a Marillion song whose last line is "A tap with clean water"... Full lyrics here (selection below).
And here's the song.
Common cold. Dirty water. HIV. Common apathy. Common crime. Perfect nonsense to the next generation
Have we caught up yet? Is it time? Well I say it is. I say it is. Deaf and dumbed-down Enough is enough
Gimme a smile. Hold out your hand. I don't want your money I don't want your land I want you to wake up and do something strange I want you to listen I want you to feel someone else's pain
A tap with clean water
Thinking aloud about Fair Trade chocolate and Kraft
2010-01-28
Hurrah for Green and Black's announcing that all of their chocolate is to be fair trade by the end of the year, I thought to myself as I munched a bar of Maya Gold and read the Guardian article - finding out along the way that it was the first chocolate bar to be certified Fair Trade. Perhaps it was the cocoa but I couldn't help wondering about the timing - G&B's is owned by Cadbury's, which has announced a similar move (Dairy Milk already, others to follow) - and the latter has of course just been betrothed to Kraft Foods.
Now, I'm no politician but I can almost imagine the board room conversation where either or both brands decided it was now or never to go fair trade, given that the big nasty conglomerate would be unlikely to take such a bold step. It would also be difficult to unravel without adverse PR and loss of sales. So, we have the public announcement, whether Kraft likes it or not.
A little bit of browsing later and it seems that the nasty conglomerate has already made a start. Suchard hot chocolate and Kenco coffee already boast rainforest certification. Now, all we need is for Toblerone, Terry's, Cote D'Or and Daim to do the same, and we'd really be cooking.
February 2010
This section contains posts from February 2010.
What personality is your IT department?
2010-02-11
Over the years I've spent quite a lot of time thinking about the idea of IT maturity models - which are great in principle, but, to be frank, a bit of a blunt instrument when it comes to gauging reality. Many IT organisations are quite comfortable where they are in terms of maturity for example - providing 'good enough' service levels without necessarily striving to be best-in-class. Meanwhile, the theory that a large proportion of IT depts remain unable to crawl out of the primeval swamp of IT maturity is no more than that - a theory.
Are there better ways to consider how IT departments are structured and how they operate? To be sure, different options exist. A couple of years ago I was working with the ideas used in personality testing, to see if I could come up with a similar model for IT. One hypothesis, which I believe to be true, is that most IT departments will remain much the way they are unless they are subject to some external change - replacement of the CIO for example, or merger with another department. Until such times, it is more important to understand what an IT department is, than what it might become.
The personality of an IT department is multi-dimensional. Based on research we gathered with readers of the Register web site (report), coupled with some telephone study work, we were able to derive a number of characteristics of IT organisations, namely:
- Organisation – whether the IT function considers itself to be a technical or service department (TECHNICAL vs SERVICE)
- Process – the level of formalisation/repeatability of IT development and operations processes (INFORMAL vs FORMALISED)
- Approach – whether projects and deployment generally take place individually, or mostly with an eye on a broader strategy (DISCRETE vs PROACTIVE)
- Dialogue – how interactive are the communications between IT and the business (REPORTING vs ENGAGING)
None of the above is necessarily wrong. For example, an IT organisation may need to be Technical if the business is highly technology-driven; equally, some companies may benefit from a Discrete approach if lead times are the driving characteristic. Smaller organisations, or branches may have different needs to larger ones, and so on.
Such characteristics offer 16 potential combinations, but not all of these will be generally applicable. Combinations we believe will be more likely are as follows:
TIDR - The classic “traditional IT organisation” - IT exists for its own purpose and there is little or no relationship established with the business. Unlikely that IT has a particularly good reputation, and fire fighting will be common.
TIPE - This is “don't watch the mouth, watch the feet” as the IT organisation appears to be doing everything right in terms of business engagement, but it still doesn't appear to be moving forward in terms of maturity. This is due to an unwillingness to change from old practices and organisational models, though IT may be highly competent technically.
TFPR - The IT department feels it can do no wrong, and sure enough it is running a tight ship. However there is a feeling from the business side that something is lacking, as IT wants to engage on its own terms and sometimes gives the impression that it knows what is best for the business. IT wants a place on the board, but the business doesn't feel comfortable with this.
TFPE - While just about everything is in place, IT still thinks like a bunch of techies. This is fine for businesses that are themselves technical, but it may mean that the cost of technology is not fully understood. In other cases it is likely to show itself in a lack of responsiveness as IT works to solve the biggest technical issues rather than understanding what's the most pressing business need.
SFDR - While the IT function is doing what it can in terms of external processes, it lacks the necessary communications between development and operations. This may be for political or geographical reasons; in any case, the result is that ops falls back onto simple reporting, rather than being part of the dialogue itself.
SFPR - While everything is going generally right, IT may well be disappointing when it comes to engaging with the business, resorting to only sparse reporting on progress and performance. This may well be because the business itself does not understand why it should have to engage.
SIDE - There is a high level of engagement with the business, which is admirable. However, perhaps due to a traditionally fragmented approach to IT implementation, lots of legacy or otherwise (e.g. mergers), the environment itself is not in the best state, nor is how it is managed – so backups may be done inconsistently, or there may be security issues for example.
Such a model offers a useful starting point for any IT organisation, in terms of what is going to work - level of process, technology adoption and so on - as well as injecting a level of realism into proceedings. We all have aspirations, but we're not going to get any better unless we understand ourselves first - that's the principle anyway. Similarly, for vendors, the opportunity exists to create offerings based on how things actually work, rather than any over-aspirational view, or one-dimensional perspective on IT maturity. And meanwhile, models such as this may offer more realistic paths towards best practice, if and when we move into a more dynamic IT world.
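For the record, four two-way characteristics do indeed yield 16 codes. A quick sketch - the dimension letters come from the list above, but the enumeration itself is mine, not part of the research:

```python
# Enumerate the 16 possible IT department 'personality' codes from the
# four two-way characteristics described above.
from itertools import product

DIMENSIONS = [("T", "S"),   # Organisation: TECHNICAL vs SERVICE
              ("I", "F"),   # Process: INFORMAL vs FORMALISED
              ("D", "P"),   # Approach: DISCRETE vs PROACTIVE
              ("R", "E")]   # Dialogue: REPORTING vs ENGAGING

codes = ["".join(combo) for combo in product(*DIMENSIONS)]
print(len(codes))        # 16
print("TIDR" in codes)   # True
```

As noted, only some of those 16 - TIDR, TIPE, TFPR and so on - are likely to turn up in the wild.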
More detail is available on this if it would be interesting and useful to anyone, including broader definitions of all 16 personalities. For now, I'd be very interested in feedback on the principle of the model itself - and whether or not it would work for you. So, if you have any thoughts on this, please do let me know.
March 2010
This section contains posts from March 2010.
A bit of whimsy - Beyond the Seaweed Farm
2010-03-24
You can only imagine my surprise when... OK, that's a lie. Here's a little something I was writing last summer, based on the few snippets suggested by the liner notes to what was to become Porcupine Tree's earliest releases Tarquin's Seaweed Farm and The Nostalgia Factory.
It's also given me the opportunity to have a first foray into e-books. This one is in ePub format.
If you missed the link, here's Beyond the Seaweed Farm.
The full URL is http://www.joncollins.net/wordpress/wp-content/BTSF.epub
And you can also read it online here.
Enjoy!
The cover is William Hogarth's The Cockpit. It seemed appropriate.
April 2010
This section contains posts from April 2010.
Brighton marathon - oh my goodness!
2010-04-18
Well, well. Today I ran a marathon. Who would have thought it - and like so many things in my life, once again I have discovered just how much is possible if you set your mind to it. I'd love to say that it was a breeze, that it was tough but fair, that it was anything other than what it was - possibly the most physical thing I've ever done in my life. But that's the reality. If childbirth is worse, I'm surprised the human race has survived as long as it has.
So many memories...
- the marvellous sticky toffee pudding the night before - thanks Bola and Chris!
- the hack into Hove to pick up Mark at 7AM
- the serendipitous arrival at the train station, to find a taxi sitting there like it had been specially laid on
- the chance meeting of one of Mark's mates Jake, a total gent who gave me my own personal tour of the sights for the first 15 miles
- the genuine pleasure of running on a beautiful day until about mile 17
- the delight at seeing Liz and Mum along the way
- the uneasy feeling that all those small pains were joining into one big pain
- the absolute agony of the final 8 miles, and my gratefulness to all those who called out the name I had printed on my shirt
- the complete inability to climb the steps back up to the front, after the finish
- the taxi, shower, tea and cake
As for times, Mark came in at 4.19, and me at 4.45. It'll do - while I was hoping for a better time, I had underestimated how much pleasure I would get from just finishing.
Above all, the whole thing raised £3,000 for Water Aid, which is just fantastic. Thanks so much to all who contributed, friends, work colleagues, Marillion fans, you kept me going on more than one occasion. Thanks (really) to Mark for the whole idea, and the biggest thanks to Liz for all her help and support, not to mention driving home two folks who probably should have known better!
May 2010
This section contains posts from May 2010.
IT looking forward
2010-05-07
Video recorded at Microsoft tech days, 16 April 2010, following (and referencing) David Bishop at Microsoft's "Vision of the Future" presentation.
Enjoy!
Looking back: Software is Art?
2010-05-12
The traditional craftsman uses handed-down principles and rules of thumb as the basis of his work: experience gained over many years of apprenticeship. The engineer proves his designs at every stage with scientific laws and mathematics. Craft and engineering differ in their techniques, but their common goal remains the same: to provide efficient, useful artefacts.
In Computer Programming as an Art, Donald Knuth summarises this by saying, "Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty."
The knowledge base that we have built up over thirty years is agreed to be incomplete. Reliance is often placed on software craftsmen, commonly known as ‘gurus’ or ‘hackers’ (in the programming world, a hacker is seen as a programmer of high esteem, whereas it is the cracker who breaks into computer systems) – the reputation of some gives them almost heroic status. Robert Baber suggests that when software development progresses to a true engineering discipline, such characters will disappear, to be replaced by ‘members of professional bodies’. If this is going to happen, it is a long way off yet.
And what of software itself? Is software an art form? Knuth refers to ‘beautiful programs’, and it is true that an elegantly structured section of code may be a pleasure to look at, at least to another programmer! This point should not be taken too far: a motorway intersection may well be an object of beauty to another construction engineer, but it is probably not for the rest of us. Artistic qualities do, however, have practical value for software: an elegant program is reasonably likely to be a well-written, maintainable one. It would be worth considering the addition of a little ‘art’ at all stages of the software life cycle. For example:
1. In the specification phase, a common language document is used to show the producer’s understanding of the clients’ needs. A well-written specification will increase the chances that such requirements have been noted; a readable specification may well ensure that both parties read it at all.
2. Software architectures can be elegant or impractical. An elegant architecture is one which ensures that its subsystems work together smoothly and efficiently; an inefficient architecture will be the source of an unnecessary performance overhead.
3. Algorithms can be fluid, smooth in motion or clunky and slow. A well-written algorithm will be more efficient than a badly written one.
4. The code itself can read like a good book or a child’s first essay. If the code is not readable it will quickly become unmaintainable, especially if its author is no longer available to explain it. Code should be self-documenting. If it is not clear exactly what a given code section is doing then it should be rewritten. In the maintenance phase, the code should also explain how it has been modified, by whom and when.
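The fourth point is easily illustrated. Below is a deliberately trivial sketch, in Python (the original essay names no language, and the function and variable names here are my own invention), of the same logic written opaquely and then self-documentingly:

```python
# Opaque version: correct, but the reader must reverse-engineer the intent.
def p(l, t):
    r = []
    for x in l:
        if x[1] > t:
            r.append(x[0])
    return r

# Self-documenting version: the names and structure carry the meaning,
# so little commentary is needed to explain *what* the code does.
def names_of_items_over_threshold(priced_items, threshold):
    return [name for name, price in priced_items if price > threshold]

stock = [("widget", 5.0), ("gadget", 12.5), ("gizmo", 9.99)]
print(names_of_items_over_threshold(stock, 9.0))  # → ['gadget', 'gizmo']
```

Both behave identically; only the second tells its own story, which is exactly the ‘beauty with practical value’ argument above.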
The quality of the product is dependent on the experience of the software developers – and their management – at their craft. Such craftsmanship must be learned. And as we have already seen, there is not always the time or resources available to provide such training. Peopleware makes the point that developers should be considered as experts rather than resources, and should be given enough space and facilities to permit them to excel. In DeMarco and Lister’s experience, it is this which ensures the optimal productivity of development groups.
Today’s software developer is somewhere between a craftsman and an engineer, using both science and common sense in his work. Tomorrow’s developer may well be the perfectly trained engineer proving everything as he goes. But he is not there yet. At least for the moment, gurus are a necessity and ‘software beauty’ keeps our minds on the goal.
[[And the punchline: this was written in 1994, as part of a bigger piece entitled Craft or Science - Software Engineering Evolution]]
Dear Colin McCaffrey, I hope you enjoy the dime
2010-05-13
Dearest Colin McCaffrey,
I don’t suppose you’ve heard of me
Nor had I heard of you before today
I found your tunes on Spotify
And listened to them, by and by
So thank you for the pleasure that they gave
But Spotify, I hear them say
Ain’t a business model to really pay
Much money to the artists that it streams
I sure do hope the ads give you
Something approaching revenue
But frankly, what I ‘paid’ won’t fuel no dreams
I wonder how things are for you
In Vermont, when the check comes through
Do you tear it open, in case it’s your big chance?
Or is it to your bank account
Where an electronic small amount
Registers without a backward glance?
So Col, here’s serendipity
I mis-spelt someone’s name, you see
And I might not have done, another time
But if I never “listen again”,
I’ll remember I once did, back then…
I’ll think back to the words you wrote,
The sugar blues and guitar notes…
And whether you did benefit
From all that clever Internet…
So with all my sincerity
I wish you luck and finally…
I hope that you enjoy the dime
I hope you enjoy the dime.
How much "innovation" is just keeping up appearances?
2010-05-20
Focus on business challenges, not technology innovations
The IT industry really does bring out the best and worst traits of human nature. Were we always quite so excitable about the latest big thing? It is difficult to tell, as historical records don't tend to preserve the glee reserved for all things new and improved, whether or not they have any long-term advantage.
This is particularly relevant given that we are perhaps in one of the most inventive periods since the dawn of humanity - at least, that is how things look from the inside. While the jury is out as to the usefulness and ultimate value of many of our creations, the speed of "innovation" has been accelerating consistently over the past 500 years, such that we have now reached a point where it is impossible to keep up with everything that is going on.
It is a very human thing, however, to want to keep up appearances. In the arts it is important to have an opinion on the latest show, film, or book, and our high streets are full of the latest must-have items. You can see where I'm going with this, can't you - yes indeed, our cuckoo tendencies to accumulate shiny things also spill over into how we view, and indeed select and procure, IT systems.
We all know this, but many go with it anyway. Marketing departments in IT vendor companies spend their time working out how to make even the most humdrum of technologies look like the best thing since, well, the last best thing. Phrases like "paradigm shift" and "game changer" are used over and again, even though both speaker and listener know that if the paradigm had shifted as often as predicted, we would have run out of games to change long ago.
Business leaders are subject to the same pressures - after all, in the words of the Matrix agent, they are "only human". It was only a matter of time before a CIO would say to me that his boss had asked him when he could get some of that cloud computing. The fact that the question doesn't make any sense is neither here nor there: businesses want to demonstrate they are up with the corporate Joneses, just as CIOs themselves want a few leading-edge projects on their CVs. Analyst firms can be no better; indeed, if things weren't quite as exciting as everyone was making out, would we really need analysts to make sense of it all?
Don't get me wrong, there are lots of new and exciting things made possible through the use of IT. It has brought the world closer together, opening up whole new ways of communicating and collaborating, and so on and so on. The danger, however, is that we are so busy looking beyond where we are for the next big thing, we don’t give ourselves the time to make the most of what we already have. Organisations don't always need the latest and greatest technology to thrive, and there is a big difference between being flexible as a business and simply changing because that is what everyone else is doing. Far more important is that their requirements are clearly understood, and that the right tool is selected for the job in hand.
As my old boss Steve, a seasoned programme manager, used to say, "What's the problem we're trying to solve here?" Okay, his language was a bit more colourful than that, but the point still stands. As we look at the waves of so-called innovation and try to decide whether they have any relevance to our businesses, let's first and foremost focus on the challenges we face, and how best to deal with them. In a couple of years' time, when the dust has settled on the latest hyped-up bandwagon, if the challenges still remain then we won't have been doing our jobs, even if the agenda item of "keeping up with innovation" has been achieved.
June 2010
This section contains posts from June 2010.
Conference keynotes, and the gulf between simple and complex
2010-06-07
Another day, another keynote. This time it's Microsoft TechEd in New Orleans, and Bob Muglia is on stage being all thrilled about all those new and improved aspects of IT. Three weeks ago it was Ajei Gopal at CA World. Today, elsewhere in the US, IBM is kicking off its Rational conference, and I believe from my Twitter feed that SAS has a gig, somewhere.
It's important to remember that most normal people don't spend their lives attending IT conference keynotes. The real attendees at such conferences will have paid through the nose to come, and there may well have been a little lottery between team mates about who should go. I know that back in my days as a punter rather than an analyst, attending conferences was very much a one-off, not just for cost reasons but also because I would have been too busy actually running stuff. (As an aside, it's one of the reasons analysts exist.)
So, most people in the rather humid hall I am sitting in right now will not have the luxury of comparison between multiple keynotes - and indeed (based on the "time" thing) may not have had the luxury to stop and think about some of the things that are being presented. But, for better or worse, I do - and so it is that I inevitably start to think about how this keynote compares with those in the current batch, which appear like waypoints stretching back through the history of conference time.
Now I wouldn't be so trite as to give marks out of ten for individual keynote presenters - though I am reminded of some of those dodgy talent competitions in the Seventies. Actually, even better than that, I wonder what Simon Cowell et al would make of Messrs Muglia and Gopal. The hand would slice, the brow would furrow and the head would shake, before the flabbergastingly obvious, albeit reasonably accurate commentary. "Haven't we heard this before?" he would ask. Well, perhaps, but Simon never did understand the idea of paradigm shifts.
Strip away the repetition, the pseudo-excitement and exec cameos, and there is generally some good, interesting stuff in the average keynote. This one is no exception - new product announcements, capabilities that are now probable (whereas in previous keynotes they were just possible), yadda yadda. Right now for example, data visualisation is having its time in the sun - and to be fair Microsoft has a good story to tell around its SQL Server tools.
However, one area that's pretty fundamental to IT never seems to get a mention at keynotes. All these new-and-improved capabilities are based on a core premise that they can exist in some kind of isolation from whatever else is in the IT environment. The irony, of course, is that they never can - even small companies have existing technology investments, and anything new will have to work with all that old, and seemingly inferior, stuff. This means both an integration impact to get new working with old, and a migration impact as the lucky elements of IT get rejuvenated and/or replaced. (Hat tip to Roy Illsley for mentioning licensing as well, just before I posted this.)
Even if it were possible to adopt a green-field approach, the sheer complexity of IT very quickly comes back to bite anyone (particularly in marketing) who underestimates it. I was in a conversation a few weeks ago with a systems engineer at the Symantec Vision conference (yes, there was a keynote there as well, from Enrique Salem). The engineer went through a quick summary of the complications of failing over a production environment, in terms of servers, networking and storage, the level of hard coding still required, and the potential for error if any part of the systems were even slightly different. It was a welcome reminder of just how complex things really are, in most, if not all IT environments today.
I won't go on as the keynote is due to finish soon. But as well as "star quality" or whatever else we feel should be on the keynote scorecard, let's have a row for "ability to deliver in existing environments". Perhaps it is the absence of this as a metric which has led so many great ideas, presented at keynotes past, to turn to dust - or worse, to be rolled out again a few years later under a new terminological banner. IT is complex, and will remain so however hard people try to present it as something that is becoming simpler. And the sooner industry figureheads can take this on board and talk about it accordingly, the better.
September 2010
This section contains posts from September 2010.
The great cloud joke
2010-09-06
Once upon a time, in the cloud kingdom of Cloudalon, there lived a cloud king. One cloudy day this cloud king, who was cloudily named Cloud Cloud the fifth, called his cloud son, the cloud prince Cloud Cloud the sixth, over to his cloud side.
"My cloud son," the cloud king said to cloud prince Cloud Cloud the sixth, "in another cloud kingdom a very short cloud distance away there lives another cloud king. This cloud king has a cloud princess that I think that you should marry. Here she is, the cloud Princess Cloudina Cloud of Cloudonia." Cloud prince Cloud Cloud the sixth, upon seeing the cloud princess Cloudina Cloud of Cloudonia, agreed to marry her. And so, on the next cloudy day, in the cloud garden, Prince Cloud Cloud the sixth stood by the cloud altar and watched his cloud bride-to-be, the cloud princess Cloudina Cloud of Cloudonia, march down the cloud aisle wearing a cloud wedding dress and carrying a bouquet of cloud flowers. Just as the cloud princess Cloudina Cloud of Cloudonia reached the cloud altar, however, an evil cloud magician appeared and cast a cloud spell on the cloud princess. In a cloudy moment, the cloud princess Cloudina Cloud of Cloudonia had vanished.
"What have you done?" cried the cloud prince Cloud Cloud the sixth.
"I have sent the cloud princess Cloudina Cloud of Cloudonia to a cloud cave in the cloud mountain Mount Cloudtop. There, in her cloud cave, she is guarded by the cloud dragon Cloudfang. The cloud princess Cloudina Cloud of Cloudonia is cloudily safe there, but the cloud dragon Cloudfang, will not let her rejoin the cloud kingdoms of Cloudalon and Cloudonia."
"You are cloudily insane," the cloud prince Cloud Cloud the sixth said to the cloud magician, but the cloud magician had vanished.
"What are you going to do, my cloud son?" the cloud king Cloud Cloud the fifth of Cloudalon asked his son, the cloud prince Cloud Cloud the sixth.
"I am going to take my cloud horse, Cloud Lightning, and my cloud sword, Cloud Death, and go slay the cloud dragon Cloudfang and rescue the fair cloud maiden the cloud princess Cloudina Cloud of Cloudonia."
"May the cloud God speed you well on your cloud journey," the cloud king Cloud Cloud the fifth of Cloudalon cloudily blessed his cloud son, the cloud prince Cloud Cloud the sixth. With that, the cloud prince Cloud Cloud the sixth got his cloud sword, Cloud Death, and his cloud horse, Cloud Lightning, and rode off to the cloud mountain of Mount Cloudtop and the cloud cave thereon, in which lived the cloud dragon Cloudfang and his cloud prisoner the cloud princess Cloudina Cloud of Cloudonia.
The cloud hero of this cloud story, the cloud prince Cloud Cloud the sixth of Cloudalon, rode his cloud horse Cloud Lightning over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the cloud mountain of Mount Cloudtop and the cloud cave thereon, in which lived the cloud dragon Cloudfang and his cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Eventually, the cloud horse Cloud Lightning got tired, so the cloud prince Cloud Cloud the sixth carried his cloud horse Cloud Lightning over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the cloud mountain of Mount Cloudtop and the cloud cave thereon, in which lived the cloud dragon Cloudfang and his cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Finally, the cloud prince Cloud Cloud the sixth reached the cloud mountain Mount Cloudtop. There, in a cloud cave on top of the cloud mountain, Prince Cloud Cloud the sixth of Cloudalon could see the cloud smoke from the cloud dragon Cloudfang who lived in the cloud cave in which the cloud princess Cloudina Cloud of Cloudonia was a cloud prisoner. Our cloud hero, the cloud prince Cloud Cloud the sixth of Cloudalon, climbed the cloud mountain Mount Cloudtop and slew the cloud dragon Cloudfang as the cloud beast slept cloudily. The cloud prince Cloud Cloud the sixth of Cloudalon rescued the cloud princess Cloudina Cloud of Cloudonia. But their cloud adventures had not yet come to their cloud close. They still had to get home cloud and sound.
So...
The cloud hero of this cloud story, the cloud prince Cloud Cloud the sixth of Cloudalon, and the newly rescued cloud heroine, the cloud princess Cloudina Cloud of Cloudonia, rode the cloud horse Cloud Lightning over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the same cloud mountain of Mount Cloudtop which in the cloud cave thereon the cloud prince Cloud Cloud the sixth slew the cloud dragon Cloudfang and rescued the cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Eventually, the cloud horse Cloud Lightning got tired, so the cloud prince Cloud Cloud the sixth carried his cloud horse Cloud Lightning and the newly rescued cloud heroine, the cloud princess Cloudina Cloud of Cloudonia, over many cloudy miles along many cloud roads and through many cloud fields. He crossed many cloud streams and many cloud mountains, though none of them were the same cloud mountain of Mount Cloudtop which in the cloud cave thereon the cloud prince Cloud Cloud the sixth slew the cloud dragon Cloudfang and rescued the cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Eventually, The cloud hero of this cloud story, the cloud prince Cloud Cloud the sixth of Cloudalon, got tired, so the newly rescued cloud heroine, the cloud princess Cloudina Cloud of Cloudonia, carried the cloud horse Cloud Lightning and the cloud prince Cloud Cloud the sixth of Cloudalon over many cloudy miles along many cloud roads and through many cloud fields. She crossed many cloud streams and many cloud mountains, though none of them were the same cloud mountain of Mount Cloudtop which in the cloud cave thereon the cloud prince Cloud Cloud the sixth slew the cloud dragon Cloudfang and rescued the cloud prisoner the cloud princess Cloudina Cloud of Cloudonia. When the cloud prince Cloud Cloud the sixth crossed these cloud mountains, he trudged his way through cloud snow. Cloud sand lined the cloud deserts he crossed, and there was cloud water in the cloud oases.
Cloud alases and cloud alaks, though, for it seems our cloud heroes, the cloud prince Cloud Cloud the sixth of Cloudalon and the cloud princess Cloudina Cloud of Cloudonia got lost on their way home, for they wandered into the cloud kingdom of an evil cloud king, the evil cloud king Cloud Cloudonovov of Cloudovia. This evil cloud man had the cloud heroes, the cloud prince Cloud Cloud the sixth of Cloudalon and the cloud princess Cloudina Cloud of Cloudonia, arrested and taken to be driven away and thrown into the cloud dungeon. Just before the evil cloud king Cloud Cloudonovov of Cloudovia put them on the cloudy transport, however, he said....
“Cumulon, im bus!”
With many thanks to http://www.sccs.swarthmore.edu/org/swil/JoelPage/purplejoke.html
October 2010
This section contains posts from October 2010.
Onward and upward
2010-10-08
As you may have seen by now, I have taken the decision to take a step back from working at Freeform Dynamics.
When I first became an analyst some ten years ago now, little did I know what the journey would have in store. As far as I was concerned with my background as an IT manager, here was an opportunity to share the lessons and experiences I had learned with a much wider audience. Joining Freeform Dynamics at first glance appeared to be more of the same – of course I would get the opportunity to continue working with Dale and Helen, but being an analyst was being an analyst, right?
Freeform’s model was simple – to offer firms custom-designed research at a more accessible price than previously had been available, distributing it using media relationships and the still-nascent social Web. The ‘trouble’ came as a deeply shared goal emerged from our research programme – to focus on how organisations were really using IT, rather than how anybody might want them to. From within the company, this seemed a most natural thing to do, though over time it became apparent just how out of kilter this sometimes was with the broader landscape of IT marketing.
All the same, we plugged on, using the funding from vendor-sponsored studies to broaden our own understanding of how IT is done. We learned, we debated, we discovered, we tested hypotheses and opinions and we fed it all back into the machine. The partnership with online news site The Register gave us unprecedented access to those at the coal face of IT – from senior decision makers to programmers, operations staff and deeply technical specialists. Working with The Register has been a fantastic litmus test, as its audience does not hold back if what we are saying does not resonate!
Being small also made us nimble and we have been able to flex with the times, riding out both the downturn and the arrival of social media to our advantage. Early on, we had taken the active decision not to write white papers – these are vendor-sponsored short documents that actively promote a specific product or approach – and to this day I cannot get my head around how they are anything other than paid-for advertising.
However, we did spot a gap in the market for orientation guidance – that is, simple-to-understand, balanced explanations of a technology area: what it was, where it was useful, and how to start thinking about it. As the market for hard-core research dried up during the downturn, we identified alternative revenue streams initially founded on basic primers, which led to working with Wiley on the 'For Dummies' imprint. This in turn gave us the idea of setting up our own ‘Smart Guides’ – these have been very successful, and much-coveted.
Freeform Dynamics has grown from success to success, even though it has gone through the same tough times as everyone else, and despite internal challenges such as dealing with Dale’s illness. When he offered me the CEO role two years ago I knew I had no choice – it was an offer I couldn’t refuse, not just because of the opportunity, but also because I couldn’t see any clear future for the company if I said ‘no’. And I haven’t regretted a single moment; it was one of the best decisions I have ever made.
Internally however, we all knew it was not going to be forever. I would love to be the kind of person who could take a role and stick with it for twenty years, but I’m not – on the upside (this is what I like to tell my wife), it’s the breadth of experience that makes me what I am. I initially told Dale and Helen I would join for two years, and then we would see what happened; while I in no way feel I have outstayed my welcome, I have had to ignore or otherwise put off a number of other opportunities and activities. Even as part of such a sterling management team, running Freeform Dynamics has been a full-time job in every sense.
In practical terms, the timing of my departure was never going to be ideal. At this point however, we have defined a structure, approach and organisation which enables the company to lead with its own agenda, rather than being buffeted by the agendas of the industry; we have a more pro-active approach to growing client relationships; we have a renewed focus on research as budgets start to free up, and finally, a reputation that is second to none as a ‘voice of reason’ for the industry. The mainstay of the exit plan was to leave things in good shape, and I believe that is exactly what we have achieved.
I shall be spending the next couple of months dealing with some immediate side projects – yes of course, there is a book in the works, and I’ll be working with Dale and Andy on some research we have been doing into more consumer-related aspects of technology. I’ll also be taking the opportunity to stand back, do some research and have a proper think about it all, something that just hasn’t been possible until now. I’m delighted to remain an active affiliate of the Freeform team, and I am not ruling out anything for the future.
Above all, I am immensely proud of everything we have achieved as an organisation. The Freeform team is undoubtedly one of the best I have ever worked with, not just in terms of competence, which equals any analyst firm out there, but also philosophy. Rather than retreating to the ivory tower so common among analyst firms large and small, Freeform Dynamics occupies the same space as the people it ultimately serves – those who define, procure and deploy IT systems and services for their organisations. To my very core and whatever happens in the future, I will always be proud to be a card-carrying Freeformer and I look forward to whatever opportunities may arise to work with Dale, Helen and the team again.
Onward and upward indeed!
Rush-Chemistry now available in paperback
2010-10-24
As mentioned previously, I'm delighted to announce that Rush-Chemistry is now available in paperback. This book first came out in hardback in 2005, but is only just becoming available as a soft-cover due to the sad death of my friend and mentor Sean Body, who ran the publishing company Helter Skelter. The book has been re-proofed and all of the typos, inaccuracies etc. (as listed in the addendum) have been fixed.
UPDATE: If anyone wants to buy directly from me (signed or otherwise), I have ordered a number of copies from the publisher so let me know by comment below or by emailing/Paypal to jon-at-joncollins-dot-net. I will send the books out as soon as they arrive!
Prices including P&P are as follows:
- UK - £11
- Western Europe - £15
- US/Rest-of-World - £18
Rush – Chemistry is the complete history of the world’s favourite Canadian rock band. The book follows Geddy Lee, Alex Lifeson and Neil Peart from their schoolboy days right up to the global success of their thirty-year anniversary tour.
Here’s the official text:
Against a background of disinterest from the media and a refusal to compromise their music, Rush’s success was by no means guaranteed. Since the beginning, only the determined efforts and downright stamina of the band members and those around them were sufficient to counter the wall of silence. Sharing a single-minded determination to take on the system and win, Geddy, Alex and Neil have never rested on their laurels. Pushing themselves to achieve technical excellence, never avoiding the challenge of taking on new musical influences, through huge changes of fashion and major personal tragedy, the entity we know as Rush has endured. Thirty years on, the band is still creating new music and packing arenas and stadiums around the globe.
Meticulously researched over three years, Chemistry draws on over 50 new interviews with those closest to the band. As the most detailed biography of Rush ever written, this book pulls together the threads and investigates the reasons that have enabled this band to succeed against the odds.
November 2010
This section contains posts from November 2010.
Running the eBook gauntlet
2010-11-01
As I start editing a new book edition, I find myself forced to think about eBooks once again. After all, the printed page is so last year - isn't it? Whether or not print will ever go out of fashion (no doubt a topic for another post), it's equally clear that electronic books are here to stay.
As an author, then, what's the quickest way of getting into electronic print? The first caveat is that while things are undoubtedly simpler than they were a few years ago, eBooks should be seen as a work in progress. Before Amazon and Sony's eBook readers first came to market, there was no agreed standard for the eBook file format. Adobe PDF was the most commonly cited, but it didn't offer what is now seen as fundamental for offline e-readers - the ability to change text size, with pagination adjusting accordingly.
Today's eBook readers have standardised around a limited set of formats. EPUB is the 'most' standard, as adopted by a number of e-readers including Apple's iPad. The Amazon Kindle uses a derivative of the 'legacy' Open eBook format, known as AZW. Excuse the acronymitis but I hope it provides a point of reference if you see the terms elsewhere.
Writers want to write, and any moment they spend faffing around with technology is at best a distraction, and at worst a blocker. In traditional models, publishers have handled most of the gubbins, but have themselves acted as the main blocker to publication - while this has had the benefit of a level of quality control, it hasn't been the sharpest of instruments. Surely, if an author has a (good) short story that was never likely to be published anyway (or if it was, only to be buried in an anthology), isn't a better route providing a simple workflow with minimal intervention from anyone?
Googling on "how to create an epub ebook" yields long-winded lists of tools such as the one here (from the producers of Stanza - but it hasn't been updated in over a year) or here. JediSaber has a tutorial on how to make an EPUB eBook, but it is not for the faint hearted! While many tools are free, there's also the option of shelling out for a tool such as Adobe InDesign - but unless CS5 is suddenly fantastically simple, that won't be for the faint hearted either.
This isn't helping so far is it? OK, here's the rub. I'm an author, I want to write a book using a 'standard' word processor such as Microsoft Word. Then I want, without sweat, tears or hacking XML tags, to see said book appear on my e-reader device. For the love of God, isn't there a simple way of doing this right now?
Funnily enough, the simplest route today appears to lean on that old stalwart of electronic publishing, the PDF, then using a PDF to EPUB converter. There appear to be a number of options, here, here and here. And here for Mac. Which may just offer the answer.
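As an aside on why such converters are feasible at all: an EPUB is at heart just a zip archive with a prescribed layout. The sketch below, in Python, builds a bare-bones one by hand. The function name, file names and metadata are my own invention, and it deliberately omits the NCX table of contents that strict EPUB 2 validation requires, so treat it as an illustration of the structure rather than a production tool.

```python
import zipfile

def make_minimal_epub(path, title, xhtml_body):
    """Build a bare-bones EPUB: a zip whose first entry is an uncompressed
    'mimetype' file, plus container.xml, an OPF package document and one
    XHTML content file. Illustrative only - skips the NCX, cover, etc."""
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:example-0001</dc:identifier>
  </metadata>
  <manifest>
    <item id="text" href="text.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="text"/>
  </spine>
</package>"""
    xhtml = f"""<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>{title}</title></head>
<body>{xhtml_body}</body>
</html>"""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed,
        # so reading software can identify the package.
        z.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container, compress_type=zipfile.ZIP_DEFLATED)
        z.writestr("content.opf", opf, compress_type=zipfile.ZIP_DEFLATED)
        z.writestr("text.xhtml", xhtml, compress_type=zipfile.ZIP_DEFLATED)

make_minimal_epub("story.epub", "A Short Story", "<p>Once upon a time...</p>")
```

The one hard rule worth remembering is that first-entry mimetype requirement; everything else is just XML plumbing, which is why the conversion tools above can automate it.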
For authors who also want to make money from their writing, it may also be possible to miss out the middleman and let the service provider (such as Lulu or Smashwords) do the hard work. I'm just creating an account on Smashwords, which according to Joe offers the most straightforward way of creating a downloadable ebook. More news when I have it.
Epilogue to Suburbia eBook and Audiobook
2010-11-05
Well that worked! Experiments and a bit of homework with various electronic book mechanisms led me to Smashwords. As an experiment, I've uploaded the text from a short story I wrote a few years ago, and it's been converted into a variety of formats, which is pretty cool. While I've made it a paid item, you can read the whole thing here.
I've also recorded the audiobook version, for future experimentation - notably with Podiocast.com. Watch this space - and click here to listen!
Non, merci!
2010-11-06
Et que faudrait-il faire ?
Chercher un protecteur puissant, prendre un patron,
Et comme un lierre obscur qui circonvient un tronc
Et s'en fait un tuteur en lui léchant l'écorce,
Grimper par ruse au lieu de s'élever par force ?
Non, merci. Dédier, comme tous ils le font,
Des vers aux financiers ? se changer en bouffon
Dans l'espoir vil de voir, aux lèvres d'un ministre,
Naître un sourire, enfin, qui ne soit pas sinistre ?
Non, merci. Déjeuner, chaque jour, d'un crapaud ?
Avoir un ventre usé par la marche ? une peau
Qui plus vite, à l'endroit des genoux, devient sale ?
Exécuter des tours de souplesse dorsale ?...
Non, merci. D'une main flatter la chèvre au cou
Cependant que, de l'autre, on arrose le chou,
Et donneur de séné par désir de rhubarbe,
Avoir son encensoir, toujours, dans quelque barbe ?
Non, merci ! Se pousser de giron en giron,
Devenir un petit grand homme dans un rond,
Et naviguer, avec des madrigaux pour rames,
Et dans ses voiles des soupirs de vieilles dames ?
Non, merci ! Chez le bon éditeur de Sercy
Faire éditer ses vers en payant ? Non, merci !
S'aller faire nommer pape par les conciles
Que dans les cabarets tiennent des imbéciles ?
Non, merci ! Travailler à se construire un nom
Sur un sonnet, au lieu d'en faire d'autres ? Non,
Merci ! Ne découvrir du talent qu'aux mazettes ?
Être terrorisé par de vagues gazettes,
Et se dire sans cesse : « Oh, pourvu que je sois
Dans les petits papiers du Mercure François » ?...
Non, merci ! Calculer, avoir peur, être blême,
Préférer faire une visite qu'un poème,
Rédiger des placets, se faire présenter ?
Non, merci ! non, merci ! non, merci !
Mais... chanter, rêver, rire, passer, être seul, être libre,
Avoir l'oeil qui regarde bien, la voix qui vibre,
Mettre, quand il vous plaît, son feutre de travers,
Pour un oui, pour un non, se battre, - ou faire un vers !
Travailler sans souci de gloire ou de fortune,
À tel voyage, auquel on pense, dans la lune !
N'écrire jamais rien qui de soi ne sortît,
Et modeste d'ailleurs, se dire : mon petit,
Sois satisfait des fleurs, des fruits, même des feuilles,
Si c'est dans ton jardin à toi que tu les cueilles !
Puis, s'il advient d'un peu triompher, par hasard,
Ne pas être obligé d'en rien rendre à César,
Vis-à-vis de soi-même en garder le mérite,
Bref, dédaignant d'être le lierre parasite,
Lors même qu'on n'est pas le chêne ou le tilleul,
Ne pas monter bien haut, peut-être, mais tout seul !
December 2010
This section contains posts from December 2010.
Reading up on VR Spheres
2010-12-08
No US road trip would be complete without a visit to Las Vegas, and so it was this summer we found ourselves in the deliciously unsalubrious Excalibur Hotel. Serene, sophisticated, refined - the Excalibur was none of these, a mish-mash of worn-carpet kitsch and tired tourists negotiating wheelie-bin-sized suitcases around the garish ranks of slot machines.
Standing innocuously near the entrance were two black nylon spheres, a small braided rope separating them from the public. "Back later" said the sign, and we did indeed come back ten days later, following a round trip of the Grand Canyon. With a couple of hours to kill before our flight we found ourselves inexorably drawn back to the spheres and the virtual reality game they controlled.
As Ben played, I got chatting to the cashier-cum-founder of VirtuSphere. Turns out the principle has been around for a while, but technology is only just catching up, what with bandwidth requirements for 3D wireless headsets and all. Put simply, each sphere operates like a giant mouse ball, only in this case a human is being the mouse, encaged within the ball. To paraphrase Morpheus, the future is not without a sense of irony.
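The mouse-ball maths, incidentally, is just arc length: rotation times radius gives how far the walker has "travelled". A quick sketch (the 1.3-metre radius below is my guess, not anything from VirtuSphere's spec sheet):

```python
import math

def sphere_to_displacement(radius_m, pitch_rad, roll_rad):
    """Map sphere rotation to walker displacement, exactly as an old
    mouse ball maps rotation to cursor movement.

    pitch_rad: rotation (radians) about the sideways axis -> forward motion
    roll_rad:  rotation (radians) about the forward axis  -> sideways motion
    Arc length travelled = radius * angle.
    """
    forward = radius_m * pitch_rad
    sideways = radius_m * roll_rad
    return forward, sideways

# Half a revolution of pitch in a 1.3 m sphere moves the walker
# radius * pi metres forward - a little over four metres.
forward, sideways = sphere_to_displacement(1.3, math.pi, 0.0)
```

Which also hints at why the spheres are so big: a larger radius means a gentler curvature underfoot and more distance covered per rotation.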
A bit of research tells me that VirtuSphere is not the only kid on the block. Here in the UK, the University of Warwick has also been developing a 'cybersphere' for the past decade. No idea who patented it first (I'm sure we'll find out) but it does raise the question of whether the virtu-cyber-sphere thing could have more mainstream uses, once the cost of entry drops below a certain point. I'm sure I'm not the first to consider how a transparent sphere might be used alongside an Xbox Kinect, for example.
And of course, the Inter Orbis connection with the two spheres did not go unnoticed...
Print on demand an unqualified success
2010-12-13
Of course. There are hundreds of thousands of people who know this already. But it doesn't appear to be common knowledge in my peer group that print-on-demand is a perfectly viable way of creating something that looks, and feels, to all intents and purposes, like a book.
When I tested out lulu.com a few weeks ago, I had the advantage of owning a pre-formatted set of PDFs - the print-ready files for Separated Out. Lulu is a US-based service, so I uploaded the files, hit the big print button in the sky, and waited a month before the Amazon-like package dropped through the letterbox. Lulu's version was printed on cheaper paper (which I had selected), and the cover was very slightly out of kilter. However it was a book, to all intents and purposes.
I don't underestimate the skill required in layout, font selection and so on, as well as editing - this is not the end of the publishing industry. Given such services however, quite why we still have the term 'out of print' is beyond me. As well as, "Not available in this country."
Final note: I asked friends what UK-based services might exist and I was recommended to two: Blurb.com and AuthorsOnline. I also found PrintOnDemand Worldwide.
Virtual worlds redux
2010-12-13
(post prompted by looking at PlaneShift, and wondering how things had progressed since I last played a MMORPG).
There seem to be three universal truths: the first that some things change faster than expected; the second that others change slower than expected; and the third, that we usually get them the wrong way round. As the esteemed Richard Holway was reputed to have said, "I'm great at predictions; I'm also p***-poor at timing." As I dug through one or two old blog posts about virtual worlds, then, I was struck not just by how long ago I wrote them, but also by how little has changed since I did. Which isn't to invalidate any of the principles, more to illustrate just how hard it is to gauge just what is round the next corner. It's worth remembering, for example, that Facebook was only just being launched beyond educational establishments at that point.
Here's a quite interesting presentation from 2008 which would be worth revisiting from today's context.
Update: Further reading up about Habbo led me to this post about Anonymous. It's a small virtual world.
2011
This section contains posts from 2011.
January 2011
This section contains posts from January 2011.
Found in a field - a piece of pot
2011-01-26
Just handed over a piece of pottery to the Corinium museum - not quite Roman treasure but it still brightened up a dog walk!
According to Alison, the curator, "It appears to be a post-medieval rim sherd from a large vessel. It could even be a fragment of Ashton Keynes ware dating to the 18th century. A later 17th-century assemblage of Ashton Keynes ware was found at Somerford Keynes, Gloucestershire."
Alison also provided the following background from Ed McSloy:
"Archaeological work ahead of housing development at Somerford Keynes, Gloucestershire found a pit containing an assemblage of late 17th-century pottery, almost exclusively products of the Ashton Keynes kilns. The site lies in the adjoining parish to Ashton Keynes, Wiltshire, well-known as a producer of glazed earthenware pottery, and particularly important in the supply of utilitarian vessels to Cirencester and Gloucester between the 16th and 18th centuries. The composition of the group mirrors closely that of the urban markets and, with the addition of a 'chicken feeder' form is a likely representative of the kiln repertoire from/in this period. The dominance of Ashton Keynes products in this group, which includes a number of seconds, suggests that local domestic requirements for ceramic could be met almost entirely by the nearby kilns."
February 2011
This section contains posts from February 2011.
Jack Crymes RIP
2011-02-04
As part of a recent re-organisation I was running through my addresses, to find that John M. Crymes Jr. died in May 2009. Jack provided technical assistance to Rush on the Canadian leg of their ‘Exit Stage Left’ tour. Here's an updated obituary from his university alumni site:
"John M. Crymes, Jr. (EE ’68) of Marin County, Calif., died in May 2009. He studied electrical engineering at the University of Virginia, qualifying in 1968. A pioneer of professional remote audio recordings, he designed and built the world’s first mobile audio recording truck in 1974. He worked with, among others, Bob Dylan, Bruce Springsteen, Eric Clapton, Paul McCartney, Natalie Cole and the Grammy Awards."
One day I'd love to write the book about all the engineers who quietly went about their own jobs, supporting the superstars without need or expectation of any part of that fame. So many stories.
The day that Apple died
2011-02-21
But February made me shiver, with every paper I delivered...
I have never believed that any but a tiny-but-psychopathic subset of IT executives actually set out to achieve world domination. The majority, whatever their aspirations, are destined to do reasonably well, to achieve good things, to shine for a while before one of the corporate monoliths buys them out, gives them an SVP and a pat on the back. They stay for a while then head down to San Diego, spending the next few years scrubbing the deck on their own yacht before they get bored and try to do it all again.
Every now and then, however, someone gets lucky. The seemingly alchemic combination of timing, functionality, design, cash flow and damn good marketing creates a perfect storm, and the wannabe global corporation suddenly finds itself in the position it had dreamed about, hoped for, even planned for. Cue the champagne, but don't reach for a second glass as you discover that the even bigger challenge compared to "getting there" is "staying there".
There's another phenomenon at play. All those mid-sized IT companies sail as close to the wind as they possibly can when it comes to getting one over on the competition. Sales tactics, marketing games and architectural shackles are perfectly valid tools - if you want to win, you have to do what you can to ensure the competition loses. It's all very well until, well, until you actually do win. And then, almost before you take your first sip of the Piper-Heidsieck, you are exhibiting monopolistic behaviour.
At which point, you have a choice. Do you ride it out as long as possible, making hay while the sun still shines, building a cash pile in the knowledge that when the party is over you can, duly chagrined, rebuild your reputation and keep going at the level you will have achieved? Or do you... oh never mind. There is no option B of course - as you quickly discover, with the markets baying for ever-increasing levels of shareholder value. With no other choice you press on, and over time you even start to believe that you're doing nothing wrong.
We've seen it several times before. Microsoft played some fantastically underhand games in the 1990s, destroying any chances that competitors in the browser, office automation or music spaces would get a look in. They're still paying for it now - a reputation, once tarnished, is very difficult to scrub clean. It was with some sense of irony that we watched the court cases reach a conclusion, years after the 'crimes' were committed. The world had already moved on, and new competitors had changed the landscape to such an extent that the judgements no longer had any relevance.
We could talk about Oracle, which has managed to stay just beneath the radar of corporate scrutiny, despite having systematically erased all but one of its competitors (The only explanation is that the company is just too boring to garner significant media attention, and thus judicial interest). Or Sun Microsystems, which played all the games and did very well for a while, but then bet its core hardware business on the dot-com boom, and was irretrievably brought to its knees when the whole thing imploded. Rough justice again - the company was ten years too early for the cloud "revolution", and would no doubt have been a major supplier to the acreages of data centres springing up today.
Today however, no company is more in the spotlight than Apple, that darling of the media, the upstart that held onto its vision, with its maverick CEO and oh-so-secretive-but-thrilling image. There is a point in every company's history that can be seen as "defining" - a point of no return, however well business is doing financially. In Apple's case it was last week's news (I fail to find an exact date, but I think it was Thursday... UPDATE: The Readability blog marks it as Friday 18th February) - that the company was implementing a subscription model which took 30% of media companies' revenues should subscriptions be purchased from within an app.
The outcry has been deserved, and genuine. They f*cked us over, said one exec. Various reports confirm that the powers that be are looking to investigate the legality of the move - and perhaps in 15 years' time, with intense (and expensive) lobbying from all sides, they might reach a definitive conclusion. But that's not the point. For all its sexy products, its developer ecosystem or its carefully thought out approach to interface design, Apple has - like the guy in Run Fat Boy Run (in the green) who trips his hapless friend - shown its true face.
It's not the first time. The arrogance of the response to the iPhone 4 design flaw. The raising of the portcullis against Adobe Flash, rather than a dialogue on how to make it better. The constraints on apps in the App Store, and the lock-in experienced by each and every iTunes/iPod/iPad consumer, taking things to levels way beyond what Microsoft dreamed of, but could never reach.
No doubt people will stick with Apple, at least for a while - from a subscriber perspective, little has changed, and the company does make rather nice devices. Be in no doubt however, about the stranglehold the company now has on a whole segment of the market, not to mention on media reporting in general. The shame, perhaps, is that in the future, the name and cheeky logo will be linked to a company which went out of its way to f*ck its suppliers. Given the growing levels of bad feeling that pervade a company still (financially) on its way up, it's unlikely there will be too many people rushing in to help as its trajectory starts to tip.
Leaving the last word to Don McLean.
I met a girl who sang the blues
And I asked her for some happy news
But she just smiled and turned away
I went down to the sacred store
Where I'd heard the music years before
But the man there said the music wouldn't play...
Quick music industry (on the take) take
2011-02-21
I was alerted to the article “The death of the music industry” by @featuredartists on Twitter earlier today.
My reply:
“@FeaturedArtists Interesting, upcurve 89-95 looks incongrous as well. Returning to realistic level vs failure to monetise? NB shipments only”
It was only a few hours later that a better answer revealed itself – oh the serendipity, as a friend on Facebook highlighted an article on ThisIsLondon. To quote:
“So dry your tears at the thought of all those struggling record companies. The truth is that when a previous new technology called compact discs came along, they drove up margins to milk fans as they repackaged back catalogues. This made them so rich and complacent that they turned down the opportunity presented by Napster – which they could have bought for £10 million to dominate the digital market – and allowed Apple to steal their industry.”
Nuff said. For now.
March 2011
This section contains posts from March 2011.
It all began with the bright lights…
2011-03-25
It doesn’t seem at all wrong that as I write this, I should be on my way to a music convention in the name of Marillion. Going back a few years now, as the millennium approached I remember buying the album ‘Marillion.com’. I was bowled over by the inside cover – hundreds and hundreds of passport photos, faces of the fans on whom the band had become entirely reliant for its income. Here was a symptom of something far deeper than selling recordings to casual listeners. Whatever was going on, I knew, I wanted to be a part of it.
Meanwhile some big changes were taking place in my working life, as I stepped away from hands-on IT and became an industry analyst. These were exciting days for technology: the ‘.com’ had arrived, and what a splash it caused. The good times were not to last of course, as for a thousand reasons, the over-inflated bubble of expectations collapsed and many who had gambled on the promise of e-commerce lost their shirts.
The visions of online nirvana weren’t necessarily wrong, just architecturally incomplete, technically premature and economically unsustainable. Despite the dot-bomb, many of the principles have continued to be developed – and while some new brands such as Amazon and eBay survived to become household names, many other, older organisations have reaped the rewards of their evolving online presence. This evolution continues.
As I once again start out on a new journey, things don’t seem all that different to how they looked 12 years ago. Today’s commentators talk about cloud computing like it was in some way new, and social networking like it is an end game. Neither is true – both are just way-markers, brightly coloured handkerchiefs tied to sticks along the road. Look back and they are clearly visible, still fluttering. Look forward, and they are harder to discern particularly as both marketers and pundits try to re-direct traffic towards their own, agenda-led targets.
Technology is shaping the future for sure, but I don’t believe that humanity will fundamentally change – either to become something it is not, or some augmented version of itself, homo steroidiens. Rather, IT will have succeeded when it fades quietly into the background and enables people to be more what they are – bringing up their families, watching and playing sport, and indeed, listening to music and sharing the experience. Such things are fundamental to being human.
Technology can help, but all too often it still hinders. One challenge is to put the emphasis in the right place, on the goal, rather than the toolkit. That’s why I chose to step away from looking at technology per se, and why the next stage of my own journey will involve more of a focus on how technology enables communities to exist, in all walks of life. Best practices learned by musicians and artists will be just as valid to governments, aid agencies and private companies – and vice versa of course.
Of course I will still have an interest in all things IT – just as a craftsman needs to know which chisel to use (though these are new crafts, and nobody can claim to be more than an apprentice). At the same time, the best thing about communities is that they encourage participation, and I am thoroughly looking forward to rolling my sleeves up, building, facilitating and above all joining in. More very soon, but for now I will close the computer and go enjoy a weekend with wonderful people and great music, which is ultimately what it is all about.
April 2011
This section contains posts from April 2011.
Unusual Suspects
2011-04-09
To coincide with our Summer Garden Party bash, we're having some mugs made with Dazz Newitt's homage to Messrs Wilkinson and Glover, as appeared on the cover of Separated Out. It would be good to get an idea of quantity before I put the order in - so, if you are interested please let me know by emailing jon at joncollins dot net or as a comment/message, then I'll know how many to order. All I need are names, quantities and emails for now!
Pricing shouldn't be more than £10, including UK shipping for those who can't make the event. Additional costs will apply for international shipping. Any profits will be donated to Nordoff Robbins, a music therapy charity, so it's all in a very good cause as well!
Click on the picture for a bigger version - and apologies for my dodgy photoshop skills ;)
May 2011
This section contains posts from May 2011.
Should we trust the (social) network?
2011-05-24
In the wake of the Playstation Network debacle, what should we be thinking about trust? Humans are inherently tribal, and the roots run deep. Benjamin Disraeli might have decided to be "on the side of the angels" rather than apes when faced with Darwin's (r)evolutionary theories, but you don't need to wade through the Origin of Species to see examples of humanity functioning at a more primitive level.
Adults, like babies, can become bad-tempered when they are hungry or tired; people parade themselves like birds for the continuation of the species; stressful circumstances cause the secretion of hormones prompting fight-or-flight behaviours that can be difficult to over-ride. Indeed, recent research such as that quoted in the New Scientist suggests that what we believe are conscious actions are registered in the brain shortly after they are enacted, undermining our very sense of free will.
The relevance to modern technology and how we use it for community building is that, however thickly we spread layers of social networking, collaboration and mobile sharing tools upon our daily lives, we are beholden to the same drives and needs as we have always been.
To take one popular tool, the question is not, "What would primitive societies make of Facebook?" but, "What would Facebook achieve for primitive societies?" To some this may suggest an interesting exercise in handing out iPads to Amazonian tribes, but primitive societies exist far closer to home - for all our plastics and silicon, we're living in them.
Discussions around community building, collaboration and social networking tools frequently arrive at the topic of trust, like it is something that can be created. While it has long been recognised as a facet of well-formed societies, trust is not something that can be just willed into existence. Confucius is reputed to have said that rulers need three resources - weapons, food and trust - which implies the need for a fourth characteristic - that of a ruler.
In today's theoretically flat-structured networks of relationships, there isn't always going to be somebody in charge, so trust needs to develop in other ways. People can be grouped around a common theme - storage managers, say, or fans of Gregorian chant - who may be prepared to forsake a little uncertainty about their peers. Once a group exists, its very existence can imply trustworthiness - rightly or wrongly. In addition, trust begets trust, which is why peer referrals are so important. "Let me introduce you" assumes a level of pre-vetting on the part of the introducer.
Trust may be difficult to gain, but it can be quite straightforward to lose. Sony may well ride the storm following the hacking of the PlayStation Network, which was restored this week (illustrating another trust factor: "the devil you know"). Other organisations, such as HMRC, can sit dubiously comfortable in their governance-inept positions, as UK taxpayers have nobody else to work with. Outside of the public sector, smaller firms with less-well-established brands - and here I'm thinking about the large numbers of wannabe providers of community and social tools - may not be so lucky. Once bitten, twice shy.
Gary McKinnon and the next 24 hours
2011-05-24
Before I start, I should say I’m all for justice. Find the bad guys, get the evidence, have a trial before jury, and if they’re found guilty, bang them to rights. Justice isn’t always such a simple, binary thing however – and our ancient legal system has evolved over the centuries to take into account that not everything can be as clear cut as people would like.
And so to the case of Gary McKinnon, the erstwhile hacker who has spent the past ten years on extradition row since he dared to break into a number of US military and NASA computers in 2001 and 2002. A few elements of Gary’s case are pretty clear cut: he openly admits that he accessed US military systems and had a good look around.
Where things are less clear are whether he undertook “the biggest military hack of all time” as the US authorities would have it. The whole area is subject to debate – which is why we have courts of law, evidence, juries and all that very important stuff.
So, why not just get Gary on a plane and in front of Judge Judy at the first opportunity? Things aren’t quite as simple in this context either. For a start, Gary has been diagnosed with Asperger’s Syndrome, a very real condition which could explain both why he didn’t fully appreciate the impact of his actions, and why he would not come through the court process psychologically unscathed.
There’s also the very real potential for mistreatment. Guantanamo and Bradley Manning’s current conditions both illustrate how US incarceration can stoop way below what our own government and people would consider humane. Our historical record may have dark spots as well, but that doesn't mean we should just go along with others.
The point is not whether or not he did access US computer systems – it’s whether someone with a medical condition should be handed over to a foreign authority with such a reputation. I would say no, and a number of far older, wiser and more legally astute people have said the same – such as Justice Mitting, who granted a Judicial Review into the lawfulness of Gary’s extradition.
Gary should be tried for his actions, but he deserves a fair trial in the UK that takes into account the complexities of computer hacking, and how thinking has evolved over the past ten years since he was spotted. When Gary accessed US systems, he did so by running a simple script that looked for blank passwords – and he found plenty. It is unlikely that such weaknesses would still exist today.
Meanwhile, plenty of examples of far more malicious hacking, for financial or other gain, have emerged which put Gary’s own actions into perspective. From TK Maxx to the PlayStation Network, these are cases which deserve weighty sentences should the perpetrators be caught. The fact that Gary has spent the past decade with a dark shadow hanging over his head should also be taken into account.
I met Gary in London in 2006, when we both took part in the Infosec Hackers’ Panel. Now as then, I thought to myself, he’s just a bloke who happened to be caught at a time when computer defences were weak and his curiosity got the better of him. Five years later, with Barack Obama in town we have an opportunity to put a full stop on the extradition and end this lengthy saga.
Along with the large numbers of people who have already voiced their support, I appeal to anybody who has ever opened a drawer to see what was inside to flag Gary’s case over the next 24 hours. It might make all the difference.
Mugs arrive Tuesday - get your orders in now!
2011-05-28
For all those who expressed an interest in procuring a mug bearing the "Unusual Suspects" design as featured on Separated Out, the good news is a couple of boxes of the blighters will be turning up on Tuesday. So get your order in now and you should be sipping your tea courtesy of your favourite characters by Friday! Combined profits from these and the Summer Garden Party in two weeks' time are going to Nordoff Robbins, the music therapy charity, so it's all in a good cause as well!
Mugs will be available at the SGP, or for those not able to come there's a Paypal link below the - ahem - mug shot. If you want any option not in the drop-down list please email me and I will let you know how to order.
EDIT: Just sold the last one!
June 2011
This section contains posts from June 2011.
Apple iTunes Match and the law of unintended consequences
2011-06-08
I can't imagine I was the only one who furrowed both brows on Monday, when Steve Jobs announced "one other thing" in the form of the iTunes Match service as part of its iCloud announcements. The best explanation I've found of it is here, with all its uncertainties - but in a nutshell, for $25 per year, you can upload any music from your hard drive, whatever its provenance, and access it from any iTunes-enabled device.
This is not some fanciful idea invented without consultation. A couple of weeks ago, reported Kevin Parrish at Tom's Guide, Warner and EMI had already been signed but Universal and Sony were still in negotiations. By Monday all four labels were on board, and the RIAA released a statement saying, "When a service comes along that respects creators’ rights and ignites fans’ appetites for their music collections, it’s a win for everybody.” There's no statement yet from the UK BPI, but it would be surprising if they didn't follow suit.
What does it mean in practice? According to Greg Sandoval at CNet News, "Details about the agreements are few, but here's how the revenue from iCloud's music service will be split, according to the sources: the labels will get 58 percent and publishers will receive 12 percent. Apple will take 30 percent." Given iTunes' 225 million subscribers, that could be a lot of money flowing back into the hands of labels and publishers if people buy into the model.
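For the back-of-envelope inclined, here's how that reported split would carve up the $25 annual fee - a sketch based on CNet's percentages, nothing more:

```python
def itunes_match_split(revenue):
    """Carve up iTunes Match revenue per the split reported by CNet:
    labels 58%, publishers 12%, Apple 30%."""
    labels = revenue * 0.58
    publishers = revenue * 0.12
    apple = revenue * 0.30
    return labels, publishers, apple

# Per $25 annual subscription: $14.50 to labels, $3.00 to publishers,
# $7.50 to Apple.
labels, publishers, apple = itunes_match_split(25.0)
```

Multiply those per-subscriber figures by even a fraction of the iTunes user base and it's clear why all four labels came round.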
Of course, those that want to spend on iTunes Match don't have to be music pirates that have accessed the majority of their music on BitTorrent. Most, if not all will have ripped music from CDs - a model which is still, theoretically, illegal in many countries. Still others will have accepted an album on a USB stick, or will have been burnt a copy on a CD. If anyone has obtained their digital music collection entirely through online purchase, speak now. No, I didn't think so.
Effectively, then, Apple's model legitimises the reality of how things are done at the moment, whatever may be seen as law-abiding according to the morass of legislation around copyright, IP, licensing and so on. It remains to be seen whether the T's and C's of the service offer an effective carte blanche - "By subscribing to this service you will no longer be considered a threat to the well-being of the recording industry," or similar words.
Apple's model is not unique in its - ahem - uniqueness when seen next to music streaming services such as Spotify and Last.fm, as well as newcomer Amazon. Each offers online services of various forms to multiple devices, for tens of dollars a year.
So, what could be the unintended consequences? Much attention is being paid to the legitimisation of piracy, but there's little to indicate this will lead to more illegal sharing. It seems unlikely in the extreme that the labels have allowed Apple's T's and C's to close the door to future litigation against the Pirate Bays and indeed, single mums who undertake illegal filesharing - particularly given that the iTunes Match model is yet to be proven.
Indeed, even if it did, would consummate pirates trust "the man" to honour the terms of the deal? As Music Week asks, "Will some form of fingerprinting technology be used to filter [pirated] tracks out?" Perhaps the question they should ask is, "Would such technology be used to identify recipients of illegally shared files, and incorporate mechanisms to scan the hard drive not just for music, but also for evidence of such activity?" As we all know, record labels have form in this area.
Frankly, I think any consummate pirate who took Steve Jobs' offer on without being 100% sure they weren't going to be hoodwinked would be completely insane. Like it's going to happen anyway. "Yeah, sure, just hand over the weapon, I won't shoot you." Bang. "But I thought you said you wouldn't shoot?" And as for it being a license to pirate - well, come on, really. "Great, I'll now acquire thousands of new tracks and upload them, and you'll watch me do it!"
Far more likely is that iTunes Match is a step on the way to more comprehensive media delivery services offered over the Web. In this regard iTunes Match looks like a poor cousin of Spotify - with which, for ten quid a month, you can listen to what you like, where you like, online or offline. OK, you could go for the cheaper option offered by Apple, and listen only to the stuff you happened to have on your hard drive at some point in the past. Want anything else and you'll pay by the track.
In other words, if you're already bought into the idea of a cloud-based music service, then what's to stop you going large? No doubt Apple knows this - it's unlikely that it will stop here. Trouble with streaming models, as with radio, is that they appear to offer a much lower return to the artists than record sales - appear, that is, because they are not subject to the same level of transparency as sales and performances.
Meanwhile, we are yet to see anti-brands of the form of Napster, Kazaa, Pirate Bay, MP3.com and so on appear, offering low-cost or no-cost streaming services from countries in which it is harder to enforce legislation. Even if any such services were 'clean' (i.e. they weren't also dens of malware and identity theft), they would offer no support to artists whatsoever. For the avoidance of doubt, I would recommend giving any such services a wide berth.
The bottom line is that, while we are undoubtedly lovers of the tactile, Apple's iTunes Match service is one more nail in the coffin of buying music as physical product. It may be great for the music industry and telecommunications companies, but I fear for artists tied into long-term contracts that majored on record sales; or worse (and I'm speculating here, but only a bit), for contracts that have been drawn up more recently to the advantage of labels, in the knowledge that revenue will come from subscriptions to the channel, not payment for individual content. If it is indeed a transitional model, both legislation and contract terms need to catch up fast to ensure iTunes Match can be the "win for everybody" that the RIAA claims it is.
Reflections on Still Canal Waters - Summer Garden Party 2011
2011-06-14
The only thing we lacked was the numbers. There, I’ve said it. Thirty to forty people came - a few over the number of tickets we sold - plus some local friends popped along for the afternoon to have a beer and listen in to the acoustic sets.
But, despite the turn-out, the Summer Garden Party was fabulous. Magnificent. Stunning. Those who know me, know I rarely get effusive, but this was one of those occasions. It isn’t just me – here’s a selection of reactions:
"Twas excellent, will have to bring a tent next time!" "That was the most awesome day/night I've had for a very long time. I don't use the word lightly…" "Please, can we do it all again!" "Absolutely fantastic! One weekend, two brilliant - but very different - GPs …" And that was just the artists! So, what happened to make the 2011 Summer Garden Party such a great event?
The afternoon kicked off in the garden with Fergy, one man with an acoustic guitar and a bucketful of gentle charisma. To me, he epitomises everything music should be about. “But I only know three chords,” he says, somewhat embarrassed. Yes indeed, Ferg, but you have that indefinable quality known as ‘soul’.
Howard Sinclair was up next. I must get some of the set lists as specific song titles elude me – I know The Beatles were in there, and a number of other hits as well as some of Howard’s own compositions such as Nine Tenths. Howard's a talented guy, and always worth listening to.
All of this in a beer garden at The Tunnel House, one of the most beautiful settings you could have for an outdoor performance. The beer was Potwalloper, locally produced over the border in Wiltshire – or if that didn’t tickle your fancy there were three or four other brews on draft. The garden itself was full at lunchtime, then emptied and filled again as the evening tide of local folk came for a pint.
For the final afternoon set, Rich Harding and Simon Rogers performed covers of Radiohead, Pink Floyd and others, as well as some Also Eden tracks, finishing with a rousing(ish – it was acoustic!) rendition of Fish’s The Company. “Thank you very much,” said Rich, “I’m now going for a lie-down.”
Of course, the fact Rich was even performing beggars belief – just a week before he was having yet another bone graft operation following his near-fatal motorbike accident last year. It wouldn’t be too far from the truth to say that a generation of Welsh medical students will qualify having used Rich as their worked example. Of this, more later.
Time for a barbecue, and a quick hat-tip to the staff and management at The Tunnel House for being so accommodating, helpful and friendly. Nothing was too much trouble. Meanwhile, in the barn behind closed doors, preparations were underway for the evening performances.
It’s difficult to describe the atmosphere of the barn. A great little venue, for anyone who knows Riffs’ bar, it’s like that only a bit wider with the bar at the back rather than down the side. From the punter’s perspective, on the night, with thirty-forty people stood up and dancing around, it was “critical mass” – any less and it could have felt sparse, but it was enough people to party.
Jo McCafferty had travelled all the way down from Aberdeen to perform. For anyone who doesn’t know her stuff, think a Scottish singer songwriter, Dido with an edge, singing of the joys and disappointments of life. A great singer, a genuine gem who has toured with Steve Hogarth and Midge Ure to name a few. It’s a good job Jo’s voice was so radiant, as the lighting was not – at least the way it was initially set up. A beautiful set, anyone who doesn’t have a couple of Jo’s CDs in their collection is missing out.
And so, to the Skyline Drifters. Five people who last played together seven years ago – Dave Woodward on guitar, Ade Holmes on drums, Tony Turrell on keys, Tony Makos bass, and, yes, Rich Harding on vocals. Seven years, one rehearsal the night before, and in a stone barn in the middle of the Cotswold countryside on 11 June 2011, five musicians blew the bloody roof off.
I can dig out the set list if anyone’s interested – but it was, in a word, ‘esoteric’. It kicked off with Robbie Williams, then mixed Pink Floyd with Queen, Iron Maiden with ELO, and yes, Marillion with Fish. Ade drummed like it was the last chance he was going to have, Tone’s hand was flying round the neck of his five-string bass, and Tony’s keyboard rig (and his playing!) would have put Asia to shame.
Two highlights stand out – Comfortably Numb, where Dave’s bandmates stood back in awe as he pulled off one of the best renditions of his namesake’s solos that has perhaps ever been heard. Even this was transcended by the sheer joy of Mr Blue Sky. And then the laughter at Tie Your Mother Down (Rich reading lyrics with a torch), the passion of 100 Nights… it was all there.
Finally, a double-bill encore of Hooks in You and Market Square Heroes, both of which had the crowd bouncing. Then the lights came up, the adrenalin drained and Rich had to almost be carried off stage. Rich, I take my hat, coat, shoes and socks off to you. It’s not just your talent – the range of material you can tackle, and the way you change your style to suit. Short of Steve Jobs, I’m not sure I can think of someone with more strength of will.
What a night. Snatching the best kind of victory from the jaws of the mundane. Things could have been so different – any one of the thousand tiny details could have tipped things off the edge (dare I mention staging? ☺) – but they didn’t. To be fair, we chose the right crowd – what a great bunch of friends, who really get what it means to party.
We’ll be meeting up – the planning team – in a few weeks to have a cold, hard assessment of the Summer Garden Party. The big question is why more people didn’t come. There’s no right or wrong – nobody should be expected to turn up just because of their musical affiliations, friendships or geography. But the fact is, despite our best efforts to inform people (I hope we kept one step away from cajoling), we were very lucky to have just enough numbers to make the event a success.
Right now, I don’t know if it will remain a one-off. If it does, I think I speak for everyone who came in saying it was a privilege to be among such fine company, such great, talented musicians, in such a great place. There’s an element of magic sometimes, when everything comes together and just works. If I have any sadness, it’s only for the people who I know would have got such a kick out of it too.
I have already thanked everyone – but I repeat my unerring gratitude to all that made it such a success – organisers, artists, participants, venue. I’ll leave the last word to the guy who was on the bar in the barn on Saturday night. When I went in to pick up the staging on the Sunday, he looked at me with a big grin, shook his head and said, “You guys know how to rock.” Yes, yes we do.
Of no interest to anyone whatsoever
2011-06-20
Apologies for this - but I've stumbled across the command that enables an SSH session to run on my SS4000-E storage server. With thanks to kevinsloan, the command is:
https://IP-address/ssh_controlF.cgi
From this I've discovered it is indeed a Falconstor IPStor disk server, which is apparently based on Debian Linux - according to this chap. I was hoping to be able to Wake-on-LAN it (out of sheer laziness - and to ensure network backups take place)... but it might even be possible to configure the box as an iTunes server.
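For anyone wanting to script this, the enable URL can be built from the device's address. A quick sketch, assuming the endpoint path from kevinsloan's tip works on your firmware - the example IP is hypothetical, and note that the box's self-signed certificate means a real HTTPS request would need verification relaxed:

```python
# Sketch: building the SSH-enable URL for the SS4000-E mentioned above.
# The /ssh_controlF.cgi path comes from the post; whether it works on a
# given firmware revision is untested here.

def ssh_enable_url(ip_address):
    """Return the CGI URL that toggles the SSH daemon on the box."""
    return f"https://{ip_address}/ssh_controlF.cgi"

# Example, with a hypothetical LAN address:
print(ssh_enable_url("192.168.1.50"))
```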
Update: instructions in German, here.
How the winter evenings will fly by.
The future of the publishing industry?
2011-06-30
Thanks first of all to the organisers of the IC Tomorrow "Meet the Innovators" session organised with the Independent Publishers' Guild at the BMA this morning. A quick-fire series of entrepreneurs had eight minutes to present their propositions, where possible reserving two minutes for Q&A, so there was no room for slack. Pitches were straight to the point – this is what we do, this is why, this is what we need, to an audience of representatives from smaller publishing houses and literary agencies, and a minority of observers such as myself.
While the offerings were diverse they all started from the same perspective - acknowledging the multi-device, digital, collaborative world we’re moving towards. Most showed facets of products and services that exist already – “Spotify for eBooks” said one tweet, “MMORPG for education” said another. One was built around sharing goals with peers; another, getting new authors to market and disintermediating traditional publishers. Innovation came from how capabilities proven in other domains were being applied to the world of the written word. The ability to search online literary databases from a Kindle for example, or to rent eBooks, or perform semantic analysis, or link QR codes to online resources, or to build communities of readers, each is innovative in its own way, even if it replicates capabilities already used elsewhere.
The fact each inspired feelings of recognition, rather than any Dragon’s Den-esque “I wish I’d thought of that,” is as much an indictment of the state of publishing as an illustration of what could be possible. Don't get me wrong, I was genuinely impressed by the efforts that have gone into bringing such capabilities closer to fruition - it will be very interesting to see which ones make the leap from concept to mass market business model, and I in no way underestimate the effort involved.
Equally however, there is an undeniable game of catch-up taking place. Publishing industry innovation in the broadest sense is hampered by the fact that the industry is only just getting going. Let's face it, it's only recently that the idea of digital book versions has really been pushed by publishers. While all the evidence suggests they are being widely accepted by readers, mass-market digital music has been around for 25 years or more, so we really shouldn't get too flushed with excitement about mass-market digital books.
This isn't the place to go into detail about why this is, but it's worth mentioning that eBooks are still a work in progress. Inherent flaws remain – the lack of an agreed eBook standard between Amazon and everybody else, for example, which is confusing readers and adding unnecessary cost into the content production process. This also means that valuable skills are being used inefficiently, in the publishing equivalent of re-keying, as books are (manually) reformatted to suit the different platforms. There's a way to go.
These publishing industry developments are necessary, in terms of both catching up with other parts of the creative and media industries, and arriving at a point where digital written content is seen by all publishers as the norm, rather than an adjunct to print. However, in the future the chances are we will come to see them as getting to the starting gates, providing the foundation upon which real innovation can take place. We started to see signs of this in one of the presentations, the final one as it happened, which came from an organization called Space Bar Interactive.
In the presentation, a digitized book was recognized as one element of an interaction between the content creator and the content consumer: while an important element, it was subordinate to the interaction as a whole. We can perhaps see a similar phenomenon with the likes of JK Rowling’s launch of an interactive web site, which brings together elements of the original (printed) books, the world portrayed by the films and additional content, all to create a more rounded – dare I use the word (JK does) – experience.
While this bodes well, the state of the industry begs the question: what would pre-Gutenberg storytellers, educators and journalists make of the abundance of tools and capabilities now available, if they were suddenly transported into the now? Difficult to say, but I believe they would see print as just one tool, probably inferior to their preferred approach of direct interaction. They might favour the podcast as a way of capturing a tale, for example, using print only as a way of archiving. Or perhaps those who preferred putting pen to paper might still do so, nonetheless profiting from the many different ways that ‘paper’ could now be transmitted to their readership.
All options for communicating a story were, are and will remain valid, but the difference is their relative position in the hierarchy of preferred publishing mechanisms we have currently. Print holds the number one slot in the hearts and minds of publishers, even if digital books are currently outselling print copies by two to one on Amazon. That’s not to say publishers are wrong to hang onto print: just as face to face interaction will never be outmoded, nor will the human desire to have a more tactile reading experience.
The only inaccuracy is the idea that one will supersede the other as king of the hill: print is currently there by nature of the fact that there was no other option, and digital will win for a while merely on the strength of pent-up demand, but in truth both models are equally valid. Just as today's pitches suggest, future winners will be those who make the right choices about clever combinations of content, formats and media choices based on the needs, desires and contexts of their audiences, and not on some arbitrary “we’ve always done it that way,” or “it’s the future, get with the program” meme.
The bottom line is that there is plenty of innovation to be had – but first we need to build a foundation of capabilities and an understanding of the valid part each can play in how writers and authors can engage with their audiences. As a final point, it didn’t go unnoticed on Twitter that most of the people in the room were using pads, that is, pen and paper, to take notes on what the presenters were saying. While this could be seen as a comment on how quickly (or otherwise) the publishing industry is adapting to new technologies, there is a wider point: that no single mechanism, new or old, will ever be suitable for all needs. If we are to be blessed with multiple choices, the skill will be making them work together, whether digital or paper-based.
July 2011
This section contains posts from July 2011.
A Million Tiny Gestures
2011-07-15
Like everyone else in the UK I would imagine, I have been thinking hard about the ongoing News International situation and its ramifications. What with the relationship between politicians, the media and society at large firmly in the spotlight and with personal stories intermingling with demographic shifts, there's a lot to get one's head around. Some of it is undoubtedly good, not least the unearthing of illegality wherever it should be buried. Also laid bare has been the inappropriate influence of a few powerful people on successive governments. We can only hope that the institutional cowardice of the past can be replaced by a bit more gumption on the part of our elected representatives.
I confess also to having felt a certain unease in just how simple it can be to broadcast views, however. This discomfort is entirely hypocritical: when the subject of Britain's forests came up, for example, I was one of the many who was able to express my deep concern through sites such as 38 Degrees by the mere click of a button. Campaign sites have become like a giant remote: couch potatoes can collectively email their MPs with the same ease as changing TV channels or voting for their favourite pop star. Twitter is even simpler: my suggestion (one of many) to boycott the paper required 140 characters and a carriage return.
While those in office may feel a bit miffed at the volume of messages they are starting to receive, both sides (and indeed, those who make it possible) miss the point if they see such communications as simply a bigger version of what has gone before. We can use terms like "campaign" or "petition" and see the quantity of collected identities (no signature required) as in some way comparable to a quantity gathered by letter writing, complaint calls or people in the street. The results are not comparable, however: of course you'll get a bigger response if you make the mechanisms easier.
In other words, however excited some may get about how they are changing democracy through the provision of yet another campaign site, they are not. Something bigger is afoot, which may be seen as the conglomeration of all such sites, together with social sites such as Facebook, and the offline interactions that they reflect.
This last piece is important. How easy it is to think that #NOTW was brought down by Twitter, for example, ignoring the fact that just as much conversation was taking place verbally and influenced through the gestures of individuals, both personally and professionally (think: stock prices). A million, ten million, a billion tiny gestures, some counteracting, others reinforcing, all add to the whole, just as they always have.
What's different is that we now have a series of mechanisms to capture such moments. Each offers a fragment of opinion, partially thought out based on an incomplete understanding of the facts. Less the wisdom of the crowds, then, and more the sentiment of the crowds, which needs to be viewed as a whole as much as individual voices, if not more.
While this may be an obvious conclusion, it's not currently the case. MPs write boiler-plated letters back to couch-based constituents, the cost of vellum, stamp and postal miles draining resources and missing the point. Meanwhile, social media sentiment analysis is a new field which will no doubt become an end in itself, also completely missing the point that it is measuring another set of measures, themselves based on incomplete modelling of what's happening "out there".
The question remains about whether, as Heisenberg might have predicted, the use of social tools or the act of measurement will change the behaviours they support. While this is perhaps inevitable in the short term, it is no recent phenomenon - uprisings such as those in Egypt will make use of the tools of the day, just as did pamphleteers like Thomas Paine in 1776, nigh on 200 years before the Internet existed. (Indeed, it would be an interesting study to see whether a correlation exists between generations of communication tools - printing, telephony, TV etc - and mass behaviour).
However, it's important not to judge the tools available today with older models - comparing response numbers like-for-like, for example. It is a moot question whether or not the underlying nature of democracy is changing. Rather, given that messages can be passed faster than they were in the past, government needs very quickly to move to a position where it can respond in a reasoned fashion to genuine sentiment, instead of becoming no more than a series of knee-jerk reactions to inadequately expressed opinions.
It's just a theory... gurus and mid-life crises
2011-07-19
I had an idea I wanted to test - that business texts and self-help books are written by people who are sufficiently compelled to do so. Here's the principle: we keep going with our humdrum lives until we reach a point we don't want to do it any more, for whatever reason. Some of us reach a kind of crisis point - which we emerge from, sometimes feeling all the better for it. An even smaller subset exits from this stage thinking, "Eureka! I've worked out the answer!" and feels sufficiently compelled to write a book about what they have learned. On occasion the book gets extremely popular and a new "guru" is born.
Now, I'm not going to say whether this is good or bad - but I thought I would test the idea. First I looked at the ages that people tend to hit mid-life crisis: this chart comes from a "2008 Gallup phone survey of 340,000 Americans" cited here:
As you can see, the "happiness slide" starts at about 34 and troughs at about 50. Now let's look at the ages of a few popular "gurus", and when they published what is generally seen as their seminal work:
Name | Born | Seminal work | Age |
---|---|---|---|
Richard Carlson | 1961 | 1994 | 33 |
Brian Tracy | 1944 | 1981 | 37 |
Peter Drucker | 1909 | 1946 | 37 |
Stephen Covey | 1932 | 1970 | 38 |
Dale Carnegie | 1888 | 1926 | 38 |
Mitch Albom | 1958 | 1997 | 39 |
John Gray | 1951 | 1992 | 41 |
Deepak Chopra | 1946 | 1987 | 41 |
M Scott Peck | 1936 | 1978 | 42 |
Susan Jeffers | 1945 | 1987 | 42 |
Charles Handy | 1932 | 1976 | 44 |
Robert Kiyosaki | 1947 | 1992 | 45 |
Michael Hammer | 1948 | 1993 | 45 |
Eckhart Tolle | 1948 | 1997 | 49 |
David Allen | 1945 | 2001 | 56 |
I didn't restrict this list in any way - if I thought of someone (or they were suggested to me), and I could find their date of birth, then they were in. There's a question over Richard Carlson (Don't Sweat The Small Stuff), rest his soul, starting so early - as he was in the psycho-analytical game anyway - but then so was M. Scott Peck (The Road Less Travelled).
I'm not saying that everyone has to fit - after all, it's just a theory; there's also the question of whether one needs 40-odd years' experience before anyone, including publishers, would take you seriously. But it certainly would be interesting to know the back-story on some of the authors.
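For what it's worth, the Age column is just publication year minus birth year (give or take a birthday). A quick sketch to reproduce it, and pull out the median, using the data from the table above:

```python
# Reproduce the Age column: age at seminal publication is simply
# publication year minus birth year (ignoring exact birthdays).
# Data as listed in the table above.

gurus = [
    ("Richard Carlson", 1961, 1994),
    ("Brian Tracy", 1944, 1981),
    ("Peter Drucker", 1909, 1946),
    ("Stephen Covey", 1932, 1970),
    ("Dale Carnegie", 1888, 1926),
    ("Mitch Albom", 1958, 1997),
    ("John Gray", 1951, 1992),
    ("Deepak Chopra", 1946, 1987),
    ("M Scott Peck", 1936, 1978),
    ("Susan Jeffers", 1945, 1987),
    ("Charles Handy", 1932, 1976),
    ("Robert Kiyosaki", 1947, 1992),
    ("Michael Hammer", 1948, 1993),
    ("Eckhart Tolle", 1948, 1997),
    ("David Allen", 1945, 2001),
]

ages = {name: published - born for name, born, published in gurus}

# Median age at publication, computed from the middle of the sorted list
median_age = sorted(ages.values())[len(ages) // 2]
print(median_age)
```

The median lands comfortably inside the 34-50 "happiness slide" from the Gallup chart, which is the point of the theory.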
August 2011
This section contains posts from August 2011.
Reminder to self: this is the information revolution
2011-08-22
We all love bandying around terms like 'revolution'. It makes us feel relevant and part of something worthwhile. The danger, of course, is that we use them so often that they lose their meaning.
Consider 'information revolution', for example. Yeah, yeah, heard it before, seen the video, got the powerpoint. Here's a prediction however: when the dust's settled in a few decades' time, when our realities are augmented and infrastructures virtualised, in other words, when all those things we talk about so much have become the paving slabs and tarmac of the silicon highway, we'll be able to see that the revolution was about information. Isn't that bleeding obvious? Well, no, not really, on two counts.
First, it is just one of many statements each vying for success. I've sat through plenty of presentations that say, for example, we have moved from mainframe to client-server to mobile, or indeed, from the information age to the collaboration age, or whatever. Marketers like to see revolutions around every corner, for fear that their own products are in some way dull and in the knowledge that the competition are doing the same. Each time, cynics and those who have been round the block more than once like to point out that they've seen it before, that it's nothing new. In general they are right - we've seen it with social networking ("Started in the Seventies"), cloud computing ("Isn't that just a mainframe?") and so on.
Second, the technology industry is stuffed full of geeky blokes who, to fall back onto unfair stereotypes, tend to prioritise tools over what the tools can do. Technology is a Toad Hall with a surfeit of toads, who absolutely have to have the next big thing even if they never use it to its full extent. We're all guilty, or at least most of us are - even when we agree that the computer we had 15 years ago was good enough for most purposes. The end result is that, every time a new wave of tech hits, we spend the next couple of years re-learning all the things we should have known already - security, management, you name it.
Against this background, we are guilty of ignoring the very real revolution that has been taking place since the Second World War and the subsequent invention of the transistor. It was always about information, and it will continue to be. From business analytics to Youtube, from mobile point-of-sale to smart grids, our ability to collect, store, process and access vast quantities of information continues to drive us relentlessly into the future.
And so, to the cautionary note. However we think about technology today, whatever urges us to buy the sexy new gadget or transformative software package, as an industry we have a responsibility to transcend our desires to deliver new and improved technologies, and recognise our role in catalysing the revolution that is taking place in front of our eyes, for better or worse. We have seen some great things, and some not so great, happen as a result of our new capabilities. We all have a role to strive for the former, whilst protecting against the latter.
September 2011
This section contains posts from September 2011.
Helping a friend: software delivery check list
2011-09-02
"Because I can," I've been helping out a mate who's outsourcing some development work. I looked on t' Web for a simple delivery checklist but all I found was DoD standard documents, which I thought were probably a bit over the top! Based on past experience, this is what I came up with - what did I miss? I'll update with any feedback, gratefully received.
1. Scope of delivery - describing the changes made, or referencing a description elsewhere
2. Content of delivery - describing the modules that make up the delivery and any supporting information (test scripts, documents)
3. Checklist items:
- Test scripts updated - this should be done as part of the development, either by the developer or a third party
- User documentation updated - also part of the development
- Comments included in code and headers - to indicate date and scope of changes
- Unit tests passed - if there are any units to be tested, of course - testing at the debugger level
- Functional tests passed - to confirm new functionality is working
- Regression tests passed - to ensure that existing functionality is still working
- User acceptance tests passed - to ensure the package loads, runs and works acceptably
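As a sketch, the checklist lends itself to being captured as data with a simple completeness check. Item names here are paraphrased from the list above, and the function name is my own invention:

```python
# A minimal sketch of the delivery checklist as data, plus a check
# for what remains outstanding on a given delivery. Illustrative
# only - in practice this would live in your tracking tool.

CHECKLIST = [
    "Test scripts updated",
    "User documentation updated",
    "Comments included in code and headers",
    "Unit tests passed",
    "Functional tests passed",
    "Regression tests passed",
    "User acceptance tests passed",
]

def outstanding(done):
    """Return the checklist items not yet completed, in checklist order."""
    done_set = set(done)
    return [item for item in CHECKLIST if item not in done_set]

# Example: a delivery that has only been unit- and function-tested so far
print(outstanding(["Unit tests passed", "Functional tests passed"]))
```

A delivery is only complete when `outstanding(...)` comes back empty.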
To think I was once a software configuration manager - shocking isn't it? :)
Talking crap is not a crime
2011-09-11
When I was a child, I remember a television series called Poldark. I say "remember" but I don't recall all that much about it - some period costumes, sombre lighting, a few ships on Cornish waves, and the occasional bit of dialogue is about it.
One bit that stuck in my mind for the past thirty-five years is a court scene. In it, a gentlemanly type (I assume Poldark?) is being accused of having shouted, "Pickings for all!" on the tragic occasion of a ship being wrecked on a nearby shore. The witness is a heavily-accented local man: I think I can remember sideburns and a waistcoat, though I could have added them later.
Quite clearly, the courtroom is baying for Poldark's blood.
"Did you make the cry of 'Pickings for all!'?" asks the judge (I can't remember what he looked like).
"No sir, I did," says the man. The court case crumbles; Poldark is released without charge and the credits roll. Perhaps the villager gets it later, but I don't think so.
Humans are no strangers to talking crap. To shouting out when quiet would have been better; to making false accusations, dubious suggestions and unnecessary outbursts. No doubt, we have been doing all these things since the beginning of time. "Get him!" "String her up!" "I'm going to have you!" "Let's go kick their heads in!" and so on - whether or not there was any intent to actually get, have, kick or string. We're like that - particularly blokes I think, but maybe that's due to my lack of experience.
Enter the internet, and however bad our spelling might be, stuff we might have said out loud is now being recorded, broadcast, archived for playback at any point in the future. On Facebook I see usually-gentle people saying that they'd give so-and-so a good slap. Or that they believe hanging is too good for someone. Or whatever. Do they mean such things? Perhaps - at the time. Do they seriously expect them to be acted upon? Of course not - and indeed, the fact that sometimes people feel their views are not being heard may itself lead to more vocal, and indeed more dramatic expressions of such views.
People talk crap online just as offline - and other people are watching and listening. So we end up with the case of AA, who 'threatened' to blow Robin Hood airport "sky high". Did he mean it? No, of course he bloody didn't. Was anybody else going to say, "Oh, good idea"? Of course not. Did his sentence send out a deterrent? It's difficult to see about what, unless it's to deter people from speaking their minds.
The case of the two rather disappointing "rioters" in Northwich is more complex. Let's "Smash d[o]wn Northwich Town," they proposed - but nobody went, other than the police. Were they jumping on the bandwagon? Most likely. Did they succeed in increasing the violence or anything else in any way? It doesn't seem so, not in their areas.
There's a straightforward scenario, which starts with someone saying, "Let's do a bad and illegal thing." Bad and illegal thing is done, people get arrested and punished accordingly. Indeed, the fact that Jordan and Perry turned up will not have helped the case for the defence. But just how much of the sentence was against the act, and how much was it to do with the hopelessly bungled online post?
Don't get me wrong, I'm all for criminals being shown the error of their ways, whether they are rich or poor, in positions of authority or on the streets. We all have choices, whatever our circumstances, and we should face up to the consequences. And incitement to crime - where crime clearly takes place as a result, or where its continuation (in the case of hate crime, for example) is encouraged - that's just plain wrong.
On this day of all days however, let's recognise that not all remarks, whatever the words are, should be interpreted as such an incitement. Yes let's have a robust legal system, and give our courts the tools they need to separate right from wrong. Let's recognise our online responsibilities, understand that cyber-bullying and insults to the non-PLU have no place in a civilised society. But let's not create a world where people can no longer make an online remark for fear of who might come knocking on their door, however stupid it might be.
Herding the article cats - a five-minute workflow
2011-09-23
A bit of pre-amble. My simple, but surely not exclusive, even if slightly narcissistic need was for a place where people could come to read all my different articles, from different sources, in one place. Given the less narcissistic option of presenting multiple articles from different authors, surely I wasn’t alone in thinking of this?
The secondary criteria were ease of use – in terms of getting from new-article-on-web to new-article-in-index – the ability to broadcast from a single place about the existence of said, and a wouldn’t-it-be-nice-if I could measure who was reading what. Oh, and finally, zero cost other than my time, and no geekery beyond clicking buttons.
I started thinking about some kind of online index – there must be hundreds of articles out there, so an archiving tool which could then link out to the original sources? Great theory – but outside library management software (maybe a bit OTT) no dice.
After a long trawl around content curation, collation and presentation, a diversion into link sharing (Digg, StumbleUpon etc) and a quick trip through newsreaders and RSS managers, I finally settled on bookmarks and tags. Simply put, Delicious to bookmark articles and tag them appropriately – with “interorbis” say. I could then access the complete list whenever I wanted.
That solved building a list and ease of use, but it wasn’t so hot on presentation. Undeterred, I thought about a different route – to hand-craft a thumbnail of each article and host it on my own blog. Trouble is, WordPress isn’t really cut out for few-frills blogging, and the off-the-shelf display themes weren’t ideal.
I went back to the link sharing route. Remembering one option was to set up a separate Twitter account and use it to pump out articles (rejected for display reasons), my travels took me to two services that bridge the gap between microblogging and full-fat blogging, namely Tumblr and Posterous.
Posterous initially looked like the preferred option – better sharing capability – but I settled for Tumblr because of the more established database of themes available. It is, apparently, possible to use a Tumblr theme in Posterous, but that was getting all too complicated.
In my travels I also stubbed my toe on Ping.fm, a hub for collating and broadcasting messages to social networks. Add that to Bit.ly for link shortening and (more importantly in this instance) click-through measurement, and I had all the tools I needed.
So, I now have a solution. Create blogs and articles wherever they need to be; then, at the same time they are published, copy a short section and use it with Tumblr’s share-link capability. Pipe the link through bit.ly first, to get that sharing measurement goodness, and then broadcast the final article via Ping.fm to Facebook, Twitter and LinkedIn.
End to end, the whole thing takes less than 5 minutes. Not perfect as it still requires some manual intervention, but not completely wrong either. And I’ve got a few of my photos in as well :-)
Twitter link scams grow in complexity?
2011-09-26
It's always gratifying to have someone reply to your tweets. But not, however, if they don't actually exist - as I started to find out a few weeks ago. To wit: a response I received to a Twitter comment - here it is - was repeated four days later. Another Twitter response has been repeated twice by different Twitter 'users'.
Now, call me a cynical old hack (it won't be the first time) but I sensed something fishy going on. When I looked at the accounts in question however, they appeared kosher - at least at the outset. Genuine-ish sounding names, genuine-looking photos and profiles, tweets that looked human, and only a small number of shortened URLs.
Looking more closely however, things became more fuzzy. That @richardarguinem chap - why is the photo of a woman with a baby? And why was @jesusbig4, a Miami resident, tweeting about George Osborne's benefits claims?
On inspection it quickly became apparent that these were not real people, but rather, Twitterbots that were taking other people's tweets and adding them to their own streams of automated consciousness. And here's the 'clever' bit: shortened URLs were included only every now and then, but when there, they linked to shopping sites.
So, is this just another example of link bait using the latest social tool? Yes, in part. The links I tried (using a sandbox virtual machine) went through to game sites or lifestyle questionnaires, both of which presumably have some kind of affiliate relationship. In other words, nothing particularly illegal, though potentially lucrative.
Is it really that bad, apart from clogging up the twittersphere? Again, yes, in part. A number of risks arise from what might be just an initial foray into even more complex Twitter scams. It would be easy, for example, to tap into popular hashtags or even read users' own bios and tweets, and send people in related directions - to download the referenced film, for example, or respond to an associated survey.
Equally, just because linked sites aren't dodgy now, there's nothing to stop them being so in the future. I wouldn't put it past a URL scammer to link to a scareware site -"Your computer has been infected, please download the patch" etcetera.
What to do, apart from being vigilant and keeping genuine 'shortened URL' protections in place? Most of all, each one of us has a role in reporting such twitterbots - remember that they rely on looking similar to the real thing, so if one is left unreported for a few weeks, it becomes harder to spot.
No less important is education - for example by broadcasting names of Twitterbots and telling people what they are up to. It's a shame nobody has come up with an automated tool which enables twitterbot followers (there are some, it's easily done) to be informed about their error, so they too can do something about it.
The bottom line is that hackers follow the money, so where there's opportunity, it'll be taken. Awareness is perhaps the best weapon we have.
Postscript: I happened to leave a Twitter search on the term "touchpad" going as a Tweetdeck column. It's pretty obvious where the linkbait is happening - and what Twitter could do about it - not least, looking for multiple RTs on tweets more than ten days old. Just sayin'...
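That stale-retweet heuristic is simple enough to sketch. Everything below is illustrative - the record fields and the ten-day/two-strike thresholds are my assumptions, not anything Twitter actually exposes or uses:

```python
from datetime import datetime, timedelta

# Hypothetical tweet records - in practice these would come from a
# Twitter search; the field names here are illustrative only.
tweets = [
    {"user": "jesusbig4", "is_rt": True, "original_date": datetime(2011, 9, 1)},
    {"user": "jesusbig4", "is_rt": True, "original_date": datetime(2011, 9, 2)},
    {"user": "richardarguinem", "is_rt": True, "original_date": datetime(2011, 9, 24)},
]

def flag_suspect_accounts(tweets, now, stale_days=10, threshold=2):
    """Flag accounts that repeatedly retweet material older than stale_days."""
    counts = {}
    for t in tweets:
        if t["is_rt"] and now - t["original_date"] > timedelta(days=stale_days):
            counts[t["user"]] = counts.get(t["user"], 0) + 1
    return [user for user, n in counts.items() if n >= threshold]

print(flag_suspect_accounts(tweets, now=datetime(2011, 9, 26)))
# prints: ['jesusbig4']
```

A real filter would of course need tuning - plenty of humans retweet old material too - but as a first-pass signal it would catch exactly the behaviour described above.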
Social blood donation
2011-09-26
Quick take from the bed:
The blood transfusion service started using social tools only 6 weeks ago.
Popular with students apparently.
Hashtag but not displayed publicly
Lots of potential!
More soon, now back to clenching fist...
I, Technology: Can you turn your back on the Web?
2011-09-30
You’ve had enough. Facebook, Twitter, targeted ads, unsolicited email, the uneasy, nagging feeling that your computer is doing things beyond your control, talking to other systems without your knowledge, the automatic downloads, the quiet information exchanges – is that your password information crossing the ether, in clear text? Or your credit card number? Or your children’s personal details?
Perhaps it’s more than a feeling. An incident. Something happens out of the blue – a case of mistaken identity, your access has been blocked, you have received one of those, “We’re sorry to inform you, but you could have been among a minority of our customers who…” letters. Or perhaps you’ve had a minor epiphany, a genuine, joyous moment of truth when you realise real life has so much more to offer beyond the screen.
Whatever. You’ve made the decision – you just want to leave the Web. Maybe you’ll come back – you don’t know. But for now, you’d like to take your information and go. Just how viable is it, however? Can you really just pack up your virtual possessions and walk away? While this would never be a straightforward question given just how complex the Internet has become, it should at least have an answer.
Let’s give it a go. Simplistically, yes, there is nothing stopping you turning off your computer, unplugging the broadband from the wall and dropping any smart phones into a bucket of water. It might be wise to inform friends, family and work colleagues first – for example setting up an out-of-office email that says, “I am no longer here. If you want to find me, you’ll have to go outside and look.” Or posting a message on Facebook to the same effect.
There – that was easy – you’re done. Assuming that everyone you want to speak to is able to speak to you otherwise. Which means you probably don’t have a job, or indeed live in a place where others live, or have any friends at all. Perhaps it’s just you and your dog on a remote Scottish isle, in which case you are quids in.
Apart from the trace that is, the virtual vapour trails of data you will have left, snapshotted, sliced, diced and stored by every service provider you ever came into contact with. The trails are yours – the principle that personally identifiable data is protected, is enshrined into the UK Data Protection Act and many other national and international laws.
So, yes, you can in effect request the removal of such information. It’s simple. Just go to every Web site you have ever registered with and request that your information be removed. How hard can it be – you did keep a list of them all, didn’t you? It’s a good job you can trust all those big companies to respond to the request.
Ah. What was that? There could be complications. It seems that some companies haven’t been playing ball – Facebook, Microsoft and even do-no-evil Google have all been called out as needing to comply with a proposed right – not a ‘possibility’ – of citizens to say whether their personal information can be stored. And they don’t like it.
Giving both legislators and online providers the benefit of the doubt, let’s say you can erase the online identities you have created across all such companies. As well as e-commerce suppliers, forums, email list aggregators and so on (at least for the latter you can terminate your email accounts, leaving any messages sent to the now-defunct addresses to be mere ghosts in the global machine, passing across the network with no hope of reaching their destination).
Are you done? If only. You go to a party. Someone – a child, say – takes your photo and uploads it to their poorly configured, insecure Facebook account. Somebody else tweets your name; another photo is uploaded, with a timestamp and a GPS location this time. Before you know it, as far as the great data crunchers in the sky are concerned, you might as well be standing in Trafalgar Square with a giant name badge and a megaphone.
Meanwhile, even if you were satisfied with your new status as social pariah (who was it that said, “Saying ‘I’m not on Facebook’ is the new ‘I don’t own a television’”?) you might find it increasingly difficult to get to the services you need. Cash-strapped governments see online services as a money-saver: while the majority of us simply hope they are right, others may feel the squeeze in terms of information access, responsiveness and indeed, cost.
In other words, while the Web can be a wonderful place, an electronic playground full of shiny toys, an endless source of information, a collective environment such as the world has never seen, you’re in it. Chances are, if you walk away from it, you’ll be back before long: to turn your back on the Web is tantamount to turning your back on society, which may not be precisely what anyone had in mind at the start of this journey. Where it will yet lead is anyone’s guess.
[First published on ZDNet]
October 2011
This section contains posts from October 2011.
Brief thoughts on Steve Jobs
2011-10-06
I’m not sure what happened to my copy of Bob Cringely’s Accidental Empires, so I’ll have to paraphrase what he said about those upstarts who suddenly found themselves at the centre of the computer revolution. There was image-conscious Larry Ellison, the fantastically bright arch-nerd Bill Gates and his business brain Steve Ballmer. After a few chapters talking about the history of computing, their personalities and success stories, Bob turned his attention to the co-founder of Apple Computer. “Steve Jobs is crazy,” he wrote. Crazy, because he wasn’t like the others, following the money, fame or power. He was following an ideal.
As one of Steve’s earliest employees, it’s fair to say that he probably knew Steve Jobs as well as anyone, so I’ll leave the talking – and insight – to him. I am left with one thought however – that he lived up to the words of Mahatma Gandhi as closely as humanly possible: “Be the change you want to see in the world.” While he was in the right place at the beginning, success was never guaranteed; it would have been too easy to become one of technology’s also-rans, invest in a few films and a museum, and watch the world from the veranda.
Steve was the first of his generation to check out, but by the time he did, the empire he created was no accident. My thoughts are not of sadness, but inspiration.
I, Technology: 7 reasons why schools need 3D printers
2011-10-12
It is no great disclosure to say I am a governor of a local secondary school, but perhaps I should be honest about my own love of new technology, electronics and gadgetry of all kinds. As a post-modern, dyed-in-the-wool geek, I do sometimes - okay, often - have a habit of getting excited about such shiny things, looking for uses where perhaps none exist.
In the case of 3D printing however, I'm pretty sure the engineers are on to something. I first saw such a printer in action at the Renault Formula 1 team's headquarters, where it was used to print out prototypes of aerodynamic parts, which could then be tested in the wind tunnel. This particular printer was the flat-bed type, using laser interference to heat up specific points in a bed of resin. Drain away the remaining fluid and the part would emerge, almost majestically.
Other models of 3D printer exist, notably ones that create objects through extrusion of quick-setting materials, which can then be built upon layer by layer. The result isn't necessarily an object of beauty - there's only so much you can achieve by squeezing epoxy out of a tube. But both the capability, and the pieces that emerge, open new doors to down to earth, home-brew innovation.
Consider, for example, the RepRap open source 3D printer project that can actually create printer parts. Such a device does require some components that cannot be printed - steel bars, circuit boards and so on - so using the term "self-replicating" is a bit of a stretch. However the majority of pieces can be printed, as can spares, should anything break.
While the range of applications is pretty broad, or indeed, perhaps because of this, one potential "market" for 3D printers is education. Let's consider some reasons why:
- Low cost of entry. Equipment to be used in schools on design courses takes an inevitable hammering, and therefore needs to be robust - a quality which comes at a cost. Education is facing a financial squeeze as much as any other sector, so 3D printers offer an option for providing lower-cost equipment which can be repaired locally without (expensive) third party intervention.
- Basis for creativity and innovation. While design equipment can be criticised for taking away the need to develop hands-on skills, it also offers the opportunity to develop more complex designs using parts that can be built on.
- Wider world online interaction. Open source 3D printing communities exist which offer, share and advise on design templates - these can be downloaded and used, or indeed uploaded for comment. This enables a shared forum where students can benefit from the knowledge of both peers and more experienced designers.
- Outreach to other schools. Changes to UK educational policy (for example the academy programme) have put schools more in charge of their own destinies, and provided more authority and responsibility. At the same time, this has encouraged more interaction between schools, both at a peer level and between secondary and primary tiers. 3D printing offers an opportunity to both share and enable, for example "printing printers" and taking them into primary schools.
- Business acumen. 3D printers offer an opportunity to produce items that have real-world uses, on more than a one-off basis. The opportunity to create simple toys or ornaments which can potentially be marketed and sold, albeit with the cost of raw materials, offers insight into manufacturing processes, market understanding and business development.
- Genuine real-world skills. 3D printing may be in the domain of the early adopter right now, but continued advances suggest that it will become more of a mainstream capability. For example, in several years' time 3D printers may be more common in homes and businesses. If so, knowledge of how to create and print appropriate designs will become useful.
- Direct use for schools. There is no reason why parts need to be limited to students' design projects. A 3D printer could be used to create items which are of real use to the school - for simple examples consider coat hooks, doorknobs and other fittings.
While the future of 3D printing is unclear, already uses are emerging that were not even considered by the original creators. For example the epoxy resin can be replaced by chocolate or sugar paste to make confectionery; printed parts can be milled to add more complex features; no doubt we will see printers with new features and techniques and using new materials, allowing for new types of design.
Maybe we will never be able to print a flat screen TV, or a car, or a house, but right now the low cost of entry opens up a wealth of possibilities. It would be folly not to investigate further - indeed, it's what education is all about.
[This article first appeared on ZDNet]
My Boris Bike questions
2011-10-13
There's an event next week (Tuesday 18) where you can give feedback to the managers of the London Barclays Bike scheme. I can't make it but here are the questions I'm emailing in.
Sorry, can't make your event! Great service - when it's good it's very good, the problem is around "edge cases" - so questions:
* Is there any way of creating more drop-off points based on your knowledge of load? And also, is there any way for a user to easily log that they tried to drop off a bike at a certain point, but couldn't?
* Could we have a GPS enabled smartphone app which shows (and reserves) the nearest available bike, for 10 minutes say?
* Could we have a PDF map please - or make it easier to find! (I couldn't find it online)
Kind regards, Jon
Tech.Maven: Stand by while we augment your reality
2011-10-17
Reality can be dull, so who wouldn't want to spice it up a bit? The phrase "augmented reality" (AR) distils all that could be hoped from technology - it might as well be called, "reality, just that little bit better," like a technological pill to brighten the mundane. But does AR sufficiently merit its name? While it is still very much in the experimental stages, we are starting to see some real applications.
To be specific, AR involves capturing a video stream of a physical object or scene, and then adding information to the stream in real-time, displaying it on a suitable screen. Right now the whole package can be achieved with a smart phone with a camera and internet access - the video is captured, uploaded and recognised, then additional information is downloaded to add to what is seen on the display.
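The capture-recognise-overlay loop just described can be sketched in a few lines. Everything here is an illustrative stand-in - the marker database and the recognise() stub take the place of a real camera feed and image-recognition service, and none of the names refer to an actual AR library:

```python
# Minimal sketch of the AR pipeline described above: capture a frame,
# recognise what is in it, attach extra information for display.

MARKER_DATABASE = {"qr:menu": "Today's specials", "qr:poster": "Trailer link"}

def recognise(frame):
    # Stand-in for image recognition: return a marker id, or None.
    return frame.get("marker")

def augment(frame):
    # Attach any recognised overlay information to the frame.
    return {**frame, "overlay": MARKER_DATABASE.get(recognise(frame))}

frame = {"pixels": "...", "marker": "qr:menu"}
print(augment(frame)["overlay"])  # prints: Today's specials
```

The point of the sketch is the shape of the loop, not the recognition itself: the hard part in a real system is recognise(), which is why symbol-sensitive AR (a fixed, pre-defined marker) is so much easier than the object- and location-sensitive varieties below.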
Perhaps AR is currently more, then, about "Augmented Imagery" - there's not (yet) any augmentation of other sense-related information, say aural or tactile signalling. This isn't a knock - more an indicator that we need to keep things in perspective when we consider examples of AR at work today. Essentially these fall into three groups: symbol-sensitive, location-sensitive and object-sensitive.
While symbol-sensitive AR is the simplest to implement, it is perhaps the most fun. The 'symbol' needs to be a pre-defined, fixed image that can be recognised by a program installed on a smart device. The latter can then add information in real time - for example adding a 3D avatar, or replacing somebody's head with a cartoon image.
This model is not dissimilar to using QR codes - squares of pixels which can be photographed and interpreted to link to online information. Indeed, examples exist of using QR codes as the basis for symbol-sensitive AR. Both have also been used with some success at conferences and events, so it will be no surprise to see these applications growing.
Object-sensitive AR takes things one step further, in that image recognition software can identify specific objects and then construct a virtual world around them. Examples include Metaio's digital lego box and apps to show how, say, to remove toner cartridges or other products from their packaging.
Finally we have location-sensitive AR, which captures the entire surroundings and adds information prior to displaying both. Google Sky Map is a simple, effective example of what can be done; other obvious applications are for travellers and direction finding, picking up specific street features and identifying the nearest pizza outlet, say. In these cases the video feed is supported by GPS information - so the software doesn't have to work out which street one is on from scratch!
All of these models are being tested out in various ways by vendors, with AR capabilities finding their way into games, creating "a range of new possibilities" (in vendor-speak) such as incorporating a camera into a model helicopter and turning it into a virtual gunship. While such examples are indicative of what's possible, AR has yet to find its killer app and has not, therefore, yet seen mainstream adoption.
Even so, it continues to develop. Gravity-related features or face recognition (as recently incorporated in Google Android's Ice Cream Sandwich) are being used now; meanwhile, sites like Augmented Planet are speculating about integration of near-field communications and heads-up displays to enable more deeply immersive experiences.
Of course it is typical of the technology industry to try technology combinations and see what sticks - and AR is no exception. This is not necessarily a bad thing but neither should it distract from research into delivering AR-enabled capabilities that are of genuine, compelling use - for example educational applications or other places where 'simple reality' is not sufficient. It may be that there is no killer app at all - rather, like location-based services (touted as the next big thing eight years ago), AR features will simply find their place and gain adoption as part of other applications and services.
As AR finds its way more into the mainstream, the chances are some of the downsides will emerge - equally, then, it would be worth thinking about these up front. For example there are very clear privacy questions around using facial recognition in conjunction with AR. Right now we can sit on public transport with relative anonymity - but this could easily change if, say, our image could be mapped onto Google images. There's also scope for social engineering, as AR equivalents of familiar scams take hold.
Augmented reality may have plenty to offer - particularly as it starts to integrate other forms of information and as new applications are brought to the fore. We may yet reach a point where we drop the A and it simply becomes 'reality', at which point the bar of what augmentation means has to rise. Meanwhile, as a cross-over capability that can enhance applications and services across the board, it is certainly one to watch.
Tech.Maven: Are companies supposed to be social?
2011-10-17
Wouldn’t everyone love to work for a cool company? You know the sort of thing – pizza and beer in the fridge, bright colours, bean bags and everyone generally thrilled to be there. Indeed, exactly the sort of company that would welcome all those zany, fun, exciting social-networking-in-the-enterprise tools.
While such a working environment might sound exciting to some, it doesn’t necessarily have the mass appeal that the purveyors of such tools, and indeed the pundits and commentators who embrace them, might believe. “I’m an electronic engineer,” said one person to me when I asked about his company’s use of Yammer. “What time do I have for filling in my profile or joining in some chat?”
According to those who buy into the concept of “Enterprise 2.0” at a more evangelical level, he must be wrong – or at least he’ll need to get with the program sooner or later. There exist “sea changes in the world right now in terms of the way we are globally transforming the way we live and work,” argues Dion Hinchcliffe, one of the “Enterprise Irregulars” – a group whose numbers include consultants, analysts and vendors in the social media space.
The sea changes and global transformations have but one goal, namely to take us to a place where business is done very differently from the past, enabled by a stratum of collaborative technology. Without a doubt, plenty of room for improvement exists in today’s sometimes monolithic, other times fragmented but always sub-optimal organisations. But – to ask the million-dollar question – what if that isn’t the case? What if, shock horror, business in fifty, a hundred years’ time will look quite similar to how it looks today?
Whatever the rhetoric, it’s clear that social media hasn’t yet made an enormous dent on the enterprise. Part of the reason is down to the lack of obvious impact on real business outcomes, beyond general notions of knowledge sharing and engagement, argues Rob Preston at Information Week. “The movement's evangelists employ the kumbaya language of community engagement rather than the more precise language of increasing sales, slashing costs, and reducing customer complaints,” he says.
Perhaps the issue is in how the argument is being framed in the first place, once again (and how many times have we seen this) taking a new technology and seeing it not as a response to a specific need, but as the answer, the salve to solve all ills. Organisations are inherently bad, goes the thinking, and now – praise be! – we have the opportunity to change them for ever. Even the term “mainstream acceptance” implies some kind of waiting game, like it’s only a matter of time before even the most cynical succumb to the charms of the social.
It seems highly likely that we shall see social media tools go the same way as all other game-changing, world-beating technologies. The ability to store information in a relational database was once the most fantastic innovation – and perhaps, when we look back on all this in a few decades, we’ll find it still is. But apart from in a few sad corners of the Internet, you don’t tend to find people wanting to be called “database evangelists” anymore.
Technology commoditises, becomes part of the infrastructure, lowers in price as the attention (and the money) moves to the next big thing. In general few technologies ever die, they simply find their niche – that is, uses are found for them that deliver specific value. In social technology’s case, for example, getting a quick expert answer to a customer query, or as Rob Preston mentions, finding a spare part.
To add insult to injury, however, some innovations are never destined to exist in their own right. The humiliating phrase “Is that a product or a feature?” could well be applied to social networking capabilities, which are becoming more and more integrated into platforms such as Salesforce.com for customer management and Broadvision for content sharing. Enterprise social networking may well turn out to be one of those things that has succeeded when everyone stops talking about it.
As a final point of course, today’s youth is growing up believing that such tools just exist, there to be used just like the telephone or the video camera. It may well be the case that such technology-savvy youngsters arrive with different expectations of how business could be done. But given the dual pressures of corporate inertia and the need to maintain governance, the kids aren’t going to have things all their own way.
So, yes, it would be great to work in a fun office, to be creative, to share information and change the world. In reality however, no organisation is going to implement a technology simply because it fits with an idea of what business might be like, at some point in the distant future. Even if there is beer involved.
[Originally published on Tech.Maven]
I, Technology: Older generations need innovation, not just internet
2011-10-17
Today saw the publication of the Times' "50 ways to improve old age" - the link is behind a paywall, but in summary, it did indeed contain 50 snappy ideas of how older generations could be treated with more dignity, helped across the road and so on. From the paper version I picked up in a cafe, I drew the conclusion that it was not based on any particularly exhaustive study or experience, and so should not, perhaps, be subjected to too much scrutiny or comment.
However I was, of course, interested to see just how much technology was seen to play a part. Of the 50 ideas, just four were either directly, or indirectly related - in order:
- 1 - Get the online literacy rate up to 100%
- 12 - Have an "Ooh, was he in…" app
- 44 - Embrace robots
- 49 - Reward inventions and innovations that make life easier for the elderly
My first reaction, that this was a bit of an odd mix, was quickly replaced by an oh-here-we-go feeling about getting older people online. It has to be a good thing, doesn't it? In all honesty, I'm not so sure - or at least, I don't believe that it is possible to distil the question down into a simple, binary decision that being online is better than being offline.
Let me be clear. Some sterling efforts are taking place across the UK and beyond, such as the Race Online campaign, whose organisers should be applauded for their efforts. To not be online can very often be seen as a disadvantage for a raft of reasons - access to services and information, communication, interaction - all very good things.
However getting online should not be seen as an end in itself. I've been helping a few older residents in my village with their computers, particularly when they have problems with their connectivity, and what's pretty clear is that the data-pipe on the wall can be the source of as many problems as it solves. Some people I know have refused to have anything to do with it, as they tried and failed to make it work for them; others access the internet only rarely, or when they are prompted.
The fact is that many of the currently available mechanisms for Internet access were designed by geeks, for geeks. Workarounds exist for many problems, and people with experience know how to avoid issues such as picking up computer viruses and spam. Things are changing all the time, and a minority of the population takes pleasure in keeping up, adopting new things and trying them out, coping with accidental data loss and remembering to do backups along the way. A majority don't, however, and the proportion of people who actually enjoy tweaking, fixing and generally coping, drops with age.
So, while the principle that online access is a good thing is sound, the way that it is currently delivered is not. Even tablets, smartphones and their respective apps - which could be seen to hold some of the answers here - have largely been designed to satisfy the needs of the younger demographic. Ideas here are welcome - though, one would hope, with wider applicability than item 12 on the list, an app to look up old actors and find out what happened to them.
What we have is as much an opportunity as it is a challenge: genuinely easy-to-use, low-cost technologies, designed from the ground up to deliver on the specific needs of older people - needs which are often much the same as those of younger people - bus and train timetables, communication with local communities and distant relatives, keeping up to date, simple access to services and an understanding of available benefits.
All this without needing to sit for an hour with the help desk of the broadband provider being told to switch the router off and on again, or having to get the daughter-in-law round to find out why the printer has stopped working even though it worked yesterday. If, as the information generation, we owe older people anything it is that all the clever stuff we have come up with really should just work, at least as well as the capabilities it reputedly replaces.
Scraping in at number 49 on the list are, "Inventions and innovations that make life easier for the elderly." A catch-all category which, in all likelihood, deserves a "50 ways" list of its own. To suggest that the most important thing is getting people online not only misses the point, it fails to face up to the real challenge of making technology simpler to use for all of us.
Cloud Society: Will the consumer cloud finally have its day?
2011-10-17
Latest blog - http://ping.fm/gP1mY
Tech.Maven: Closing the gap between business and IT
2011-10-19
Productivity is one of those things that is easy to define, yet almost impossible to determine. In mathematical terms it concerns a ratio between the quantity of output and the amount of effort to deliver it. As both can be defined in financial terms - an output has a financial value, and effort comes at a cost - it is straightforward to calculate a productivity factor. If it remains above 1, then things are looking good.
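Taken literally, the ratio described above can be sketched in a few lines. This is purely a hypothetical illustration: the function name and figures are invented, not anything from the post itself.

```python
# A minimal sketch of the productivity ratio described above:
# the financial value of output divided by the cost of effort.
# All names and figures here are hypothetical.

def productivity_factor(output_value: float, effort_cost: float) -> float:
    """Ratio of the financial value of output to the cost of the effort."""
    if effort_cost <= 0:
        raise ValueError("effort cost must be positive")
    return output_value / effort_cost

# A team delivering £120,000 of output for £100,000 of effort:
factor = productivity_factor(120_000, 100_000)
print(factor)  # 1.2 - above 1, so things are looking good
```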
If only things were that simple. As anyone knows who has worked in an office environment, the daily routine is a far cry from simple questions about time spent and outputs created. Business processes can be complex at best, and even in relatively structured environments like call centres, unexpected events and inconveniences can scupper even the best measurement plans. Meanwhile the number of different ways of doing things - aided and abetted by an increasingly fragmented set of technologies - means that sometimes, even choosing the most appropriate presentation technique, communications mechanism or staged approach becomes a process in itself.
Against this background, in IT circles some still have the audacity to say that this or that new technology has the potential to increase productivity; the rest of us go along with it without being absolutely sure what is meant or, indeed, whether it is true in any case. The gulf between squeaky-clean theory and chaotic reality becomes quickly apparent when people think about their own working environments. For example, consider the challenge many face dealing with the fire hydrant of email messages: any potential for increased efficiency would almost immediately vanish, as there is always more email than there is time.
While setting a gold-standard definition for productivity may be impossible, in a recent report analyst firm Freeform Dynamics has identified a number of factors that have an influence on the overall figure. Top of the list by a short head, above general criteria around efficient working and collaboration, is ensuring that people have the information they need to do their jobs, when they need it.
Information delivery is by no means the only factor but it is compelling to drill into, particularly given the benefits that technology professes to bring in this area (this is the information revolution, after all). Remarkably, only a third of respondents to the survey felt that their organisations were competent in this area - nearly half felt they were doing badly. This begs the question - is productivity seen as important at all? Perhaps it's not so big a deal, but that wouldn't make mathematical, or indeed financial sense. After all, time is money. Far more likely is that the situation is too difficult to understand, and the potential solutions too complex, to do anything about it.
In other words, businesses are beset with productivity difficulties, with no clear way to do anything about them. Frustration is perhaps inevitable, both at senior management level and among the rank and file. Which brings us to consumerisation, that most horrible of words, which describes the phenomenon of staff using non-business-supplied kit and online services to do their jobs. In complex areas like this, perceptions are as important as reality.
So the fact from the above report, that more people see consumerisation as having a positive impact on productivity than a negative one, is a telling statistic. As is the finding from another Freeform Dynamics research study on consumerisation, that the net-net productivity gain can, in reality, be marginal. "Distraction and time wasting can sometimes offset any potential efficiency gains," writes CEO and lead researcher on the report, Dale Vile.
While the gains may not always outweigh the costs, at least it puts more (perceived) control into the hands of individuals, relieving what can be very real frustration. There is plenty more in the report about how to minimise the risks while increasing the likelihood of improved productivity, but perhaps the most important point it makes is how consumer-oriented business practices from technology vendors are no small part of the problem. "As time goes on we expect at least some providers of consumer equipment and internet services to become more business-friendly; indeed this is already starting to happen," says Dale Vile.
In other words, while technology users may increase their own productivity by using their own kit, the downsides of distraction and operational overhead remain works in progress. Even as they are addressed, we can only hope that a thought is spared for what really constitutes productivity, and that genuine innovation around how to deliver information, increase worker efficiency and improve collaboration takes place in a way that significantly outweighs the continuing downsides. After more than four decades of research and development, by this stage IT should be doing better than this.
I, Technology: Can I be the first to say the 'social network' is dead?
2011-10-27
So Facebook has announced a new data centre, the latest in a series of moves to keep on a par with Google, even as the two companies continue to ladle new features into their social collaboration platforms, all the while trying to steer clear of infringing any patents. While the trolls are probably rubbing their hands together in glee, not everybody is quite so thrilled. From the punter's perspective it feels like the Microsoft vs Lotus vs Wordperfect bloatware wars all over again - while the gorillas are scaling out and stockpiling functional inventory, those sitting on the other side of the computer screen are left bemused, confused and in some cases downright cross at all the new capabilities they never asked for.
Back then, of course, we had to wait for release dates before we could rail against the burgeoning floppy-loads of function. These days, it sometimes appears, capabilities get added or removed on a nightly basis without warning. Insiders talk about this-platform-versus-that-platform and measure popularity in terms of numbers of active users, or home page settings, or messages passed, with the inevitable consequence that quantity, not quality, is the driving force. Who cares if people don't like the experience - to win means getting the most points.
Meanwhile, what started as a clever way of interconnecting individuals and passing simple messages is becoming a godforsaken mulch. Pioneer of clean communications, Twitter has maintained a reasonably simple interface, sticking to its microblogging guns and keeping its share of the headlines - but isn't yet earning megabucks. Facebook and Google, each in their own way masters of experiment, are trying just about every possible combination of options for sharing of messages, content and services, pushing the boundaries of both legality and acceptability along the way.
And meanwhile meanwhile, everybody and their dog is queuing up to offer "new and exciting" ways of connecting, collaborating and generally becoming a downright nuisance to the people they count in their networks. We're all guilty of a global affliction of over-sharing, exercising the multichannel equivalent of using every shape and size of arrow on the drawing palette, with only the excuse that everyone else is doing it. Me too.
Whatever is emerging from the frog-in-a-liquidizer mess, it isn't social networking. It's a cacophony of comments, a fire hydrant of feedback, an endless sump of suggestions for what we should be reading, watching, listening to, connecting with. Current signals indicate that it's only going to get worse: the lines are drawn for a ponderous march forward, with the big players creating increasingly complex environments which we will all continue to use, despite the growing nag that it wasn't what we signed up for. Not a Tower of Babel; more the shifting sands upon which no castle can be built.
And then, the oh-so-smarterati will say "social networking is dead." They'll cite the losers, referring back to Myspace and Bebo, recognising the falling rolls of active users, and saying it was all so predictable. They'll be wrong - it wasn't predictable, and social networking won't be any more dead than it was when it was the latest big thing. In the tech industry we've seen it so many times - the relational database management system; the client server, or the service oriented architecture; the mainframe or personal computer. All were dead long ago, and have been, repeatedly, ever since.
But technology doesn't die. Rather, it stops offering interesting opportunities for venture capitalists and media types who want to be wed to the next big thing. These are trophy technologies, to have hanging on your arm at parties, wearing something smart and with a good set of teeth. Truth is, to say, "So-and-so technology is dead," is a euphemism for it finding its place in the substrate of infrastructure. In other domains we might say tarmac is dead, or push-fit joints for plumbers. They're not dead, they just got boring when everyone else started to use them.
Social networking will never die, not as such. However, if there's one thing we can learn from Facebook's latest move it's that the future is about providing a comprehensive platform for sharing, communicating and indeed other services, integrating what we currently call 'social' facilities along the way. In themselves, such capabilities don't actually deliver anything - to continue the plumbing analogy, they offer fantastically simple to use, internationally available, amazingly real-time, multimedia pipework.
The rest is down to us - and the irony is, we're currently spending our days using such tools to talk about what we might do, rather than getting on with doing it. The world hasn't seen a better procrastination device since the invention of toe clippers. There will come a time, however, where people want to get on with their jobs, and will no doubt be happy to use social tools to support the communications they need to get the job done.
On this basis, what we may soon see the back of is "the social network". Or at least, the recognition that such a thing exists. It doesn't - in the hyper-connected global village that we are becoming, relationships exist because they really exist and exhibit social facets by their nature, not because they are programmed into such tools. Or, to put it another way, just because Arnold Schwarzenegger is following me on Twitter, that doesn't mean I should be waiting by the hyperspatial door mat for a dinner invitation.
In the future, as socially-enabled services start to supersede social for its own sake, Facebook becomes simply another software as a service provider with as much in common with, say, Salesforce.com as Google or Microsoft. Indeed, looking at the latter company's portfolio and how it is evolving, it's not hard to see what any such company should look like a few years from now. But while social networking might have been the big innovation of its time, future innovation will take place outside of the technology echo chamber, in a world where social tools are seen as just that - tools.
Don't get me wrong, high-tech has enormous power to support genuine advances in creativity, healthcare, achievement, wellbeing. Even something as simple as information sharing can have a quite profound impact, as we have seen in several cases already this year. While the social networking behemoths continue their slow march into the sand, we can only hope that a solid platform emerges that truly can be built on in the future.
[This blog was first posted on ZDNet]
November 2011
This section contains posts from November 2011.
I, Technology: Just how much privacy are we prepared to give up?
2011-11-04
This should be a short blog, as it asks a simple question, prompted by the announcement that both Visa and Mastercard are looking to sell customer data to advertising companies. However, while the question is simple enough to ask, it is not so straightforward to answer.
While I don't have any firm statistics to back this up, I can't imagine that many people are so violently opposed to having their data stored anywhere, that they systematically erase their financial and spending trails as they go along - pay by cash or cheque, seldom give out an address, always check the "no marketing" box, avoid internet spending, use a bank account purely as a repository for money and not as a base for standing orders and direct debits, etcetera. If you're out there and reading this in an Internet cafe, please do come forward.
The majority of us consumerist Westerners - I think - will have some 'form' in giving up information about ourselves to purveyors of financial services and consumer products. We mostly have a piece of plastic, indeed - as I found out to my chagrin a few years ago, when trying to pick up a rental car - some services won't accept anything else. And even if the situation isn't quite so binary ("no card, no siree") we are prepared to give up a piece of ourselves for the sake of lower cost, increased convenience or other benefits.
It's a trade - we accept payment for elements of our privacy. In the case of supermarket loyalty cards for example, we know that we and our shopping habits are being scrutinised, like "transient creatures that swarm and multiply in a drop of water." But, to our delight, we receive points, or vouchers, without really worrying whether we've got a decent return on our investment. I do, occasionally, buy something completely random that I don't need, just to throw them off the scent.
It's the same for online shopping. However clunky the algorithms appear to be - just why do we still receive books for nine-year-old girls, some six years later? - be in no doubt that every purchase is being logged, filed and catalogued. It seems such a small step, when you buy a paperback from Amazon rather than paying in cash for the same book from the local shop; the difference, however, is that the purchase of one will remain forever, indelibly associated with your name.
There was a certain gentleperson's agreement in place at the outset, given that nobody ever reads the Ts and Cs of these things. Namely: that the data would only be used for the purpose for which it was originally intended. Indeed, such principles are enshrined in laws such as the UK Data Protection Act (DPA). But purposes change, and so do businesses, and it isn't all that easy to map original intentions against new possibilities.
Enter online advertising, itself subject to regulation in various forms including the DPA. It is unlikely that the Web would exist in its current form without the monies derived from advertising, from click-throughs and mouse-overs, to tracking cookies and web beacons. Advertising is the new porn industry, pushing the boundaries of innovation and achieving great things, despite a majority saying they would rather it wasn't there.
In all probability, we just need to face up to the fact that we are making a trade. If you want to know everything about me so that you can sell me stuff, then you can pay for the privilege. Why not come to my house, see the books on my walls, the food in my fridge and watch my children dancing around in the garden - that way you can work out exactly what it is I want and offer me a highly customised set of products, goods and services. I'll make you pay - hmm. Five hundred quid for full access? A thousand?
Or, maybe, that's not what we want at all. More realistically, if someone came and offered such a service, we would tell them where to stick it. To the point: nobody in their right mind would even consider doing it in the first place. But what if, instead, every time anyone came to our house to offer anything at all - a new kitchen or a charity bag - they happened to be carrying a camera, and if I didn't remember to tell them otherwise, they would be at liberty to flog the photos?
When it comes to the Internet, we are in danger of doing precisely that - evolving into a situation that nobody wanted because the vested interests were moving faster than the regulation, without anyone giving serious consideration to the consequences. I'm all for advertising - bring it on, I say - but let's not sleepwalk towards a scenario where we find we have already given away our privacy, for free. Once it's gone, it's going to be a devil's job to get it back.
[Originally posted on ZDNet]
Solid state storage now running at a pound a Gig
2011-11-08
Just marking the moment really - a cursory browse around UK computer components sites suggests that a 500GB solid-state drive is just about available for the same number of pounds. eBuyer.com, for example, is stocking a Crucial 500GB drive for £550. At the smaller end of the scale, a generic 16GB Class 10 Micro-SD card from MyMemory currently costs £16. Sure, you can pay more and there's a load more factors than storage size, but it does look like quite a watershed moment.
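For what it's worth, the back-of-envelope arithmetic behind the "pound a gig" observation can be sketched from the figures quoted above - a hypothetical illustration only, with the 2011 prices as quoted:

```python
# Price-per-gigabyte arithmetic behind the "pound a gig" observation,
# using the 2011 figures quoted above. Names are purely illustrative.

def price_per_gb(price_gbp: float, capacity_gb: float) -> float:
    """Cost in pounds per gigabyte of storage."""
    return price_gbp / capacity_gb

crucial_ssd = price_per_gb(550, 500)  # £550 for a 500GB SSD
micro_sd = price_per_gb(16, 16)       # £16 for a 16GB Micro-SD card

print(f"SSD: £{crucial_ssd:.2f}/GB")       # £1.10/GB
print(f"Micro-SD: £{micro_sd:.2f}/GB")     # £1.00/GB
```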
If you're wondering what to put your 2.5" SSDs into, here's a suitable NAS enclosure from Synology - the DS411slim. And meanwhile, another interesting product from Maplin - a 4-slot SD Card RAID controller. One day all storage will be like this :)
Tech.Maven: What if CIOs shouldn't have a place on the board?
2011-11-14
A confession for a Friday - I've been struggling with a concept that's been around for years. It's the idea that CIOs (Chief Information Officers, or IT Directors in UK parlance) should have a place on the board. In principle this makes sense, given just how strategic both information and technology are supposed to be these days. Equally, senior technologists can hit a glass ceiling in their own careers, which can be frustrating, but perhaps not as annoying as the fact that IT still isn't taken as seriously as it could be by the business side of the house.
When I worked at Freeform Dynamics, I was involved in studies reviewing the state of IT in organisations of various kinds. In general (though I paraphrase well beyond the bounds of statistical acceptability) we saw three kinds of organisation. In the first group IT was appreciated as being of strategic value, and the business reaped high levels of benefit from its technology estate. At the other end of the scale, and in roughly equal proportion, tended to be companies that saw IT as a cost centre; perhaps unsurprisingly, these organisations derived far less value from their tech.
In the middle were the people who weren't sure whether IT was of strategic value or not - and in all honesty, they might as well have been in the third camp. Technology requires effort, and if you're not going to commit fully to delivering high-value IT, you're unlikely to reap the rewards.
With this in mind, it's not hard to arrive at a conclusion that a seat on the board for the top IT guy would be of significant value. "Get me at the table," goes the argument, "and I will be able to argue the case for technology. Get me at the table and I'll get the clout I need to drive IT into the business and demonstrate real value." While this is a perfectly valid stance, the trouble is that it is putting the cart before the horse. In the days of the industrial revolution, would the person in charge of all the lathes expect a board-level position?
Another argument for the board-level CIO centres on the 'I' - information - which is (we are told) a company's biggest asset. But again, just because something is a huge asset of strategic business value, that doesn't make it in itself worthy of the boardroom. Indeed, given that the ultimate responsibility for assets lies with the Chief Financial Officer (CFO), this is perhaps an argument for having the CIO report straight to the CFO. Heaven forbid, though it is quite common.
Once again, the problem isn't so much that the 'thing' shouldn't be treated strategically - quite the opposite. However the 'thing' itself is not the business, any more than the car pool or the buildings. The operations director of a company I used to work for referred to himself as 'head of bogs, drains and car parks', and while he was on the board, it was pretty clear in his mind that his job was to keep costs down, not to overstate the importance of such things.
However, and here's the rub, it is fundamentally important that both technology, and information, should be treated at board level. Ha-ha! you say, he's contradicting himself now, we do need a CIO on the board! Well, no, I don't believe we do, or at least there is the root of my struggle. I actually believe we need technology to be considered strategically by all board members; the act of having a separate individual responsible for IT immediately creates a communications requirement that shouldn't have to exist. It would be like someone sitting there responsible for the pencils.
Businesses are driven by strategy, and value is perceived by those who drive. Business-value-based conversations are very much for the board; new opportunities, new products and markets, companies to buy, elements to divest and indeed technologies to be adopted or dropped. These are all part of the same conversation, but it is primarily a business conversation. There's nothing wrong with having a person responsible for strategy on the board - if not the CEO, then a direct report - as long as the role is setting business strategy, which may or may not encompass technology strategy at any point in time.
The principle is sound, and it is borne out by examples such as the many articles on 'the changing role of the CIO' that are so prevalent on the Web. Perhaps the landscape will one day have evolved so considerably that, while the title remains, the role will have become far more wed to the business. However, and however competent the people in the role are now and in the future, it may well be that success can only be achieved by having a strategic business role which incorporates IT, rather than a strategic IT role which understands the business. It may well be that we need a new role altogether - the recent emergence of titles such as 'Director of Business Enablement' could be a sign that things are heading in this direction.
Or maybe we'll just stick with how things are - with frustrated senior technologists wondering how they can convince uninterested peers in the business about how important it is to have a dialogue. Perhaps this really is the only possible route - in which case expect it to be long, tiring and without any real guarantee of ultimate success.
UK SME Agenda: Too big to succeed?
2011-11-15
Almost a year ago, faced with withdrawal of support from banks and general belt-tightening across sectors, the UK government announced a series of measures to help small-to-medium enterprises (SMEs). Most welcome was Cabinet Office minister Francis Maude's announcement that government would work towards doing 25 percent of its business with the SME sector. But just how has this panned out, 12 months on?
The fact that government has challenges working with SMEs is a simple question of scale. Central government is large, monolithic and (let's be honest) bureaucratic - things tend to happen slowly, over a period of years, and projects tend to be large-scale and, in many cases, fantastically expensive. The result is a general sense of stability and long-termism but by the same token, government can find it difficult to respond quickly to requirements and when things go wrong, they do so in a big way.
Meanwhile (and again by its nature) the SME sector is small, lithe, agile and relatively cheap, without the overheads of larger businesses. Bureaucracy is minimised, meaning that expertise is easier to reach and can be better tuned to the task in hand. Some downsides are, of course, linked to size - smaller companies can find it difficult to scale and, in some cases, tend to see the world of projects as a set of nails to fit their own, metaphorical hammers.
The problems really start when large meets small. For government to engage with any company comes at a cost - the sizeable overhead of simply doing business can make what should be a small project financially prohibitive, which inevitably drives buyers towards larger projects with relatively smaller overheads. By extension, the idea that (say) a hundred projects currently awarded to large companies, should now be dealt with as a larger number of smaller projects, working with a larger number of smaller suppliers, would raise a grimace on even the most stalwart of procurement managers.
Similarly, smaller companies engaging with government procurement can very quickly come unstuck. For all the promise of larger rewards, for example in the shape of long-term fees, for a smaller business the tendering process can be as time-consuming, and indeed margin-consuming, as the project it represents. Without, however, any guarantee of return - as a result it can be just too much of a gamble to take on.
It is this latter point that Francis Maude wanted to see tackled first, in the shape of simpler, standardised pre-qualification questionnaires that would be mandated across government departments. At least in this way one significant element of the overhead would only have to be done once. At the same time, an SME panel was constituted, chaired by Maude himself, with representatives from smaller companies who could "hold the government's feet to the fire".
For all the good intentions, however, the devil could well be in the detail. While tenders may be simplifying, the aforementioned nature of government departments means that changes are slowly working their way through the system - they could still be made simpler yet. Open questions around the nature of contracts, types of business eligible to bid, the need for insurances and so on add complexity which will no doubt be ironed out over time, but time is a resource the government has plenty of, while for small businesses it can be in short supply.
Meanwhile there is the question of skills. Even larger companies can lack key skills in response to a request for proposal (RFP) - at which point they look to associates and subcontractors (and indeed, some government departments and larger service companies are seeing this as a way to hit their 25% target). Smaller businesses are less likely to have the complete set of skills in house, and by their nature, specialists are more likely to already be booked on a job. To respond to this, some organisations (such as Delivery Cell One) are looking to pool their expertise following a "micro-consortium" model.
Overall, it is not hard to see that everyone gains from the government engaging better with smaller companies. The benefits are legion: smaller projects that more quickly achieve their goals, the real potential for lower overheads and therefore reduced costs to the taxpayer; and meanwhile, a shot in the arm to the UK's comprehensively staffed and highly skilled small business community, benefiting from its expertise at the same time as catalysing growth.
While the principle is sound, questions still remain about the practice. For all its good intentions, it just may be that the very thing that the UK government wants to see changed - work smarter and deliver better for less, by enabling large institutions to work with smaller companies - becomes yet another casualty of the fact that the government is too big to do anything quickly. One thing's for sure - unlike government, SMEs do not have the luxury of waiting for the problems to be solved: they will move on or go out of business first.
December 2011
This section contains posts from December 2011.
Inition and the shape of things to come
2011-12-02
Last week was a week like any other week, the event like any other. Inition, a Shoreditch-based company providing a range of high-tech products and services to a variety of industries, had opened its doors to prospective customers, journalists and other interested parties. The company’s offices occupy the ground floor of a blocky, nondescript building, whilst the basement houses a range of machines, a screening room and an open space that can be used for studio production purposes or to demonstrate what the company offers and how it works. Peer in the windows and it was just like any other of the tech/creative businesses I walked past on my way there. So why, precisely, did I find the company so intriguing?
I was met by Jim Gant, one of the company’s founders, and account manager Kathy Hall, who took me on a tour of the various demonstrations on offer. We worked through the 3D Surface display, on which (with glasses) I could view a simple object or a street scene from multiple angles. We looked at the iPad-enabled Augmented Reality setup, headed back round the holographic display via a discussion of 3D object rendering from 2D images, and then moved onto the glasses-free 3D displays one of which, while just a prototype, was clearly the shape of things to come. We then passed the Pepper’s Ghost in a bottle, checked out the advert done for Samsung with Manchester United team players, had a go at driving a tank with stereoscopic goggles, and wandered over to the haptic devices, then 3D printers – one resin-based, which could print quite detailed moving parts, and one plaster-based, which could print in colour.
While the tour was as mind-boggling as it was whirlwind, it left me with more questions than answers. Yes, quite clearly here was a company with genuine hands-on experience of some pretty leading-edge technologies – some, indeed, that were still to demonstrate their full value. We discussed QR codes and geolocation, film and production practices, current and future opportunities. Yet despite Jim's own expertise and the company’s obvious competences, and even though business was growing and customer needs were clearly being met, I was even less sure by the end of the tour what the company stood for.
This is not so much a comment on the company, as I have taken pains to point out, but more a reflection on where such technologies as 3D, AR and so on currently sit in the scheme of things. What was obvious to me – and I cite Moore’s law as evidence – was that many of the demonstrations were destined for mainstream use. Equally, however, right now they lie outside of the mass market, which makes them appropriate for three places: domain-specific applications (such as haptic tools in dentistry); bleeding-edge adoption by rich people (3D TV); and marketing. Much of the opportunity currently lies in this latter group. Marketing, be it for products or people, music or media, is always looking for something to differentiate what may be a quite dull offering against the competition – at which point new tech can deliver something that hasn’t been seen before.
Inition's trick, however, is not to simply offer technological bling – whether they'd arrived at that point by luck or hard-earned judgement, that's what I perceived at the heart of the company's offering. 3D printing, haptic devices, new display technologies were all interesting in their own right – but they are only enablers of ‘the thing’ – be it communicating a message, connecting with an audience, supporting training or whatever it happens to be. Central to Inition's wherewithal is a recognition that new technologies will keep on coming, and while somebody needs to be on top of all of that, the real opportunity is to continuously develop and improve the value-added services that run on top.
No doubt multi-dimensional displays will be in every front room just a few years from now, and we shall all be having fun manipulating images on the fly and forwarding them to our friends, while dentists and surgeons become increasingly dependent on haptic technologies. Such capabilities, today so new and refreshing, will quite quickly become old hat. But it is to be hoped that Inition will still be ploughing the front of the furrow, learning how to make the most of the newest technologies and delivering best-in-class services to their customers so that we can all benefit at some point later on. Whatever space the company ends up in will be a space worth watching.
So what is it you do, precisely?
2011-12-07
Warning. This post is somewhat self-indulgent. If you're not interested, please skip straight to the bio.
It took me a year to update my biography. Stepping down as an industry analyst was a tough decision for me: I enjoyed the work, I loved the company and people I was working with, yet I still felt I was compromising. Compromising what, precisely, I didn't know. So I chose to deal with some other projects that I was not finding the time for - completing one book and starting another. To pick up a few writing jobs. And, in the meantime, to spend a bit of time working out what it was I wanted to do.
Very quickly I arrived at a quite succinct conclusion - I wanted to get more involved in helping technology make a positive difference. What took the time, however, was working out how to actually enact this. As I spoke to numerous individuals, marketing and PR agencies, IT vendors and end-user organisations, media, music and publishing contacts, the need became pretty clear. Each area was an obvious source of knowledge about how IT could add value - and there were, and are, great things being done. But equally, each was on its own journey, a victim of its own provenance and even, in some cases, distrustful of progress made in other sectors.
For technology to make a genuine difference, I realised, best practices needed to be learned from wherever they were emerging, and shared across domains. The key was not to focus on 'the thing' - the domain-specific elements, for example how research chemists use technology to do research chemistry, or how musicians make music. Rather, it was to look at the areas around 'the thing' which are common to all domains. To my good fortune, I realised all of these started with the letter 'C'.
The first of these, then, is 'Capability'. Technology can have a positive impact simply by being there - but so often, it gets in the way, it's too expensive to procure and install, requirements get misdefined, the results are inappropriate and therefore compromise any positive impact they might have. Best practices exist in these areas, such as agile software development and value-based project delivery, and technology is evolving to become more usable and affordable in the shape of internet-based services (aka Cloud). New capabilities are emerging all the time, which bring technology closer to people and enable more effective service delivery, and I am watching developments in augmented reality and 3D with interest.
But capability is nothing if it doesn't support our 'Creativity' and innovation. In a world that is increasingly strapped for resources, organisations have two choices - to work within increasingly squeezed margin structures delivering commodity products and services, or to identify new opportunities to create value and deliver services of their own. The publishing industry is a case in point - the literary-agent-and-printed-page model which has served so well for hundreds of years is currently being subjected to enormous change. Technology is both the problem and the solution - from one perspective it is the great destroyer, but from another it opens up new possibilities. Insight drives innovation, which is why I'm watching recent developments in big data - it remains to be seen whether deriving insight from information proves an elusive dream, but I see it as another waypoint on the journey.
'Communication' is becoming the backdrop to most human endeavour. Today's technologies enable collaboration during the creative, research and development process, and then as part of engagement with prospects, customers, music fans, intermediaries, citizens and other stakeholders - on a global scale, and in real time. Indeed, the lines between development, marketing and sales are becoming increasingly fuzzy, to the extent that technologies such as social and collaboration tools, mobile messaging and so on can sometimes emphasise conflicts between departments more than solve them. Communication goes hand in hand with privacy and identity and, like all of us, I am a guinea pig in the great experiment we are currently undertaking with our private lives.
Which brings us to the final 'C' - 'Context'. It is one thing for technology to add value to market capitalisations of new technology companies, but does it actually make us happier? Are we more productive in our working lives, are our societies more democratic, are we better educated, or financially and personally better off? Is it the right thing to help other people get online, and do computer games help or hinder the development of our young? Are we better communicators, or merely more able to fire random streams of text into the ether which undermine our individuality rather than reinforce it? What's perhaps more important than these questions is the absence of debate, which is fascinating in itself.
Putting all of this into a bio has been something of a challenge, but given that I don't have all the answers, the bottom line had to come down to what I can contribute. As a technologist with 24 years' experience I have done most things, and seen examples of the best and worst of all IT has to offer. Meanwhile, over the past decade I have been developing my skills both as an analyst and a writer, in both technology and the creative - notably musical - world.
So that's me. I will continue my research into what's going on: indeed, as I have found over the past 12 months, I have no choice, as it's what gets me out of bed in the morning. The books and articles I read, the relationships and conversations I have, the part of my brain which continues to machinate, digest and attempt to make sense of what's going on - all of it is geared towards technology and its congruence (or lack of it) with society as a whole.
In the meantime, here's the skinny: I offer writing, consulting and facilitation services, helping clients reach their audiences, tell their stories, and solve their own problems with information technology. I'm as happy helping older residents in the village understand the Web as I am advising major corporations establishing a global presence - and in many cases, the challenge is the same: it's all about dealing with people first, technology second. Am I still an analyst? To be honest I don't know, nor am I that bothered - I've always had a problem with labels, but if people want to call me an analyst that's fine. What I am clear about is that I no longer feel I am compromising, as I am practising what I preach - for example, I'm very happy to say that I'm becoming a director of a technology-oriented charity, which feeds my need to roll the sleeves up and get stuff done.
That's probably enough about me. I set up the company Inter Orbis (literally, 'between spheres') as a vehicle for what I wanted to do, and as well as my own clients I'm working with agencies, media organisations, events companies and consulting firms to help deliver on the promise of technological impact. My bio is here - if you have any questions, do get in touch (email jon at inter hyphen orbis dot com), and thank you for your time.
Twitter: @jonno
Linkedin: jonnocollins
Facebook: quakerjon
Targeted ad serving may not breach privacy laws, but it's bloody annoying
2011-12-08
We looked for a hotel in Paris. A lovely thing to do - next April, we'll be going there. How fantastic. We didn't book anything, but certainly got some good ideas.
And then, for the following week, every web page we went to included adverts about hotels. Short breaks. Booking. Bargains.
I was doing some research into online storage. I looked at the big guys, the small guys, the old and the new. I looked at domain-specific storage companies and those more suited for small companies, big companies. I wrote an article about it all.
And then, for the following week, every web page I visited offered me online storage services. Specifically a service for musicians and creative types. For that was, clearly, who had the money right then.
My wife was looking into insurance for students. Short term, keep the costs down. She found some, and bought it. Paid there and then.
And then, for a week, came the insurance ads. For students. Though she had already bought some.
Let's be clear. I've seen the slides, the importance of closer connections between people and brands, the opportunity for better-targeted advertising, for measurable outcomes, for pay-per-click services that offer far greater ROI.
But then, I'm also seeing the reality.
It's like a really crap sales guy. I used to work with one. He couldn't do the 'listening' thing, he'd try for a while and then, almost without warning, explode in a flurry of products and services that didn't quite meet the prospect's half-formed ideas of what the issues were.
It's like the bloke at the bar who just doesn't know how to speak to girls. He'd come up with some really poor, contrived chat-up line and then wonder why she walked away. After all, it worked on the video he saw.
It's like that annoying person who wants to be your friend. Who won't go away or leave you alone, who will pick up on whatever you say and twist it slightly as he or she repeats it back, showing that they didn't really get it.
That's what it's like - bloody annoying. And slightly perturbing, when you realise that the annoying person is in your computer, on the internet, following you around and piping up unexpectedly about a topic you thought you'd finished with.
It's the same on Facebook. Sometimes it's funny, sometimes inappropriate, as your comments and messages are reflected moments later in 'targeted' advertising. Sometimes it's downright offensive, and I dread to think what the algorithms do on occasions where people pour their hearts out, have breakdowns, lose the battle with drink or suffer great loss.
This is not the future of advertising, of audience engagement. This is a cack-handed attempt to make sense of fragmented data presented through a distorted lens with no knowledge of context. It's crap. But, because it's all that businesses have and it is better than what they had before, because some poorly designed policy engine works with incompletely specified rules defined from an incomplete understanding, because of all these reasons and more we are served paid ads that, for a second, feel half-relevant until we remember they've missed the boat, we were already there, the time has passed.
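That "incompletely specified rules" failure is easy to sketch. Here's a toy model - entirely hypothetical, with invented function names and data, and nothing like how a real ad platform is built - of a retargeting rule that only knows what was viewed, never what was bought:

```python
def naive_choose_ad(viewed, purchased):
    # Rule: advertise the last thing the user looked at. The purchase
    # record is right there, but the rule never consults it.
    return f"Buy {viewed[-1]} now!"

def context_aware_ad(viewed, purchased):
    # One extra signal -- "has this already been bought?" -- avoids
    # the most annoying failure mode described above.
    for item in reversed(viewed):
        if item not in purchased:
            return f"Buy {item} now!"
    return "generic banner"

viewed = ["Paris hotels", "student insurance"]
purchased = {"student insurance"}

print(naive_choose_ad(viewed, purchased))   # pitches the insurance already bought
print(context_aware_ad(viewed, purchased))  # falls back to the hotel search
```

The gap between the two functions is a single membership test - which is rather the point: the annoyance isn't inevitable, it's a consequence of rules built from an incomplete picture.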
At the moment it just makes us feel… nothing. Or a little uneasy, like we're being watched. Which, of course, we are, with tracking cookies and web beacons and the like. We know that there's no such thing as a free lunch, that if we want to make use of 'free' online tools, we have to be prepared to offer an ounce of flesh in return.
But how long is it before we just feel fed up that our open-ness is not only subject to abuse, but worse, that we have given all this information to incompetent buffoons who are going to spend the next few decades trying to sell us children's books long after the kids have left home, or medical insurance even when we have, for unexpected and quite upsetting reasons, become no longer eligible?
I don't know. Perhaps we'll collectively and quietly accept that the future is to be full of blunted, inappropriate messages being thrown at us wherever we go, most of which will fall by the wayside. Perhaps we'll simply learn to treat them as noise, to block them out - but I get a nasty feeling they'll become louder, more gaudy and in-your-face in response.
Or perhaps those responsible for producing such tools will learn, either off their own bats (which would be nice) or through a backlash in which sales go down instead of up, that our screens are not simply advertising hoardings available for rent, and our behaviours are not up for surveillance. That we don't want poison dwarves following us around asking if we've thought about booking that hotel yet, or buying that insurance. That, surprise, our lives are more complex and interesting than can be modelled in some banner advertising business model, and that - even more of a shocker - we don't want such intrusive company along the way.
I know, it's wishful thinking. I don't have a problem with advertising, and indeed, I have even bought things on a whim due to an online suggestion. But I do have a problem with people dressing up poorly construed, badly implemented and inconsiderate ideas as 'the future', particularly when the intention is to entice me to do one thing or another. Like Schroedinger and his infamous cat, just because we feel we know how to measure and even influence, doesn't mean that we can guess what behaviour will occur as a result.
2012
This section contains posts from 2012.
January 2012
This section contains posts from January 2012.
The Kernel: A home fit for start-ups
2012-01-10
How would the European technology scene – from London to Berlin – seem to a visitor from another era? His or her experience would, I think, be rather similar to the world seen by the hero of Mark Twain’s novel, A Connecticut Yankee in King Arthur’s Court.
In the book, an American engineer of the Industrial Revolution finds himself, after a bump on the head, waking in the purported age of chivalry and knighthood. Purported, that is, as he quickly finds himself embroiled in serfdom, disease, prison and, above all, ignorance from the ruling classes. “CAMELOT – Camelot,” said I to myself. “I don’t seem to remember hearing of it before.”
Equally, an industrialist from Britain in the 1960s, brought up among the optimism of Blue Streak, TSR-2, and Harold Wilson’s “white heat of technology”, might find themselves just as disoriented to wake up in today’s Silicon Roundabout.
Twain’s hero, unfortunately, was unable to overcome the attitudes of Merlin and his “pretended magic”, even armed with inventions such as the telegraph, the Gatling gun, and printed newspapers. Intellect and technological prowess came to naught when faced with the massed institutions of the time.
By Mark Twain’s own day, the United Kingdom had a more positive attitude to its innovators: Brunel, Watt, the Stephensons and not least Charles Babbage, the father of computing. This continued into the 1940s and 1950s, with Alan Turing’s wartime work, at Bletchley Park, on cryptography, and the world’s first stored-program computer, the SSEM, which was designed and built in Manchester.
Ferranti’s Mark 1 was the world’s first general-purpose computer manufactured for sale. The stage could have been set for Great Britain to be the technological capital of the world.
But even as the information revolution started to gather momentum, the British spirit of entrepreneurship lost its lustre. The UK economy declined steadily in the post-war era, so that by the time mainframes were being fork-lifted into the computer rooms of large companies, UK-based computer makers such as ICL were already struggling.
It was a different story on the other side of the Atlantic: the United States saw strong economic growth, benefitting from the liberalisation of trade it had negotiated in the last days of the war. The first venture capital companies took root in California in the early Seventies, creating a financial engine room that drove Silicon Valley to world prominence.
Californian free thinking combined with New England sensibility and the Seattle work ethic to create and maintain technological superpowers such as Apple, Microsoft and IBM. In the UK, though, engineering prowess failed to be matched by funding, or political support. Venture capital firms were few and far between, with Investors in Industry – 3i – standing out like a light in the darkness even as many of Britain’s brighter sparks were being enticed across the Atlantic.
In parallel, the remaining elements of the UK’s manufacturing industries were diminishing. Manufacturing constituted 40 per cent of the UK economy at the end of World War Two; by the Millennium, this figure had halved. Britain had the intellect, knowledge and skills to lead the world in technology. But under-funded, over-bureaucratic and distracted by disputes, it was not to be.
It would take a period of considerable economic and social upheaval to set the scene for Britain’s entrepreneurial renaissance. The wave of privatisation and deregulation of the Eighties may not have met with universal applause. But it did set the scene for simpler finance and employment law – both important factors in creating an environment more suited to startup businesses.
It was not until the mid ’90s, however, that those changes made themselves felt in the technology sector – as another great British mind, Tim Berners-Lee, invented the World Wide Web and spawned a global boom in software and services. The initial dot-com boom was driven from the US, but the mad scramble online was highly enticing to British fund managers and private equity firms.
Ben Tompkins, partner and investment manager at Eden Ventures, says, “There was a rush to join the market in 1999-2000. That’s when Britain got into its stride.” It may have been a bubble, but the trick was to get in early and the excitement was Transatlantic.
Early UK startups (for example, online auction site QXL) were often close mirrors of their US counterparts, a trend reminiscent of the motorbike and electronics sectors during the Seventies expansion of Japanese manufacturing. “If you had a web site that copied something in the US, it would be funded,” remarks Richard Eaton, a partner at law firm Bird and Bird, who was involved in the dot-com sector at the time.
Few existing businesses could afford to ignore the much-hyped “e-business or no business” mantra, injecting further capital into the system and driving demand for software development and web design skills. Of course, the dot-com boom turned to bust. But even while the still-nascent UK venture capital community nursed an extended hangover, an industry subsector had been formed and an ecosystem had been created.
Wounds were tended, and by 2005-2006, investment started to pick up again. Today, VCs are older and decidedly wiser, and a wealth of creative talent is available – possibly a spin-off from the surfeit of media-related degrees that have attracted ridicule in the past.
The wave of interest around social networking has spawned a number of companies, some of which – like Bebo, Tweetdeck and Last.fm – have already achieved “exit” through buy-out. Unsurprisingly, there is less investment in hardware companies than in the past. “We’re seeing a lot of Web 2.0 and social media companies,” says Richard Eaton. “If they get £50,000 to £250,000 that will see them through for one or two years. The only fixed costs are their employees and the computers they use.”
The Industrial Revolution may have started in the Midlands and the North, but, for today’s start-ups, London is the hub of activity, for many reasons – not least because it is a hub of activity. “Interest in Shoreditch started largely because the rents were so cheap,” explains Rassami Hok Ljungberg, a communications consultant working with startup showcase TechPitch 4.5. The Government has capitalised on this interest, with initiatives such as the Tech City Investment Organisation heavily promoting the location.
Experts such as Bird and Bird’s Eaton are unconvinced that Shoreditch is an ideal location. “It’s trying to force something into where you wouldn’t think of having a technology hub,” he says. Shoreditch lacks a major research-based university comparable to Stanford, or indeed the ecosystem of financiers, lawyers and advisers that Silicon Valley boasts.
“All the same,” Eaton adds, “companies such as Cisco and Google have been very supportive – it can only help.” Or, as Georg Ell, general manager of social software site Yammer, puts it: “When Yammer made a decision to start a European headquarters, the first decision was to place it in London, on the basis of the talent pool we felt we could recruit from and also that so many potential customers, multi-nationals, are headquartered here.”
That is not to say that UK start-up culture is a London-only phenomenon. Science parks around the country, often connected to universities, have also notched up success stories. Cambridge is the obvious – but by no means the only – example, spinning out Autonomy, Zeus and ARM.
Meanwhile, regional governments have their own inward investment and development schemes such as Software Alliance Wales and Connect Scotland. In addition, the pan-British Technology Strategy Board, formed in 2008 with a remit to “promote and invest in innovation for the benefit of business and the UK,” regularly funds competitions, matching funds in technology startups in fields from metadata to genetic research.
Another change for the better is the propensity for the British to actually see entrepreneurship as “a good thing”. Traditionally, Britons have been very good at treating innovators with cool disdain. “It’s fine as long as they don’t go on about it all the time,” is the general sense. Recent TV programming such as Dragons’ Den, The Apprentice and even The X Factor should be acknowledged for reinforcing the idea that hard work deserves not only reward, but also praise.
Why base a company in Britain? A number of fortunate accidents of history and geography suggest themselves. History, because it could never have been foreseen that English would become the de facto language of global business. Yet its country of birth is undoubtedly a beneficiary, helping support that “special” relationship with the US that encourages investment from the other side of the pond.
Geographically, with the British Isles sitting between the US and the rest of Europe, the UK is a good starting point, along with Ireland, for US companies that want to expand onto the continent, as well as for European countries that have their sights set on the US market.
“For Scandinavian startups, their first port of call is the UK once they are developed within-region,” says Rassami Hok Ljungberg. But Britain offers more than a beach-head. “It’s easier to start up in the UK than elsewhere in Europe,” says Hok Ljungberg. “You don’t have to have capital up front, which makes it so much easier – and winding up a company is easier too.”
Unique benefits of the UK include more flexible employment law, tax breaks, post-nationalised industries, and the right kinds of inward investment. These factors lower the barriers to entry for UK-based innovation.
But it is not all plain sailing, particularly given the reduced appetite for risk when it comes to funding. “Seed capital’s higher than ever before, but there’s not enough capital in the mid-market,” says Ben Tompkins. No doubt the current global financial crisis is not helping.
“It’s about the risk profile,” says Richard Eaton. “A lot of traditional VC players are just not operating in this space anymore. Some companies will get funded, but that’s where the squeeze is – it’s being called ‘The Valley of Death’.”
For these reasons it is not possible to consider London in particular, or the UK in general, as a replica of Silicon Valley. Quite clearly, it is not. “The culture is very different,” says Eaton. “There’s much more willingness to lose money in the US – it’s generally accepted that not every deal will succeed. European VCs don’t have that – they came out of the private equity market, whereas the Silicon Valley venture capital community was formed from scratch.
“Plus [in Silicon Valley] you’ve a large concentration of big companies all based in the same area, meaning a steady supply of people and a close eye on acquisition targets. It’s like one big incubator.”
Even once a product reaches release, the complex operating environment in the UK does not make it straightforward to do business – although, sometimes, this can be a good thing. “The UK has one of the most sophisticated, mature and diverse markets in Europe,” says Hok Ljungberg. “Anglo-Saxon thinking can be a bit negative but it causes a constant, critical evaluation of everything which is very useful. If you can succeed in the UK, you can take it anywhere else in Europe.”
This makes the UK a better place to start up a business than many other countries. Berlin – increasingly seen as London’s great rival for start-ups – is full of bright, dynamic people but the city is still emerging from the shadow of the Cold War. “A couple of years ago, it was illegal to be an entrepreneur in Berlin,” says Tompkins, just back from a trip there.
“There’s huge enthusiasm but limited capital. It reminds me of the UK in 1999 – a lot of cloning is going on, copying US success stories and creating local language versions [of services].”
In Paris, attitudes may be more positive but employment law makes it tough to afford growth in personnel. In Stockholm, Swedish law has no concept of share options operating on a tax-free basis for capital gains. And in Amsterdam, you have to go to court to change the articles of association of a company. Startup Britain may be hard, but it is fair.
What would Mark Twain’s Yankee think if he came to the UK today? The country has a wealth of resources, both intellectual and practical, that could be better utilised than they currently are. Britain is a long way from achieving the kinds of funding models, legal ecosystems, or indeed the general spirit of entrepreneurship that exists across the US.
But perhaps it will not need them. Start-ups such as Seedrs are questioning the roots of innovation funding itself. “We’re pending FSA approval but we hope by the spring of 2012 we will be providing hundreds, or even thousands, of companies a new way of raising capital effectively, from ‘the crowd’,” says Jeff Lynn, co-founder and chief executive.
Britain does still occupy a unique position in Europe, but again this could change. Berlin may be in a similar position to where the UK was in 1999, but the city will not stay in intellectual clone territory forever. The best estimate is that the UK has a decade’s grace before other European cities catch up and begin forging their own new directions.
The UK is still weak on inward infrastructure investment and current manufacturing forecasts do not bode well. A state of financial crisis persists, and it remains to be seen whether David Cameron’s recent use of the European veto will have any lasting impact on Britain’s business interests – though perhaps the UK’s unique geographical position will trump even that card.
And the US demonstrates the importance of investing in, supporting and rewarding designers, innovators, engineers and technologists. The education system is the place to encourage an entrepreneurial spirit, by building the confidence to step outside the norm, and the self-esteem necessary to counter any fear of failure.
Entrepreneurs – and those who back them – warn that UK must review its institutions, governmental and corporate, to ensure these encourage and promote future growth, even if this is at a cost in the short term. If there’s one lesson the UK’s start-up culture can learn from the US, it is the importance of investing in, supporting and rewarding those in the driving seat, so that they no longer feel the need to go elsewhere to succeed.
“He’s one of the cleverest people I’ve ever met,” says Richard Eaton of Dr Mike Lynch, Cambridge graduate and founder of Autonomy, recently acquired by Hewlett Packard for $10 billion. “He’s a mathematical whiz, but equally, he could see that if he wanted to succeed, the US was the place to go.”
Perhaps if Twain’s Connecticut Yankee travelled in time to the UK in 2012, or perhaps more realistically, in 2015, he would feel more inclined to stay.
[This article originally appeared in The Kernel]
Open data: not an open, nor shut case
2012-01-10
Who wouldn't want data to be open? In IT circles, the usual opposite of 'open' is 'closed', but it could also be 'shut', 'exclusive', 'inaccessible' or 'locked-in', all of which can be associated with feelings of frustration, of missing out, of being prevented from sharing in whatever benefits lie within.
The reasons why data can be rendered inaccessible are not always simple. For a start, data has an inherent cost to collect, collate and otherwise control. Sensors need to be placed and connected, spreadsheets and databases to be filled and formats to be converted, servers and storage devices to spin up and keep running, people to pay and problems to solve. That cost needs to be amortised somehow. Whatever 'open data' can mean, it certainly can never be considered to be 'free'.
Once data has been gathered, however, it can be given away - either because an individual or organisation wants it so, or because a government (through actions or laws) deems it so. Governmental information sources are obvious candidates for the 'open' tag, in the UK's case with data.gov.uk collating and enabling access to as much non-personal data as is feasible. This is, of course, to be applauded - it serves a very useful purpose, in that government does not, by itself, have the resources to do all that is possible with the data.
Examples such as the Ordnance Survey, roughly 50% of which is funded by the tax payer, show, however, that ‘giving’ should not be a simple, blanket stance. Data has a cost, and a value that people are prepared to pay for, either in its raw or derived form (such as maps). As such, it is worth considering the commercial value of data, and balancing this with the reality of how that data was funded, e.g. by the tax payer.
Which brings us to commercial data sources. Some companies have built substantial businesses on the back of data gathering, interpretation and analysis - indeed, in the IT industry alone the global IT analyst market is said to be worth $2 billion a year.
Some sectors, such as pharmaceuticals, are highly dependent on data: they spend a lot of money on it, and their products are largely based on it. From a business perspective, profit margins boil down to the amount of revenue that can be created from a new product, minus the amount it cost to create it - in the case of pharma, then, anything that can be done to reduce the cost of creating data has to be a good thing.
This position becomes more tenuous when organisations use tactics to make it more expensive, or even prohibitive, for the competition - for example through the use of patent law. The dubious practice of companies collecting samples from rainforests and attempting to patent anything with potentially healing properties is an example of this.
Data recipients can also misuse information that they have bought, or which has been freely given. A clear challenge is the nature of aggregated data analysis, for example being able to link internet search information with current events and depersonalised references to derive quite invasive information about specific people.
We've heard, for example, how Google can follow an outbreak of an illness by tracing the locations of where medical advice-related searches are taking place. While this is a positive example (in terms of informing local clinics with an anonymised picture), it is quite another thing to then link this to commercially available databases and direct-market the antidote.
Things can get even more hazy where the data source, the collection organisation and the data customer have different interests. Who owns your data, for example, your blood group or your credit history? Who owns the data about the shape of your garden, or the drainage properties of your fields? Who owns the information about the chip in your dog? The recent example of NHS data being sold, albeit anonymised, to pharmaceutical organisations is indicative of what a can of worms this can be.
The aggregation question raises stark questions about privacy, and while computer company bosses from Scott McNealy to Eric Schmidt have declared privacy to be dead (with McNealy famously saying, "You have zero privacy anyway. Get over it."), privacy might not be as dead as they claim – at least in the minds of ordinary people. Nonetheless the nature of privacy is changing, in that people have to decide, on the basis of a fragmented and sometimes deliberately obfuscated picture (in the case of social networking site T’s and C’s), what they are prepared to give away.
The question of privacy looms at a national level as well. Julian Assange may have become the darling of the 'open' movement for his role in Wikileaks, as the organisation cast a torch around some of the murkier corners of our political systems. But a sword such as openness can cut both ways, and needs to be handled with care – without citing specific examples, there are some things it would be insane to publish, particularly if they put lives at risk.
Opening data then, like opening doors, requires forethought. As Pandora discovered when she took the lid off her mythical jar, open is not always better than shut: it depends on what lies inside, and how it relates to everything else. Nor are the protections currently available universally good or universally bad – we need privacy laws that respond to today's realities, along with patents, IP and copyright, but such things can be misused, as can any tool.
So let's not just bandy around the 'open' tag as though it will always be a good thing but rather, let’s see 'open' as an opportunity to decide, for each source, the benefits of enabling access while keeping the risks in mind. The one thing we should keep open, above all, is our minds.
[Originally published on publictechnology.net]
I, Technology: Is using computers to heat houses such a daft idea?
2012-01-17
OK, it's cold. Winter has finally come - or come back, given the cold snap of early December. And there's nothing like sitting in a freezing room, worrying about the cost of oil, to focus the mind on alternative methods to heat a house.
Meanwhile, we are told, computers are excessively power-hungry. Particularly desktops with their 200-watt-plus power supplies. All that energy has to go somewhere, according to the law of conservation of energy: indeed, it is largely dissipated through the various processing, memory and other chips as heat, which is then sucked away and blown out of vents.
To all intents and purposes, then, a computer is running as a heater, albeit perhaps a rather inefficient one (though by the same token, the only other places that energy can be released are into noise and light, so it is quite efficient, in its own way). The daft idea is, why don't we make more of this capability? That is, actually use computers as a source of heat?
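To put rough numbers on it (a back-of-envelope sketch: the 200-watt figure comes from the text above, while the 1 kW heater rating and 10-hour day are illustrative assumptions):

```python
# Back-of-envelope: a desktop PC as a space heater.
# Assumptions (illustrative, not from any vendor spec):
#   - the PC draws a steady 200 W, essentially all released as heat
#   - a typical plug-in electric heater is rated at 1,000 W (1 kW)

PC_DRAW_W = 200     # watts consumed (and ultimately emitted as heat)
HEATER_W = 1000     # a small electric space heater, for comparison

# Heat energy released over a 10-hour working day, in kilowatt-hours
hours = 10
pc_heat_kwh = PC_DRAW_W / 1000 * hours    # 2.0 kWh
heater_kwh = HEATER_W / 1000 * hours      # 10.0 kWh

# How many always-on desktops would match one heater?
pcs_per_heater = HEATER_W / PC_DRAW_W     # 5.0

print(f"PC heat output per day: {pc_heat_kwh} kWh")
print(f"PCs needed to match one 1 kW heater: {pcs_per_heater:.0f}")
```

In other words, one desktop is a fifth of a small space heater: not enough to warm a house on its own, but far from negligible in a single room.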
A precedent has already been set by data centre designers. Server rooms are notoriously un-green, in that they gobble whatever power is made available to them and spew it out as heat which then requires even more power to carry away. News stories about data centres warming nearby buildings or even greenhouses appear with reasonable frequency - the cynic in me says that it's an attempt to deflect attention, but equally, it's good that the heat isn't being completely wasted.
So, why not homes? Mainframes house chips on ceramic substrates, and ceramics are also used in heating elements - I'm showing my ignorance to an alarming extent, but could it be that a technology used to insulate could also be used to distribute heat? Perhaps not enough to warm a whole house, one might argue, but… let's think about this a bit. Rather than a radiator, would it be possible to create a wall-mounted device, architected to achieve high temperatures purely by performing calculations? Of course, these could be random floating point operations, but perhaps, more usefully, they could be non-random, programmed to achieve a goal - help with finding the cure for AIDS, for example, or supporting the search for little green men.
To take this one stage further (and I know I'm stretching things to the limit here), perhaps such processor time could even be rented. For money. To an organisation that could make use of it. Given an appropriate, pre-tested network connection, maybe Amazon could take advantage of my radiators for burst capacity for its elastic cloud service. At a cost, which could offset, or even pay for, my heating bills. I might even turn a profit.
But is this really stretching things? After all, micro-generation from solar panels might once have been seen as mere fantasy, but a flurry of neighbours have bought into such schemes recently. To the extent that passing conversations have moved from being about the weather, to the return on investment that a sunny day can bring.
Home heat through processing might seem similarly out there, but the major vendors are at least thinking about it. Microsoft Research calls the concept the 'Data Furnace', suggesting that data center (sic) owners could save hundreds of dollars per server per year in cooling costs, if servers were distributed around willing (and appropriately equipped) households.
While the concept does have associated challenges, such as data security (which could likely be handled with encrypted virtual machines), it does have a lot going for it. Not least that less heat would be generated by centralised data centres, and more heat would be available for households, saving on personal fuel bills.
Indeed, it's hard to think of one element of technology that doesn't already exist to make this happen today. Perhaps, in fact, the idea of using computers to heat houses is not that daft after all.
[First published on ZDNet]
87 days to go...
2012-01-18
That's 87 days of self-abuse and semi-sobriety for me, 87 days of grumpiness for everyone else, but all in a good cause as I'm running the Paris Marathon in April. As previous fundraising has been for overseas causes, I wanted to do something closer to home - so I've chosen two charities: first, MusicSpace, a music therapy charity based in Bristol, and second, Isabel Hospice, based in Hertfordshire.
http://www.justgiving.com/musicparis and http://www.justgiving.com/hospiceparis for further information!
March 2012
This section contains posts from March 2012.
The Facebook IPO, and where it takes us
2012-03-02
The long-awaited Facebook IPO is nearly upon us. While there has been plenty of speculation about the company’s valuation, less has been said about how it’s going to achieve the quoted numbers, and its potential impact on the market. These questions are inextricably linked, so here’s a quick summary of the first.
Calculations linked to its SEC filings suggest a ‘worth’ of $5 Billion. However, trading on private exchange Sharespost implies a valuation of $80-85 Billion (understandably creeping up, given the suspense), and the general line is that the company could be worth as much as $100 Billion.
The term ‘worth’ should be used guardedly of course, given that share value is a construct existing entirely in the minds of investors. Some companies achieve the dubious distinction of a valuation that is less than their actual net assets – the stock broking equivalent of being worth more if you were melted down for scrap.
But just how much is Facebook’s valuation actually grounded in reality? The company turned over $3.7 Billion last year, and made a profit of $1 Billion. In other words, paying back investors would take 100 years at current rates. So, there has to be more to it than that.
By way of comparison, Google reported revenues of $10.6 Billion in the last three months, and profits were up 6.4% to $2.7 Billion. Even with unforgivably shoddy maths, that equates to a return an order of magnitude greater than Facebook’s. Google is currently valued at $180 Billion, roughly twice the suggested ‘value’ of Facebook.
Interestingly, at the time of its own IPO, Google was valued at 121 times annual sales but now trades at about 20 times – which is six times less – roughly the same proportion, one could say, that Facebook is overpriced. In other words, based on existing considerations Facebook shares are quite likely to drop to about 20% of their initial offer price.
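Using only the figures quoted above (all approximate, and assuming the upper-end $100 Billion Facebook valuation), the comparison can be sketched as:

```python
# Rough valuation arithmetic from the figures quoted in the text.
# All numbers are the approximations cited above, in billions of US dollars.

fb_valuation = 100.0    # the upper-end figure being floated
fb_revenue = 3.7        # Facebook revenue, last year
fb_profit = 1.0         # Facebook profit, last year

goog_valuation = 180.0
goog_quarterly_revenue = 10.6
goog_quarterly_profit = 2.7

# Years to "pay back" the valuation at current profit rates
fb_payback_years = fb_valuation / fb_profit                        # 100 years
goog_payback_years = goog_valuation / (goog_quarterly_profit * 4)  # ~17 years

# Price-to-sales multiples (annualising Google's quarterly revenue)
fb_ps = fb_valuation / fb_revenue                        # ~27x
goog_ps = goog_valuation / (goog_quarterly_revenue * 4)  # ~4.2x

print(f"Payback: Facebook {fb_payback_years:.0f} years, Google {goog_payback_years:.0f} years")
print(f"Price-to-sales: Facebook ~{fb_ps:.0f}x, Google ~{goog_ps:.1f}x")
```

On those numbers Facebook would float at roughly 27 times annual sales against Google's current 4 or so, which is the gap the growth story has to close.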
Where’s the growth coming from?
Let’s go with the former scenario, for the moment. Additional revenues could come from three places – growing the subscriber base from the current model, or providing additional services to advertisers, or selling additional services to subscribers. Facebook currently earns about a dollar a year per subscriber in terms of revenues – so to grow organically from its current base it needs to expand to five times as many users – about 4 billion, or roughly everyone on the planet with the capacity to read and write.
Advertising is the company’s biggest revenue stream, but while advertising revenues are increasing, a 500% uplift would result in a similar increase in expectations on the part of advertisers. One can only speculate how that would go down, even if the quality of click-throughs could be increased to reduce advertisers’ acquisition costs per lead. (Note: Ad pricing increased 18% in 2010, the number of ads increased by 42%)
It is difficult to see how this could happen without further erosion of individual privacy. The Facebook Open Graph, for advertisers, is predicated on the idea that not only are people prepared to give up their privacy on the premise of ‘free stuff’, but also that they will shop their mates. Or as Facebook puts it, “Helping brands connect with your friends.” You’ve had a Starbucks or bought some Levis – perhaps your pals would like to do the same?
So, what about selling additional services? Facebook’s biggest area of growth (up five-fold in 2010, to $557 million) is currently ‘payments’, i.e. ‘freemium’ apps such as Farmville, in which you can pay for additional stuff such as a rake or a bale of hay. The explosion in the gaming market overall suggests this could be an area of considerable growth for Facebook – but, as the company itself mentions in its 22 pages of risk factors in the IPO prospectus (LINK: http://www.sec.gov/Archives/edgar/data/1326801/000119312512034517/d287954ds1.htm#toc), much of this growth could be outside of the online platform’s control.
Speaking of the prospectus, the company is a lot less verbose when it comes to future strategy. The model is the model – and indeed, many of the company’s announcements (such as Open Graph, say) do, in hindsight, look like they have been implemented in order to give the IPO a bit more meat. But the model is the model, the plan is to grow by continuing to do what the company is already doing, with no suggestion of other rabbits to pull out of the hat.
“More of the same” includes mobile, possibly the most intriguing area for the company. Facebook is the most downloaded app on smartphones today, which is great news for the company’s footprint. Not so great for advertisers, nor for developers as the mobile Facebook client doesn’t support ads or apps. Indeed, this is quoted as a risk for the company – the IPO document suggests incorporating sponsored stories into the mobile news feed, but again, this is untested.
The broader impact
What, then, of impact on the wider market? All of the above suggests that if Facebook is to grow, it will need to start eating the lunches of other companies. Ruling out the unlikely scenario that existing punters have even more time to spend on social networking sites, Facebook’s required gain will require a level of pain from other providers, in particular Twitter and Google. Twitter, which is probably Facebook’s biggest contender for eyeballs, is not going anywhere, and Google is arguably the most creative of the three.
If the majority of Facebook’s revenues are from advertising meanwhile, the company will have other online advertising models in its sights. Like Google, Facebook operates a walled garden approach. While both offer a more attractive, ‘joined up’ proposition to advertisers, given the wealth of information they are building (rightly or wrongly) about their subscribers, the broader advertising world is also evolving – for example with web beacons and retargeting of ads.
Advertisers are currently investigating every potential alleyway down which they can connect with customers. Whether people like such models is a moot question – the important thing right now is that they work, and that advertisers will be clinical in how they assign their budgets, according to detailed analysis of Cost Per Acquisition (CPA) and other metrics.
Which brings us, inevitably, back to the question of privacy. You may be of the “Privacy is dead, deal with it” school of thought, but this makes some quite dramatic assumptions about both forward behaviours, and the fickleness of youth. Generation Z, we are told, doesn’t give two hoots about privacy. But equally (and following the same, misguided stereotyping logic), it doesn’t give two hoots about anything. Brand loyalty is everything, but loyalties can change very quickly.
Where else could Facebook have an impact? The answer to this question returns to developers. As things stand Facebook offers a highly scalable, globally accessible, advertising funded application platform, suggesting the real battleground for the company is more with Platform-as-a-Service companies such as Microsoft, Amazon and indeed, Google than with social networking providers. The latter have been targeting corporate apps and standalone startups; whereas Facebook’s offer is decidedly consumer (with undertones of Apple vs Microsoft).
Developers lead to startups, and whatever comes out of the IPO, Silicon Valley is likely to gain a number of angel investors (LINK: http://www.kernelmag.com/comment/opinion/1279/the-age-of-angellist/) with positive attitudes about the future success of the company, and hence the platform. Startups lead to diversity, and diversification could well be the key to Facebook’s future – though as the current model goes, it is more likely to bask in the reflected glory of those building on its platform. One can almost imagine a ‘Facebook Inside’ marketing campaign in a few years’ time.
Could Facebook itself diversify?
Given Mark Zuckerberg’s mission statement, “To make the world more open and connected,” there is no reason why Facebook-the-company needs to stop with Facebook-the-social-portal. The idea of Facebook smartphones, or indeed, Facebook SIM cards has already been suggested. However, to spread its wings it will need to enter areas of the market where it lacks first mover advantage. It has not succeeded in winning over email and messaging; it isn’t a player in videoconferencing; it doesn’t have a stack play; it has no enterprise offering. And to put it bluntly, the grudging acceptance of the Facebook consumer brand is unlikely to translate into the corporate space.
Perhaps, quite simply, Facebook’s future remains inextricably tied to the future of social networking. The company was always a bet, and it remains one. Thus far it has paid back in spades. Mark Zuckerberg and his merry men discovered a seam of gold that nobody had yet spotted – whether it got lucky or not is irrelevant. The real question is whether that seam of gold goes even deeper, or whether Facebook will have to compete more directly with other companies in the future.
The fact is that for a generation, together with Twitter, Facebook is social networking. However, while the company has achieved a staggering level of success and defined a whole new method of global communications in the process, the company’s ambitions in terms of the future of social remain relatively paltry.
Somewhere along the way, Facebook seems to have confused personal interaction with genuine, emotional engagement; equally, if the company doesn’t sort out a more solid (i.e. revenue-earning) mobile strategy, someone else will. The future in all its reality-augmented, near-field, sentiment-analysed glory will be device-independent, and Facebook’s business model (while not its reach) is still very dependent on the computer browser which, we are told, is seeing its twilight years.
So, will Facebook’s IPO have an impact? About the only thing we can say with any certainty is that a very short list of people are going to become very rich indeed.
[A shorter version of this article was originally published on The Kernel]
Public Technology: CloudStore is only the start of needed procurement reforms
2012-03-08
I understand a collective sigh of relief was heard as the UK Government CloudStore finally came into being this month. The pent-up frustration experienced by government departments when it comes to IT procurement now has a solution, at least for cloud-based services.
A G-Cloud catalogue entry equates to pre-vetting, and in many cases therefore, the difference between procuring a service or not. Whereas in the past it was both too costly and too complex to acquire a cloud-based option, such hurdles have now been considerably reduced.
Elsewhere of course, it's procurement business as usual. Current conversations with public sector organisations take me right back to my days as an IT manager for a large organisation, where I had to order everything in bulk to counter the overheads of procurement. An order had to be at least 500 quid, or it just wasn't worth it.
The procurement challenge impacts not just IT but business stakeholders, in that the only conversations worth having are for the big stuff. Need an entire new IT system? Let's talk. But if you just want a new feature enabled on an existing PBX, sorry, not worth our while.
Whatever the higher-level views around consumerisation or cloud, the fact is that IT costs are becoming increasingly fragmented. In this world where users are increasingly doing things for themselves, it should be possible to buy a tablet, or a printer, or an 'app-for-that' without needing to sign forms in triplicate or drive to head office. I know of one example of where staff buy their own printer paper from the local supermarket because it's just too painful to wait.
The other knock-on is, of course, with the smaller supplier. Despite the fine efforts being driven by central government about providing more business to SMEs, the overheads inherent in the process make this very difficult. Even if the playing field is levelled, small companies simply can't afford the lead times that government procurement processes still entail.
Equally, even knowing who to talk to can be a challenge. If you're an IT supplier with a new, innovative solution and you want to find out whether anyone in the MOD, or HMRC, or the NHS might be interested, who should you talk to? In all honesty, even if you find an appropriate person, the chances are they're going to be too overwhelmed with everything else to have the time.
At a macro level, several procurement efficiency initiatives are underway and frameworks are being developed (such as the Public Services Network), both within and across departments, as there have been in the past. I genuinely wish them all the best, with one caveat. Any programme of change that is going to take more than two years to implement will almost inevitably be overtaken by events.
Perhaps one day, all procurement will be as straightforward as the CloudStore is intending to be. It's still early days, and there are no doubt lessons to be learned (for example, ensuring checks and balances around demonstrable business ROI on purchases). But we can hope that it becomes a flagship example of how procurement can be done, if the will is there to change.
[Originally published on publictechnology.net]
April 2012
This section contains posts from April 2012.
The Kernel: The day the music died
2012-04-04
Do you believe in rock and roll? Can music save your mortal soul?
When asked what American Pie actually referenced, Don McLean was reputed to say: “It means I never have to work again.” While he may have been among the handful of lucky ones to have achieved hit record nirvana, many pundits still hanker for the days when all hopeful musicians had to do was send a thousand demo tapes, tour their backsides off and wait for glory.
Those days are gone, of course. The music industry is dead, gone, shuffled off this mortal coil; mere carrion to be picked apart by maudlin journalists. Behind the pronouncements about the current state of the biz from commentators, analysts and industry bodies lies a conundrum, however. Namely that people insist on continuing to listen to the damned stuff.
Today’s youth appear to be the worst culprits – not content with simply graffito-ing bus shelters or terrorising grannies on park benches, they are doing so against a back-beat of new (and un-danceable) tunes. And their generation-punk parents are no better, fathers and sons glued to YouTube and Spotify.
So, what’s really going on? Have the harbingers of industry doom got it nailed? Are we heading towards a world where creative acts return to busker status, mere sideshows in a Simon Cowell theme park or earnestly grateful participants in some free music utopia?
As with all such questions, the answer lies somewhere in between. For sure, there’s real pain being felt by many participants in the current music business, and potentially real gains to be had. If only the participants could work out where, and how.
Make the bad noises stop
Take a certain perspective on music’s recent fortunes and it is easy to get morose, particularly if your annual bonus, or indeed your mortgage depends on them. Total album sales plummeted steadily from 1999’s figure of 940 million to only 360 million in 2010, with only a minor blip in 2004. Not a great story to tell the bank manager.
The industry has put the blame firmly at the door of torrent sites and other music piracy tools, claiming up to 95 per cent of all music is illegally distributed and therefore unmonetised. While “protectionist” music publishers have been painted as the bad guys in the affair (they can, at least, be censured for their outrageously wishful thinking that every single illegal download corresponds to a lost sale), it’s difficult to imagine anyone in the same position taking a more positive stance.
Indeed, many artists have expressed similar anxiety at this erosion of revenues. Not just Metallica; many smaller, independent bands have suffered all the pain without having a major label’s lawyers to defend them. The comfort blanket of a record deal has proved scant comfort for many a good act, as cash-strapped majors have made efficiency savings by removing all but their fattest cash cows (such as Pure Reason Revolution, culled by Sony/BMG in 2006) from their rosters. To no avail. Of the big six, once-proud behemoths of music, two have already been assimilated and a third (EMI) is in the process of being, pending antitrust checks.
Digital music business models continue to baffle, meanwhile. Spotify is repeatedly being put under the spotlight about whether it pays decent royalty rates to artists, a topic on which it prefers to keep stumm (the answer: probably not, but have you any idea how much those negotiations cost?). Meanwhile, Apple’s iTunes has gone from industry darling to monopolist by taking a 30 per cent cut of every song sold through its store, even as it “owns” 70 per cent of the legal digital download pipe.
And still, hopelessly hopeful young bands hold on to the dream of getting signed by a record company, even as they upload their crown jewels to the free-for-all that is YouTube, SoundCloud and their ilk. What chance does anyone stand?
It’s not all awful
Of course if you are Coldplay, Cliff Richard or Kasabian, life remains a peach. You can record an album, head out on tour or release a DVD in the knowledge that each will more than cover its costs. “The people making it tend to be those who have already made it,” says Ed Averdieck, formerly of Nokia Music and OD2. Frustrations about artistic freedom and unfair contracts aside, today’s Don McLean types remain in a strong position.
As for the biz, recorded music sales are actually up, for the first time in seven years. Fingers crossed that it isn't another seven-year blip, like the one seen in 2004. Other segments of the market are also doing well, including live music revenues, which increased year on year until a fall in 2010 (which could be put down to stadium-filling bands simultaneously not touring).
Move outside of the traditional music business, and a wealth of new possibilities abound. Radiohead has shown all and sundry how it is possible to be a commercial success while still cocking a snook at the establishment; many bands have proven beyond doubt the pre-sale model to fan-fund albums or tours; the market for premium versions of albums such as Pink Floyd’s Immersion editions confirms a continued appetite for both physical music and associated flummery.
That’s before we even get to the rosy-cheeked optimism of music-related start-up businesses. It’s hard to believe that Spotify only opened for registration three years ago; even more astonishingly, YouTube (30 per cent of downloads are music videos) has been around for a mere seven.
It all adds up to evidence, surely, that the industry is alive and well? Perhaps. The trouble is that nobody seems to know what’s going on, nor have any real control over the direction things are going. “Music is messy, organic and human,” says Dan Crowe, CTO at live music site Songkick. “You can’t force it to happen.” But with business being all about a guaranteed return on investment, where exactly can more certainty be found?
Speculate, accumulate?
We can all speculate and philosophise of course, throwing out concepts and theories like long-tail or crowdsourcing, drawing Dilbert-esque bar charts and pie charts, linking boxes and arrows. We can represent or publicise the interests of one group or another, be they rights holders, publishers, retailers or industry players, individually or in combination. And we can all hope, in the process, that simply saying these things makes them so.
Or, indeed, we can just give it a go. Many start-ups and large company initiatives appear to be games of “what if”; a random progression of combinatorial experiments, each testing out new selections of features and services on an unsuspecting public, to see what sticks. And when we find, in hindsight, that only one out of a hundred ideas had any legs, we claim the experiment to be a huge success.
Perhaps that’s a bit harsh. Individual success stories shine out as clearly as the problems they are trying to solve, such as Songkick, an online service that helps people log their musical interests and sends an alert when a particular band is playing in the local area. “Our mission is to get more people going to see live music,” says the company’s CTO, Dan Crowe.
A straightforward ambition; and when faced with the number of half-empty venues at so many live events, one that clearly needs tackling. The ticketing industry itself is in turmoil, as agents, sponsors and venues test out the business models (including the nauseatingly oxymoronic “convenience fee”), balancing their own interests with the need to get punters through the door. “There’s up to six layers of intermediary between the artists wanting to perform live, and the fans wanting to see them,” says Crowe.
For every Songkick, however, there exists a field of wannabes and also-rans, as well as the Cinderellas that are yet to find their golden slippers. In the live events space alone, for example, there’s LiveMusicStage which claims to be focusing more on the social side (like Songkick isn’t… well, yes it is), Clubbillboard which is more about clubbing, the recently deceased Plancast… the list goes on.
Don’t get me wrong, I have nothing but praise for anyone who chooses to try to build something new and fresh, and there’s more than enough cynicism (particularly in the UK) to go round. Pretty clearly however, not all such ventures stand a chance of succeeding. It is also difficult to shake the feeling that many such setups are features, not companies. “We’ve created a mechanism for jazz listeners to share their favourite moments on Facebook,” for example.
Okay, I made that one up. But given the cauldron of opportunity, the danger of looking too closely is that all you will see is diced carrots. To understand the future, it is important to go back to the source.
Going back to basics
At the heart of all music-related sales lie six fundamentals. First, that people will continue to make, and subsequently listen to, music. A raft of speculation exists on this topic, which all seems to reach the same conclusion: we do it because it satisfies some basic, Maslow-esque urge. So, one way or another, we’ll carry on doing so.
Second, despite common sense suggesting the contrary (“The natural inclination is to sell direct,” says Averdieck), intermediaries provide vital links between supply and demand. Managers, labels, distributors, retailers, broadcasters and agents all play their part in helping music reach its hungry audience.
The idea of disintermediation may offer a debating point but in reality each role serves its purpose. Some have discovered that they can perform certain functions themselves – self-managing bands, for example – but the management role doesn’t go away, it simply moves in-house.
Trying to remove a link in the chain can have disastrous consequences. “The music labels tried to cut out retailers, but that failed spectacularly, as retailers listen to their customers,” recalls Averdieck, which illustrates a third point, that vested interests will continue to skew the model.
Forget romantic notions of altruism: all parties are human, and therefore subject to the deadly sin of greed. Equally, grey areas of favour exchange, horse trading, back scratching and transactions that don’t make it onto the books will always exist, particularly where both money and creative types are involved. Indeed, in its worst excesses, the music biz is almost as bad as politics.
Despite this the fourth truism is that, contrary to the popular myth, people are genuinely prepared to pay for music. Sure, we are lazy and opportunistic, taking the lower road where it exists, looking for a bargain and casting a blind eye on our own hypocrisies. But as shown by Spotify’s impact on BitTorrent, we’re perfectly willing to take the legal route if it presents itself.
Turning to the artists, the fifth principle is that having a big break was never easy, and never will be. All the technology in the world will not make every hopeful ensemble successful overnight, however talented. Apart from the lucky few (and we can all win the lottery as well), most fledgling artists will still have to do their time playing basement bars, handing out flyers and taking down their own kit if they want to make it big.
Finally, the notion of celebrity, or brand association, or whatever you want to call it, will continue to colour the musical business model. People pay a premium for Nike, Hollister and SuperDry because they want to wear the badge, and it’s the same for artists. Fans do like music, for sure, but they also like to be associated with their heroes, and are frequently willing to bang the proverbial drum in support of their favourite acts.
Musical information age
While all the technology in the world can’t change these fundamentals, it certainly offers an opportunity to do clever things. Whatever you want to call it, the interactive web, social networking or music 2.0 all try to capture the “new” ability we have to inform, share and comment on each other’s musical preferences. So many start-ups are based on the simple premise that people like talking to each other about music.
Tools from Soundcloud to Bandcamp facilitate and simplify the online experience for bands that don’t have major label marketing muscle behind them. What they don’t do is replace the need for direct interaction on the one hand, and a concrete rationale behind it on the other – as illustrated, for example, by the patronage model employed by artists such as My Life Story’s Jake Shillingford.
The two-edged sword of technology also serves in music’s publishing, delivery and monetisation. While it could be argued that YouTube, iTunes, Spotify, Pandora et al have already cornered the market, there appears to be plenty of room for others to try their luck at “reinventing music delivery”, such as the now-defunct mFlow and its successor, Bloom.fm.
The opportunity for bands to promote themselves today appears greater than ever, which raises an interesting spectre. Traditionally, artists could get on with being creative, leaving all that grubby marketing stuff to the labels. Not only might a virtuoso guitarist believe that self-promotion was for others to do, he or she might also be rubbish at doing so. In this brave new world, is there actually room for softly spoken musical genius, or will only the loud survive?
Even once an act has seen success, it will still be faced with the familiar challenge of keeping the back catalogue monetised. Familiar because, in the old days, records (like books) were subject to a certain production run. Today’s digital models enable a production run of “one”, but only (and this is the weakness at the heart of the Long Tail model) if people know where to look.
This issue is tackled by the recently launched CueSongs, a rights-buying portal started up by the aforementioned Ed Averdieck and musical pioneer Peter Gabriel. The platform makes it easier for corporate music buyers (think films and advertising) to access a far wider pool of content – which, for anyone dreading yet another Moby track on an advert or, please God no, *that* Elbow riff as used at the Royal Wedding, is welcome news indeed.
“We’re looking to move artists higher up the value chain, connecting the growing market for licensed music tracks with commercially released repertoire,” says Averdieck. “Licensing a track for commercial use was entirely manual. Our aim is to streamline and simplify the process.” And, as a result, to also help rights owners get back catalogues into the sunlight once again. “All these rights are sitting in the basement in contractual straitjackets. We thought, if we could only give them a canvas…”
Technology has a role to play when it comes to enabling traditional music models to happen, according to understood principles and business models. The bar isn’t so much raised as moved to a different, online playing field. While the internet isn’t paved with gold for musicians and their representatives, the opportunity is at least to go where the action is.
But surely technology can be doing so much more for music? Well, yes, maybe it can.
The new frontiers
The sheer volume of information being generated around music is as staggering as it is unquantifiable – or at least, nobody has yet quantified it. By way of illustration, however, four billion videos are watched on YouTube every day, and we know that roughly a third of these are music related. Meanwhile about five million viewers a day are commenting on other social sites, comments that beget other comments.
Add to this the tracks played on Spotify or via Soundcloud; add all other related social networking activity, on other sites; add data around actual plays via connected devices; add commentary about who’s been to see whom and where; add metadata, advertising click-throughs and historical sales information; and you would start to build a very interesting picture indeed, if only you could make sense of it.
The self-effacing term is “big data”, which gives quiet mention to both the staggeringly vast pool of information, and the enormity of the challenge of making sense of it. Record labels are already throwing global sales information into the number cruncher, using the results to forecast demand and organise distribution.
Meanwhile, some start-ups are tackling the challenge of mining online sentiment from accessible social sites and generating useful results – for example, We Are Hunted and The Sound Index, both of which generate “charts” based on what people say they are actually listening to.
The opportunities are legion, not least in the live arena, thinks Songkick’s Dan Crow. “There’s a huge opportunity for a much more holistic, data-driven approach based on real evidence,” he says. Not only could this benefit performers and fans, but also owners of clubs and concert halls. With the right data at their fingertips, they would be better able to pitch to performers to come and play at their venues.
Thinking about the experience, plenty more can be done with all that data. Time for another buzz phrase, this time “augmented reality”, or the ability to add information to a current situation (as viewed, say, through a video camera on a mobile phone). Google Goggles is one indicator of the shape of things to come. Meanwhile, applications such as Soundhound and Shazam (which, in the simplest terms, can “listen” to music and tell you what the title is) are also examples of apps that combine captured information with that available from an online database.
Clever as they are, capabilities from We Are Hunted to Soundhound are only starting to scratch the surface, providing a rear-view mirror onto what has already happened. Things start to get exciting when they are seen as the basis of new decisions, or indeed experiences.
A purely speculative example involves putting together a touring plan. Yes, information about where the fan base was stronger would help the band make better decisions about where to play. But equally, if venues knew of the tour plans, might they not propose the band stop to play in their own town, en route? And, potentially, stream it live, if the facility to do so was available to them?
And new experiences? We’re already seeing bands experimenting with audience involvement in the creative act, for example, simply letting fans get on and record shows. “We’re seeing bands experimenting with more open policies around the recording of gigs,” says Songkick’s Dan Crow. “The line between recording and live is blurring, we’re going to see far more of it.” Bands like Marillion have taken this one stage further by recording a gig through the camera phones of attendees, then editing the footage together into a music video.
Enough speculation, but for a final acknowledgement: that creativity doesn’t have to stop with guitars and keyboards. The DJ/producer phenomenon that started in the era of Frankie Goes to Hollywood has evolved to such an extent that today’s chart-toppers are just as likely to be DJs (think David Guetta and Skrillex), something which the industry as a whole is struggling to get its head around. “DJs and producers were never really invited to the rock and roll party,” says Karl Nielson of Dubstep/D&B media company AEI Media. “The industry was built for white guys with guitars.”
Similarly, it becomes harder and harder to distinguish the boundaries between music and other spheres of creativity, entertainment and engagement. Examples from The Great Global Treasure Hunt to the multimedia stage adaptation of Howl’s Moving Castle by Davy and Kristin McGuire are indicative of both the shape of things to come and the kinds of revenue models that become possible, if only the industry itself chooses to pick them up and run with them.
There’s all to play for
Despite all the possibilities that technology now affords, we are perhaps not all that far from the fundamental principle of lobbing a shilling in a bucket in gratitude for being entertained. “If you can please the artists and the fans, the rest will follow,” remarks Songkick’s Crow. And all that “the rest” implies – managers, labels, broadcasters, marketers and everyone else involved in the production, delivery and performance process.
Perhaps the trick is to recognise that the traditional music industry was only ever a chain of intermediaries that became too powerful in one ecosystem, itself currently being dismantled to make way for another. Nobody has a monopoly on the future, but rest assured that major players will emerge, and in twenty years’ time will be the established music business.
While on the surface it may appear that incumbents are being hoofed out of the nest by a new generation of cuckoos, in truth, the last labels standing have as much chance as Songkick and the rest, if they choose to grasp the nettle. “Music should be challenging, it should be entrepreneurial, it should be pioneering,” says Karl Nielson. “Record companies should be the most exciting places to work ever! Musicians are already going out on a limb, we should be as brave as the artists we are representing.”
Technology offers a wealth of possibilities, but more important is how it feeds the human desire to make creative use of such tools and perform the results to the largest possible audience. Nor will it change the business imperative to “become number one in the space”, or the tendency of a few bad apples to upset the whole barrel. These are the ingredients we are given.
The winners in the music space will be those who crack the code, either through careful assessment or by stumbling upon the magic key, and then (did anyone mention MySpace?) not killing the golden goose.
In the meantime, just because a certain business model has existed for a few decades, that doesn’t make it the best way of doing things – and it should rightly be dispensed with if it gets in the way. Music might not be able to save any mortal souls but right now, it certainly offers a host of opportunities to any party with the gumption to get in the driving seat. In the words of Ed Averdieck: “Someone else was going to crack it – why not us?” Why not, indeed.
Race report: the Paris Marathon
2012-04-16
It must have been the beetroot detox smoothie. After 6 months of what I believed was less than adequate preparation, including a pulled hip muscle in September and a fortnight of man-flu in February, and carrying a stone more than last time, I took on the 26 miles of the Paris Marathon and emerged relatively unscathed.
The race itself – suffice to say, if you’re going to do a marathon as a one-off, it’s a good call to do it in a relatively flat, major city with lots of landmarks. Starting on the Champs Elysees, heading past Concorde, the Louvre, Place de la Bastille, the Seine tunnels, le Tour Eiffel, l’Arc de Triomphe, plenty of sights kept the interest levels up.
Only the two parks at either end of the course dragged a little. And even these had their advantages, particularly for the weaker of bladder. Indeed, I’ve never seen so many people dart in and out of the undergrowth. So, yes, thoroughly recommended. But enough about that.
Meanwhile, I was keeping that certain sense of dread at bay with a concerted effort to keep performance expectations very low. My last (and only previous) marathon in Brighton didn’t go so well. I started with best intentions and very good company, and I confess, a feeling that I could beat the odds. I ran too fast for 15 miles before realising the mistake as, over the next 11, I experienced what it might feel like to have iron nails slowly inserted into my thighs.
So this time I was determined to keep things slow, consistently sticking to between 10 and 10:30 minute miles. All the way round. As a result I avoided hitting the wall, and felt no real pain until the final half mile; I even came in a bit faster than last time – 4:42, rather than 4:45. Without feeling in the slightest bit smug – I’d been in the same position myself – I continued at the same loping pace, passing innumerable people in the final 3 miles.
Highlights: just doing it, heading down the Champs and other long, straight avenues with thousands of like-minded people; seeing my lovely family at frequent vantage points (hurrah for the Metro); the tunnels, difficult to explain but it felt a bit sci-fi; the Eiffel Tower; the buckets and sponges; the foolish but “heck, why not” snifter of wine 2 miles before the end; the innumerable brass bands, drum troupes and rock groups; the massages.
Less good: while I thoroughly appreciated the orange quarters and banana sections, the resulting mash of skins – and the chaos of trying to pick them up – resulted in at least one bad fall that I saw.
Overall, if you’re going to be mad enough to run an entire marathon, then Paris is as good a place to do it as any. That, coupled with the simplicity (and cheapness!) of a €70 entry fee rather than having to enter a lottery or guarantee £1,000-plus of charity fundraising, makes it an easy choice. If it was my last (I have a funny feeling it won’t be) then I will have finished on a high.
P.S. If you did have any spare change, I was raising money for a music therapy charity and a hospice so feel free!
The Kernel: London Book Fair
2012-04-18
On the table lie a banana, a note pad and a business card. Nobody is seated, which makes it an exception; most other tables at the Penguin Books stand are occupied by earnest young things and mature, sober types, chest-deep in conversations and scribbled notes. It’s much the same at Random House and Harper Collins, each but a carpeted corridor away, back to back with a host of smaller publishers of every kind of book imaginable.
Each stand is lined, sometimes top to bottom, with books. Books are everywhere. Books, books, books, in all shape and sizes, on every topic imaginable. The collective offerings from publishers and resellers, industry bodies and government agencies stretch across both halls of Earl’s Court. Come to the London Book Fair and at first glance, you would think the information revolution never happened.
“It’s just a façade,” one person (who’s been coming to the show for 18 years) tells me. “Everybody knows that the world is moving on.” E-books may be the future yet real business is being done at these tables, slots allocated according to the carefully planned schedules of publishers, retailers, rights agents and other intermediaries. One seller told me that diaries fill up months in advance, for what is a highly significant element of the publishing business calendar.
So, where precisely is the future of publishing to be found? While a scan of the floor plan reveals a Digital Publishing section, it feels more like the naughty corner than the brave new world. Activity is decidedly muted in the Digital Lounge – indeed, an empty booth offers the perfect place to write this piece, far away from all the hubbub.
The twenty or so vendors and device manufacturers occupying the digital enclave project the feel of a technology-based event. Banners proffer world-changing mantras, while heads of marketing and pre-sales engineers engage with passers by. No real business will be done here; success will be measured in badges scanned, conversations had and leads generated.
It’s a conundrum. If anyone had told the publishing industry that its models were dead, it clearly forgot to listen. Or perhaps, it is so deeply ensconced in wheeling and dealing that the geeks can’t get a word in edgeways. A few interactions (notably with a particularly disdainful head of commissioning, though I did nick a plastic bag from the stand shortly beforehand) left me feeling here was a closed shop, insiders only need apply.
Or perhaps, given that time is money in any business, rapidly compressing margins have left publishing industry decision makers with no time but to do more of what they have always done? “No time to say hello, goodbye! I’m late, I’m late, I’m late!” exclaim the notepad-carrying white rabbits as they hurry between half-hour appointments.
A small part of me – the techno-evangelist which I generally try to keep in check – wants to run through the hall screaming at everyone, like the character in the film who has had advance warning of the impending disaster, the epidemic or the tidal wave. But a bigger part whispers in my ear that the situation deserves quiet reflection. “Don’t do anything hasty,” it says, Ent-like. My head is buzzing with literary references.
So, where’s the truth? All is not necessarily well. A section of remaindered book companies bears witness to the shotgun models still employed in modern publishing: their tables yawning under the weight of celebrity bios, pasta cookbooks and obscure histories. “That’s not a good sign,” says a lady at the desk of The New York Review of Books stand. “I never expected there to be so many.”
I am told, by others, how the show is a crush of egos, carried forward by its own, historical inertia. How the publishing industry could learn so much from the music industry, which (despite its own inadequacies) is so clearly ahead of the game. Ironic given how much gloom (LINK: http://www.kernelmag.com/features/report/1740/the-day-the-music-died/) pervades the latter.
But meanwhile, there’s something strangely compelling about the sheer volume of books carried by the show. Some are in nondescript or tacky covers, but others have been produced with careful attention to packaging, to tactility and tangibility. Indeed, whole companies are dedicated exclusively to the cover rather than the content. “What’s the future of the publishing industry? It’s us!” they tell me.
As I head back to collect my coat, I stumble across a ‘tweetup’ – an eyeball opportunity for the Twitterati – at the Illustrators Bar and find myself plunged back into the digitally enhanced world. Some familiar faces, such as the founders of audiobook streaming service Bardowl, are present as well as both authors and publishers. I’m told that the digital track of the London Book Fair has certainly progressed, moving from vision a couple of years ago, to discussions around workable strategies.
Whatever the future holds, whatever new possibilities offered by e-books and apps, streaming tools and social networks, certain principles hold true. Sure, the industry will undoubtedly change, alongside habits of both writing and reading. Sure, new opportunities for (that horrible word) monetisation exist, even as some older models wane.
But it is difficult not to maintain a sense of optimism about the publishing industry. Even the most cynical seemed positive about the future, for example in terms of the increasing quality of writing, the broadening opportunities for distribution, or the growing market for ‘special editions’, with digital capabilities augmenting, not replacing what has gone before.
A final view over the stands, stretching away from the upper-floor cloakroom, allays any fears about all that tangible goodness becoming no more than pixels on a screen. For all its precious egos and old habits, its bluster and self-importance, the future of publishing is bright. And it will involve books. Lots of books.
Cleaning lady’s response to Werner Herzog
2012-04-24
The universe. Profound in its enormity. The detailed atomic structure of a particle in the depths of space, simultaneously illustrating both immeasurable breadth and infinite detail. And likewise we are specks, motes, purposeless fragments floating in our own, great beyond like those very flakes of skin and soil, buffeted by Brownian motion and attracted by still-unexplained laws of gravity and static onto your shelves and the inner surfaces of your electronic devices.
And yet even as we, these miniscule scattered, inconsequential shavings, mere powder filed from that universal grand design, even as we somehow avoid the inevitably grim consequences of our improbable existence, marking each cycle of our tiny planet round the sun as a notch on some celestial stick of achievement, we still strive for explanations, from fulfilled prophecies to punctuated equilibria. We share this common urge for meaning and yet we have different priorities, each of us loosely woven filaments in the threadbare carpet we call civilisation.
Speaking of carpet, you might want to try vacuuming yourself sometime. You might enjoy it.
Another one bites the dust - Roadrunner Records
2012-04-26
And once again: a bijou record label, developed from the ground up through hard work and dedication, sees the next logical step as accepting the advances of a larger, more powerful and seemingly equally dedicated organisation. Perhaps indeed, the initial intentions are sound, and new artists join the fold in the knowledge they're getting the best of both worlds.
But then comes the reality - that despite an upturn in fortunes, business is tougher than expected, there's an economic downturn don't you know. The finance people juggle the figures, which senior management are forced to acknowledge mean tough decisions. Which are taken, to the detriment of both staff and artists large and small.
Farewell, Roadrunner Records.
Addendum - here's the final section of Roadrunner CEO Cees Wessels' original statement:
"...We can take Roadrunner to the next level by focusing our resources on marketing our existing line-up of acclaimed artists as well as discovering the stars of tomorrow."
May 2012
This section contains posts from May 2012.
Why aren't you passionate about open standards?
2012-05-21
A central government consultation on ‘Open Standards’ is underway right now, as instigated by Francis Maude MP and the Cabinet Office. The goal, as stated on the Web site, is to support ‘sharing information across government boundaries and to deliver a common platform and systems that more easily interconnect.’
Obscured by the animated debate taking place both online and offline lie a number of quite serious issues that could affect both government departments and citizens alike. So, what’s it all about? The rationale for the consultation is based on UK.gov’s current mantra of ‘better for less’, which means putting less money in the hands of suppliers, whilst improving the delivery of public services.
From a tax payer’s perspective that sounds like a goal worth striving for, so just how can open standards save money? Taking the two words in reverse order, let’s start with the easy one – ‘standards’. The history of computing is a tale of getting things to connect together, pass information or share a common data store. It’s a fair assumption that interoperable systems and services run more efficiently, and therefore cost less for the same result, than their isolated or difficult to integrate equivalents.
Of course, not all prospective standards make the grade. X.400, that super-resilient messaging standard, has shuffled off to the elephant graveyard, as has Open Systems Interconnection. Some get superseded; some (like punched card layouts) are now lost in the sands of Internet time. And others become de facto even though they’re not the best – for example, rather than adopting X.400 we slog on with the less reliable SMTP. And it seems to be doing alright.
So, yes, standards are A.Good.Thing. A bigger challenge appears to be around that delightfully vague and multi-faceted word, ‘open’. Ignoring the publicly broadcast debate for a moment, the battle lines seem to be drawn around a single core meaning of the word – that the cost of storing, transmitting or accessing public information in any given format should be zero. Zilch. Nada.
Which sounds reasonable. If you’re wanting to file your tax return, you shouldn’t have to pay for the privilege. Equally, if you are a researcher wanting to access census data, a healthcare worker wanting to access medical guidance, or a commuter checking bus times. The problem isn’t so much the actual information – the mapping co-ordinates, meeting minutes or contact lists – but more the formats in which such information is stored.
Commercial organisations are currently making quite a success out of selling (aka licensing) certain information formats, and/or ensuring that their own tools are the de facto access mechanism. There’s nothing wrong with this in principle - a TV manufacturer for example will have a complex set of arrangements with its suppliers, licensing a video compression standard here, buying in chipsets and other components there. They’ll then build the TV and flog that, and expect to make a reasonable sum on the deal.
But what about public information? Governments are moving service delivery online because of all the supposed benefits – cheaper, less paperwork and so on. Indeed, the recently launched ‘Digital by Default’ initiative is focused on exactly that. But is it reasonable to expect departments or citizens to pay a premium to certain companies, simply to do the things they used to be able to do via paper? Similarly, if a government can adopt (or otherwise come up with) a standard for document storage that is free-to-use, should they be spending tax payers’ money on licensing formats from computer companies?
The answer, of course, is ‘it depends’ – but while every case needs to be considered on its merits, a number of gating factors exist. The first concerns just how many people are impacted, and by how much – for example, filing tax returns affects every adult in the country. The second concerns the benefits which may be achieved by using a certain pot of data, or a certain format, set against the costs to the nation of doing so.
Third, we have the question of whether any alternatives exist – there is no point (particularly with technology) sticking with a certain way of doing things simply because that’s the way it’s done at the moment. This does, however, have to be balanced against the costs of making a switch. Finally there’s the question of how much the absence of open standards gets in the way of other activities, such as starting a company, delivering new products and services, providing jobs or otherwise influencing the competitive landscape.
That all sounds pretty reasonable, and well worth discussion. However, the only topic that appears to be debated is the last one, and this from the perspective of vested interests. Incumbent vendors are acting like protectionist oligarchs, complaining how they will be unable to innovate if their own approaches (which involve paying them money to license formats) are not adopted. No doubt this argument is partially true, but their real panic comes from the potential of losing a lucrative revenue stream. From a hard-nosed taxpayer’s perspective the response is, clearly: tough. Or to put it more politely – as a nation, let’s pay for things that add genuine value, not those that don’t.
Meanwhile we have ‘open’ lobbyists who seem to think that everything commercial companies say is immediately and irretrievably corrupted by the forces of capitalism. That may also be philosophically unassailable, but the resulting polarisation has led to much of today’s confusion – particularly when more time appears to be spent on the suitability of those involved, or on the meaning (all parties are guilty of this) of terms such as “free” and “reasonable” when it comes to licensing models, than on the more important questions that would ensure such standardisation activities lead to UK-wide value.
Perhaps the reason such narrowly focused discussions have come to the fore is that, quite simply, nobody else is doing any talking. Given that the consultation has been extended until June 6th, let’s finish with a call to action: for public sector organisations and their broader suppliers, for strategists and front-line staff, for citizens across the board to add their voices to the debate. Only by understanding the broadest range of views will the Cabinet Office have the information it needs to make a decision. And saying nothing is tantamount to accepting the outcome, whichever party ends up benefiting the most.
[Originally published on publictechnology.net]
June 2012
This section contains posts from June 2012.
I, Technology: Surface and Voice – a marriage made in geek heaven?
2012-06-22
Microsoft’s launch of its Surface Tablet was bound to whip up a storm of controversy. However good the product was going to be, the cries of “rip-off!” from the loyal ranks of Apple users were as inevitable as the game of, “No, they didn’t think of it first, it was…” backwards leapfrog. My take: the clipboard thought of it first. Or Moses. But anyway.
Despite this, first reports give the impression that the company really has pulled a rabbit out of the hat. While the hardware spec certainly does give Microsoft’s partners a run for their money, the tale contains a couple of clever twists – not least, that the operating system is being standardised across both phones and computers. Barring tweaks and chip-specific limitations, that’s one heck of an app store.
I’m grinning as I write the next bit. The one (more?) thing, the as-yet unmentioned consequence is that full-fat versions of voice recognition tools from the likes of Nuance will, from November, be available on a robust enough tablet device to make them sing. For the record, Dragon Dictate remains the only real contender in this space, simply because other products don’t work as well.
“Sure,” say the fans, “We have Siri.” Mark my words well – Siri is a gimmick, as is the version recently announced by Samsung for its S III. A good gimmick, but a gimmick nonetheless, the fat finger equivalent of a voice-enabled input device. Which requires an Internet connection to work – so there go applications for planes, trains and cellar bars.
Apple no doubt knows this – it also knows that future versions will have far broader capabilities and applications, so it doesn’t mind too much. For all the glitz with which it was launched, Siri is a straw man, thrown over the castle walls to get the peasants used to the idea before the gates open and the real deal is marched out.
We don’t have to wait more than a few months, however. For all its training requirements, fully-fledged voice recognition has been raring to go for years. With one limitation – nobody, outside of films, has ever linked the capability to an appropriately formed device. Enter Surface, the Intel-based Windows tablet that can. And on stage right, voice recognition, its perfect marriage partner.
Now, let’s not let things run away with themselves. Nobody (least of all me) is suggesting that voice recognition will cause us to dispense with all other methods of human-computer interaction. Also, both Apple and Microsoft will bring their own recognition tools to market – we’ll see how those get on. And finally, nobody should be at all surprised when new form factors (such as those made possible by projects such as Google Glass) move the tablet back down in the rankings. The time has come, however, for this particular input option to finally deliver on its potential.
[Originally published on ZDNet]
July 2012
This section contains posts from July 2012.
The Kernel: Fresh fields and pastures new?
2012-07-16
Britain is a great place to do business – or so we are told. But Government and media attention tends to focus on London – perhaps not surprisingly, given that, despite the relatively recent moves in the broadcasting industry to Cardiff, Salford and so on, a great deal of the Government and media is in London as well.
The regions have – or did have, as in the case of One North East – their own development agencies, and larger provincial cities such as Manchester and Leeds have made a reasonable stab at drawing non-manufacturing businesses to their metropolitan bosoms. But more rural areas have had only limited inward investment, and even less publicity. So just how realistic is it to set up a high-tech business in the hinterlands?
An example of a company bucking the trend is Cotswold-based mobile gaming company, Neon Play. Business success is never a given: clearly there is more to starting a company than setting up and pressing a big, fat “go” button, whatever the location. All the same, it seems we can learn something by teasing out the more geographical threads from the Neon Play story.
The first thing to note is that the founders of Neon Play, Oli Christie and Mark Allen, were not in the area by accident. The disturbingly languid Cotswolds plays host to a sizeable creative and marketing community, much of which has been driven out of EHS Brann, now EHS 4D, a digital agency established 25 years ago that helped Tesco launch its Clubcard.
Cheltenham is another centre of creativity, illustrated by its recent design festival and the fact it is home to SuperGroup, best known for its Superdry brand. Other parts of the country have their own specialisms, built around similar success stories: for example, Bristol has animation (Aardman); Newcastle has software (Sage); Cambridge has silicon (ARM).
As concerns evolve and people leave their respective motherships, setting up as freelancers and forming alliances, an ecosystem of smaller firms can begin to grow organically, operating locally and often under the national radar. Like breweries on the Trent, Sheffield Steel or Kentish market gardens, it was ever thus.
An ecosystem is one thing, the logical extension of which took people into cities in the first place. But simply following the crowds to the epicentre creates its own challenges – not least, it can be the enemy of innovation. “Working independently from the big players gives us freedom and autonomy,” says Neon Play’s Oli Christie. “It enables us to come up with genuinely new ideas and drive the destination of our games.”
Deciding not to go with the pack comes with concomitant trade-offs, however, not least of which is attracting skilled staff. Ageism aside, it’s a fair bet that the designers, developers and other creatives at the heart of a gaming company will tend to come from a younger demographic – which won’t necessarily view the nightlife of a historic market town with untrammelled enthusiasm.
It’s not just the social challenges: larger centres come with a purpose-built community and work ethic. It’s impossible to walk through Soho or Shoreditch, for example, without getting a whiff of industriousness. “We knew that being in Cirencester wouldn’t be easy. Bristol would have been easier,” says Christie. “The culture is massively important. We had to think hard about making it stand out. We wanted to make it small, fun and passionate, a genuinely nice place to work.”
It’s not window dressing when the company offers “ten reasons to work with us” on its website, a list that covers everything from a quarterly bonus and extra days off to an inspiring office environment, no doubt taking a few leaves out of Google’s handbook. There’s beer on Fridays and “guaranteed posh bog roll”. (These things matter in the country.)
Another challenge is building the right relationships to both win business and maximise publicity – both important factors in the lottery-like mobile apps market, in which only about 1 per cent of apps make any decent money for their developers. To adapt the phrase incorrectly ascribed to Willie Sutton, people do business in London because that’s where the money is.
While other developers have moved lock, stock and barrel into the smoke, Neon Play has settled on a compromise, involving frequent commutes for its more publicly-facing executives. “The new intermediary is the App Store,” says Oli. “It’s difficult to stand out against half a million other apps – we’re constantly trying to stand out on Twitter, Facebook, YouTube, you name it.”
Indeed, while it may not be true for all businesses, it is difficult to imagine how a social gaming company could be successful today without an equally social online presence. The upside is, of course, that the provinces are not quite as disconnected as they used to be either, even at developer level. Employees can benefit from lower costs of living and genuinely reduced levels of stress, at the same time as interacting – at least virtually – with their city-bound peers.
Ultimately, the country idyll isn’t going to be suitable for everyone. But everyone has a choice, thinks Oli: “You have to decide where you want to work and start from there.” Neon Play and companies like it are more than bucking the trend: they offer stoneground proof that tech startups do not have to feel restricted to a few higher-rent and lower-air-quality patches of our green and pleasant land.
[First published on The Kernel]
RIP Claus Egge
2012-07-29
Claus Egge was one of the first analysts I met, and one of the first I admired. Calm, analytical (it helps), Claus never needed to be highly vocal to impress. When he spoke, he spoke with quiet authority. While I never knew him outside the analyst circuit, I often had the pleasure of his company and, from time to time, the benefit of his advice. RIP, Claus. You will be missed by many.
August 2012
This section contains posts from August 2012.
Sleeping rough, but not in that way
2012-08-24
In about 6 weeks’ time I will be sleeping rough, under the banner of Byte Night. I won’t really be sleeping rough, of course. I will take my old, but functional sleeping bag, carry mat and reasonably healthy self to London for the day, bed down in the evening, have an awful night’s sleep (but think of the camaraderie), then head back the next day for a decent shower and that inner warm feeling that I’ve done something useful.
Call me old fashioned, but that’s not sleeping rough. Sleeping rough means the feeling of nowhere else to go, no money to pay for the fares, no hot shower to look forward to. Sleeping rough could mean sofa surfing on occasion, or being offered a place in a hostel but lacking the wherewithal to respond. As a genuine rough sleeper, chances are I would have psychological challenges, demons in my own machine. I wouldn’t have asked for these; they may be in my DNA. Or, perhaps, I’ve faced, but not faced down, some traumatic experience, such as domestic violence, or loss of livelihood or family. I might well have ended up with a drugs problem: that wouldn’t have been in my life plan either.
So, if I'm not really sleeping rough, why am I doing it? First, to help raise awareness of a growing problem. The credit crunch has hit people from all walks of life – since 2010, homelessness in the UK has continued to rise. According to the charity Crisis, in Autumn 2011 the number of rough sleepers in the UK averaged over two thousand every night – an increase of just over a quarter on the year before. A sharp increase in a tip-of-the-iceberg figure which hides the true scale of the challenge.
Sleeping rough isn’t a simple decision, it’s a bottom point on a complex, downward-spiralling journey. A number of organisations are working independently and together to minimise the causes of homelessness, maximise the options for people who want to get out of the rut, and provide any palliative help they can for people sleeping on the street. These organisations need money and resources to do their work, and Byte Night is at its core simply a call for funds for Action for Children, which targets young homeless people - all donations gratefully received.
Finally, I have a lot to learn. Quite recently I became involved in a community interest organisation that is being spun out of homeless charity St Mungo’s. We don’t have a name yet (unless you count ‘Social Inclusion Enterprises’) – but the remit is simple – to employ low-cost technologies to respond to the needs of people at the periphery of society, be they homeless, long-term unemployed or otherwise disadvantaged for whatever reason. Services such as Voicemail4All for example, which can offer a communications lifeline, helping its users find a job or reconnect with family members.
So, no, I will not be sleeping rough on October 5, not in its truest sense. But nor should anyone else, not in this day and age. As long as ‘modern society’ feels it is acceptable to leave individuals curled up on street corners, we still need events like Byte Night to ensure that they, and their needs, are not forgotten.
September 2012
This section contains posts from September 2012.
Update to Separated Out
2012-09-14
Has it really been ten years? The first edition of Marillion/Separated Out (The Complete History) was released in the autumn of 2002. As I had absolutely no idea what I was doing when I started, I wrote the story as I uncovered it, gleaned from interviews, references and anecdotes.
What emerged was a tale of band members making the music they wanted to hear, as a result developing a unique relationship with its audience. Since Marillion was first formed, the band’s music has lifted spirits, offered support through personal difficulties and become otherwise woven into the experiences of fans.
It is appropriate that Separated Out ended with the band riding the wave of affection of the first convention weekend. While the event was highly successful, Pontins offered an unlikely foundation stone for future success. The book's conclusion was equally non-committal: “Extrapolate a few years more and all the dreams will come true: the hit single, the radio play, the household name across the globe. Or will they?”
We now know the answer. Not only has the band seen an unprecedented resurgence in its fortunes over the past decade, it has also produced some of its best music – from ‘Marbles’, through ‘Somewhere Else’ and 'Happiness is the Road' to the just-released and already acclaimed 'Sounds That Can’t Be Made'. So much water has passed under the bridge, it seems hard to believe that there was any uncertainty.
Quite clearly, an update to the book is long overdue – indeed, nearly two years have passed since Lucy asked when this might be available, finally prompting me into action. When I reviewed the original content, suffice to say I wasn’t particularly happy with it – some sections were notably clunky. So I set about working through the text, tweaking, culling, nipping and tucking as I went - this process is now complete.
The next step is to get the story up to date. I will be looking for fan feedback on their own feelings about the band and its music over the past ten years - the highs, the lows, the touching moments… Please do email me at jonc (at) separatedout (dot) com or leave a comment at www.joncollins.net if you have any ideas or thoughts.
And meanwhile, in the words of the profoundly wise Karin Breiter, I shall “shut up and write.” Thanks for reading, and I hope you will enjoy the new edition.
My Desert Island Dozen
2012-09-30
Every now and then I read a book which makes me feel simple, profound gratitude to the author. For having the gift of being articulate in the first place, but then, spending the time to pull together a work which could give me so much pleasure. As I'm just finishing such a book, I thought I would list the titles that have pushed the 'profound gratitude' button in the past, all of which would be great companions in the unlikely event that I should be washed up on a desert island with a crate of books! Here goes...
The Player of Games - Iain M Banks
While not all of Iain Banks's books are as engaging as his best writing, they share a remarkable breadth and scope. Despite the aliens and artificial intelligences, they are profoundly human tales - of strength and weakness, power and intuitive wisdom. And when he does manage to crack the code, as in The Player of Games, the results are stunning.
Jude The Obscure - Thomas Hardy
My Thomas Hardy phase was an eye-opener, as I discovered not only a fascinating insight into still-recent rural life, but also that I could be interested in 'the classics' - indeed, what earned them this term. There is not much to be happy about in this book (though it is almost jolly compared to Tess of the D'Urbervilles), but a moving read nonetheless.
The Baroque Cycle - Neal Stephenson
Simply, wow. This many-thousand-page tale about the origins of currency and the dawn of science, written across three hefty volumes and with its multiple, intertwined plots, left me with the mother of all bittersweet feelings when I finally turned its last page. I very much doubt that Neal Stephenson is for everyone - he writes with a level of detail that could put many people off. I'm not one of them, clearly... more a welcome passenger on a continent-spanning journey with a master narrator. Start with the single-volume, wartime tale Cryptonomicon perhaps, and if that floats your boat, a world of wonder awaits.
The Seven Basic Plots: Why We Tell Stories - Christopher Booker
Mr Booker might have some dodgy views about climate change but from my standpoint he's nailed this one - a book, researched and written over thirty years, about the underlying archetypes and plotlines within both traditional and modern tales. His conclusions offer a profound insight into the nature of consciousness, following as comprehensive a treatment of the novel as you are ever likely to get. It may not be the only explanation and is open to dispute, but it is as good an answer as I have ever read to the question - why do we tell stories?
Coming from quite a sheltered, home counties upbringing, this book deeply affected me, wrenching my eyes open to the realities faced by those who have grown up in parts of the world where politics, prejudice and power plays become more important than the basic rights of individuals. I can't remember if I saw the film 'Cry Freedom' before the book or the other way round, but together they forced me to think about just how lucky I was.
The Belgariad - David Eddings
I probably should have been in lectures when I devoured a near-constant diet of fantasy and science fiction novels. Each volume of The Belgariad (and The Malloreon after it) was eagerly awaited, as several months would pass between volumes. And meanwhile, Stephen Donaldson's Chronicles of Thomas Covenant filled any time I had left - it's a wonder I got a degree at all. I re-read the entire set a couple of years ago and they had lost none of their easy-going sparkle.
The Shockwave Rider - John Brunner
Don't listen to what anyone else tells you. John Brunner invented the Internet, in all of its cyber-criminal, virus-ridden (Brunner called them phages) glory. And he did so in the mid-seventies. This short novel definitely falls into the category of 'very important books which must be read'. A bit dated now, but still up there.
I hated history at school. Actually that's not true - I really enjoyed the first two years of study, when we were told stories of battles and kings. Then the teacher changed and we were fed a diet of desiccated facts and figures. This book documents the early history of Britain, written by people who were there. I picked up a copy when I started to get into the Dark Ages, and it remains one of the best historical sources.
The Road Less Travelled - M. Scott Peck
"You must read this book!" is a reasonably common request, usually when someone has had a life-changing experience reading a self-help book which really seems to talk to them. I have two theories about such books - first, they are written by authors post-crisis, and second that they work when they fit with the personalities of those reading them. With Tuesdays with Morrie a close second, The Road Less Travelled is the one that worked for me, back in my early thirties when I thought I'd have my own mid-life crisis early. Nice to get it out of the way.
The Diving Bell and the Butterfly - Jean-Dominique Bauby
Also to be filed under "gratitude for life" is this book. How a man, almost completely paralysed by a stroke, could contemplate 'writing' a book using his only remaining physical ability - eye movement - beggars belief. A deep insight into what it means to be human, a quiet struggle against serenely uncaring adversity. Were everything but my mind taken away, I hope I would respond with as much dignity.
The Girl in a Swing - Richard Adams
If I had to pick one modern novel, it would be this one... a tragic tale about an Englishman who finds himself completely out of his depth in a relationship with a troubled heroine. Just the right mix of literary and real, and up there with the best that Faulks and McEwan could conjure. I think it's a masterpiece; not everyone would agree, but that's what makes books so, well, personal!
Fugitive Pieces - Anne Michaels
This is a profoundly powerful book, written about Jewish immigrants in Canada who are still coming to terms with their recent past. Beautifully written, deeply emotional yet at the same time gentle, quiet, measured and all the more moving because of it.
In a similar vein but a very different style, Pat Conroy's book traces the story of a man coming to terms with losing his wife. The story exposes the lives of the people around him, each layer more traumatic than the last, right up to a no-punch-pulling finale. Heavy stuff but no less brilliant because of it.
October 2012
This section contains posts from October 2012.
A Passage To India
2012-10-05
The weekend before I was due to fly to Mumbai, I was starting to feel quite excited. A whole new continent and a whole new experience - having heard various things (positive and negative) about the place, I was keen to know in which camp I would find myself. And of course, travelling to any new place is always going to be a thrill.
On the Sunday night, twenty-four hours before I was due to fly, I went through some last-minute checks. "I know," I thought, "I'll take a look at the Indian Embassy web site to see if there's anything I need to take." Sure enough, there was something: a visa. I checked the Embassy opening times - visas were issued 11am to 2pm - not a problem, I decided, I could get the later train and arrive about ten minutes to eleven. Plenty of time.
The next day, well-rested, I caught the expected train and headed into central London, arriving at The Strand at precisely 10.50. The Indian Embassy was at one end of a D-shaped block, about 200 yards in diameter. As I approached, I started to focus on, then tried to suppress, an emerging feeling of panic as I realised the queue stretching all the way round the 'D' was probably the queue for visas.
For a few minutes I stood near the head of the queue in a state somewhat resembling shock. An exhaled string of semi-expletives drew me to the attention of a tall Rastafarian who was leaning against a litter bin next to me. "What's up mate?" he asked, and I explained my predicament. "Not a problem," he said. "See that bloke there, near the head of the queue? That's my mate, he's got all our passports, he can take yours as well if you like." A few seconds was all it took to realise this was the only option I had, so I handed over my passport nervously (he was a total stranger, after all) but with equal relief.
Tens of minutes passed; progress was glacial. After a while, the people at the head of the queue started to disperse. I wondered whether they had just got fed up... but my new companion explained that they were just getting tokens. "They're closing for lunch," he said. "Only those with tokens can come back after." My gorge rose, then quickly subsided as I saw his friend walking towards us, beaming and clutching a handful of tokens. Giving one to me, the pair wished me luck and went on their way.
Suppressing a desire to raise my arms to the heavens and shout "Alleluia!" I decided to check I was absolutely prepared - I'd made one mistake and I didn't want to mess up again. I headed across the road to a branch of Prontaprint, which had wifi (and, as it happened, print services), and logged back onto the Embassy web site. Ah. "Don't forget to bring a letter of introduction," it said. No problem, I thought, printing off the invitation email I had been sent. I was all set.
An hour later I queued, clutching my token like a schoolboy with a shilling, and before long I was granted entry to the dimly lit, teak-lined room that was the visa issue bureau. It looked like a cross between an old bank branch and a visiting room at a prison. No matter, I was in - I sat on the hard bench and waited my turn. Suddenly, so it seemed, my number was called and I stepped forward to the counter, all forms and paperwork complete.
Or so I thought. "I'd like a business visa, please," I said, handing everything over. The man on the other side of the plexiglass had clearly been to international clerical bureaucracy school, that's the only place I can imagine to learn the nuanced slowness that is the same the world over. Eventually, he paused. "Where is your letter of introduction?" he asked, so I pointed him to the email. "But this isn't..." he spluttered. I was in trouble. I could feel the muscles in his back starting to knot in unison with mine. "You can't... there is no way..." I was completely helpless. "What am I to do?" I pleaded, my face a picture of desperation.
He exhaled deeply and his shoulders slumped, then he rose again in his chair ("Here it comes," I thought...). "I am very sorry, I have no choice," he said, authoritatively now. "I'm going to have to... (exhale) ...I'm going to have to give you a tourist visa." Bang BANG went his stamp in my passport, which he slid back towards me. "Thank you. Good bye. NEXT!"
I nearly ran out of the place. The sun may have been shining, but if it wasn't I probably would have displayed an unearthly glow. By now it was about 3pm, ample time to head to Heathrow, check in and make my flight. What about Mumbai? Suffice to say, I loved it - the people, the sounds, the smells. Brilliant.
A few months later, I happened to be talking to someone else who was heading to India. "Not going for a few months yet," they said. "Off to the doctor's next week, to get my jabs."
"Jabs?" I said. "What jabs?"
November 2012
This section contains posts from November 2012.
Feedback to a budget hotel owner
2012-11-07
Dear Sir,
We all love a bargain, and it can be quite galling to pay the kinds of rates some hotels see as normal for very basic facilities, so I applaud your pricing strategy. I also very much appreciated the warm welcome on arrival and being asked about my day, though perhaps then asking for an in-depth breakdown of my weekend was a little excessive.
You asked for feedback on how my stay could be improved, so I thought it was only right to offer this, in the spirit of goodwill you already displayed. Here are a few suggestions for future guests.
1. If you are advertising an en-suite double room, the term tends to be used when the bathroom has a toilet as well as washing facilities. Indeed, it more normally applies for hotel rooms which have a separate shower and wash room, rather than a shower cubicle and sink squashed in the corner of the bedroom.
2. Indeed, on the subject of ‘double room’, this normally refers to a room large enough to accommodate two people and not a small room with a three-quarter sized bed squeezed up against the wall leaving little space to access the bathroom facilities.
3. Which reminds me – perhaps you might consider more than one toilet between 14 rooms? While it was nice to get to know the other guests, some were a little agitated and not so inclined to conversation.
4. Many hotel residents, including myself, like to be able to leave a window ajar at night. Sadly this was not possible, for fear of dislodging the toilet roll stuffed in the cracks whose purpose, I can only assume, was to prevent drafts. You might wish to think about the logistics around that one.
5. I need to mention the pillows. I understand you may be responding to customer demand by providing harder pillows, though perhaps you may have tended to one end of the scale, if not beyond. Less attractive to guests might be their infusion with cigarette smoke, though whether this occurred recently or before the smoking ban was difficult to tell.
6. Thank you for the clean sheets and (eventually) hot water, which are ultimately the most important things of all. Thank you also for the sachets of shampoo and the bar of soap. I do wonder whether the latter can really be categorised as ‘gentle’, however, given that it was in fact a hard block of nondescript substance which refused to lather.
7. For reasons of simple flood avoidance, it is useful if shower doors can actually be shut. I did manage to dislodge one of them after a struggle, but the flakes of material this generated left me loath to attempt moving the other. This was a particular issue given the propensity of the shower head to flick off its mounting.
8. It is always a bonus to have coffee and tea facilities. However, given the small size of the room, I would question the merits of having a full-sized fridge, which limited access to the wardrobe. It might also help to have a kettle that can be filled at the sink without resorting to filling it using the tooth mug.
9. On the staircases, I understand that hanging pictures on the walls is intended to add to the ambiance, but their positive impact can be diminished if the pictures are left to slide to the bottom of the frame.
10. In the breakfast room, it can be helpful to label the vessels containing coffee and tea, particularly when they taste quite similar, so that guests can get a better mental image of what they are supposed to be drinking. The label on the “scrambled eggs” was much more helpful, as I wasn’t sure.
11. Finally, please do not leave pairs of underpants on the flat roof above the breakfast room, as this might be off-putting for diners.
I hope these suggestions for small improvements might benefit your establishment, and wish you every success in the future.
Kind regards.
I, Technology: When is a spy-pen not a spy-pen? Don't expect a politician to know
2012-11-22
You've got to hand it to whoever decided to use 'spy-pen' to describe the device that led to the recent resignation of the chairman of a Scottish college.
The term has everything: popular relevance, gadget credibility and just that frisson of edgy uncertainty. Damn right, it suggests. Whoever's using one should either be saving the nation from evil tyrants or banged up.
The trouble is, the device at the centre of the controversy is no such thing. The Livescribe pen has been around for a good few years. Yes, it can act as a notes and audio-capture device, in conjunction with special sheets of paper. But calling it a spy-pen is tantamount to calling the average tablet device a spy-pad.
Kirk Ramsay, the chairman in question, stepped down after a row with Scottish education secretary Mike Russell, over a recorded conversation. "It's quite a clunky kind of thing — not the sort of thing you can use without folk knowing," Ramsay told The Scotsman. "I have had it for three and a half to four years — you can buy it on Amazon."
The episode is a good indicator of the attitude to technology displayed by our heads of government. Note that no information was leaked, or intended to be. Merely the use of such a device was enough for Ramsay to have to consider his position.
In the worst case, it suggests that Arthur C Clarke's tenet that "Any sufficiently advanced technology is indistinguishable from magic" holds true even for mainstream device use. While we no longer burn witches at the stake, it appears that practitioners of such magic should still be treated with the kind of distrust usually reserved for travellers and vagrants.
A more generous observer might consider such remarks in the context of the bumbling judge in the 1980s TV series Not The Nine O'Clock News: "Digital watch? What on earth is a digital watch?" he asked, before expressing similar incredulity at a series of innovations that would today only be found in a museum.
What's particularly disappointing is that the Livescribe is actually useful, particularly for those who need to sit through long-winded meetings, which sometimes ramble off the point. The audio function — which is not enabled by default — can be very handy when, weeks later, notes that meant something at the time cease to make sense.
[Originally published on ZDNet]
December 2012
This section contains posts from December 2012.
A new kind of loyalty card
2012-12-12
Marillion / Separated Out - Redux
2012-12-13
Sloe gin in hand.
Twelve years ago, I had an idea - and we all know how dangerous those can be. Still, I went with it. I was never 100% happy with the result, which is unsurprising given that I had never done anything quite like it before.
The opportunity to revisit Marillion/Separated Out was one that couldn't be missed. There's an element of déjà vu; the process wasn't always straightforward, but then they never are. The only thing that really matters is the content, which will still be around long after we're all gone.
A big thank you to all at Foruli for making this edition possible, and for working so hard behind the scenes on the copy, the artwork, the whole package. To all at Racket of course, for initiating the project in the first place and for their support throughout. And to everyone involved in the text, the fans, the collaborators, and (it goes without saying) most of all Marillion, a band which has created the soundtrack for the lives of so many. To quote the preface: "This is your story: I hope I've done it justice."
For more information, visit www.foruliclassics.com
2013
This section contains posts from 2013.
January 2013
This section contains posts from January 2013.
Superdry - a non-technological success story
2013-01-03
Note: I drafted this last year, having had the good fortune to talk to Richy at the Cheltenham Design Festival. For one reason or another it was never published. And while the share price has had its ups and downs, the Superdry brand still holds its own!
Richy Baldwin stands next to a coat rail of t-shirts. Wearing jeans, a checked shirt and a baseball cap, he could be the bloke who brought the t-shirts in. Or a part-time musician. Or a sign painter. He is, in fact, all of those things.
He's also Head of Graphics at Cheltenham's success story, Superdry. Which means he's responsible for every single logo and design that has appeared across the clothing range since its inception, 8 years ago. "How do you draw them?" I ask. "With paper," he says. "And a pencil."
While this may sound like a dialogue from the church of the bleeding obvious, it makes a welcome change from the world of Wacom tablets, airbrushing and CGI that seems to fill the pages of today's creative magazines. We look at oil cans, made up to look like they've been filched from the back of some Grandpa's garage, complete with bashes and brown stains. "Black coffee and a paint brush," says Richy.
It's all such a long way from technology evangelists and their wild claims about how the latest big thing is going to change the world. Superdry's story is about a couple of blokes and a minor epiphany that inspired a clothing brand, marrying Eastern graphics and lingo with classic fashion.
Richy was one of the first on board, and has seen the company grow from a back room to a global brand. The well-documented story includes how David Beckham's love of a certain Osaka design undoubtedly catalysed the company's success.
What the story doesn't cover so much is how the business continues to be run in much the same way, with the same people. And without the need for all that clever gubbins around social media monitoring or targeted ads. Indeed, the company doesn't advertise at all - it doesn't need to, as its customers are its best advocates.
All startups can learn from this. Not that technology isn't useful - it can be fantastic. And not that the company ignores the potential of the Web - it employs a social media manager, for example. But when business is done properly, based on an innovative idea well executed, technology takes a supporting role.
A bit like a piece of paper. And a pencil.
What does “sustainable income” mean for authors?
2013-01-16
This is a question that has been puzzling me for some time (and I understand that I am not the only one). I started to look into it but couldn't find an exact answer - so I put the following together for the purposes of discussion.
Link here for a printable PDF. Feedback welcome, in any of the usual places starting with @jonno on Twitter or by emailing jon at this address.
Introduction
A burning question for many prospective authors is – what is a valid, sustainable level of income, and how does that translate into the output of authorship, i.e. the books themselves? Answering this question requires certain assumptions to be made:
- First, that authors – people who make money from writing books – need to eat. We have choices in this – of course, we could leave authorship to the domain of the independently wealthy, which would make for quite a limited group. But, it is assumed, that is not the preferred route.
- Similarly, if there are to be books in the future, printed or electronic, producing them will have to be worth the authors’ time. Which leads to the second assumption: that people will continue to be prepared to pay for ‘content’ – including books. Writing is a trade; any other way lies madness.
- Third, that as there will be a need for supporting services, an evolving industry of providers will emerge. Today’s publishers retain an inexplicable level of authority, which is fine for those who succeed. But new services are emerging around self-publishing, coming from traditional houses and elsewhere.
With these assumptions in mind, read on. First question – are we talking writing books as a full-time career?
Full-time vs part-time models of authorship
Authorship can only follow one of three models: full time, part time or hobbyist. Outside the bestseller lists, a good question is exactly how many established authors would qualify as ‘full-time’. Very few, is probably the answer (for reasons which will quickly become apparent).
A spectrum of part-time authorship exists, from people who have written a book in their spare time or to “tell their story” – these one-off authors had little intention at the start to write books as a career. Equally we have struggling writers, who need to supplement the income from their passion with ancillary work. The stereotype is working in a restaurant, but good money can be made from writing; indeed, numerous writers see books as simply one format they can work with.
Meanwhile we have the hobbyists – people who feel the need to scratch the creative itch by writing a book. Nowt wrong with that but the impetus to make money may be less than the desire to write. Or indeed, a subset of such writers may spend money to get a few hundred books published specifically for family and friends – somewhat disparagingly labelled as vanity publishing.
Suffice to say this piece is more targeted at people who are considering the first or second option – that is, who need to see a return on their investment of time and resources.
How much does an author need to earn?
A starting point is to think about what might constitute a “minimum necessary” salary for an author. Considering the full time model first, this could again divide into two – whether writing is an over-riding passion which trumps all other money-making opportunities, or whether it is simply an option – attractive, maybe – to be weighed up.
Let’s consider first the author doing it because they are driven to do so, in which case they will be prepared to compromise on salary. By how much? The minimum wage is £6.19 per hour. So, let’s say an author works 40-hour weeks with 5 weeks’ holiday plus 12 bank holidays – that’s 223 working days, or £11,000 per year, or (after tax) about £800/month. It may not be enough to pay all the bills, but that’s what the maths says.
In the second example, we might consider writing as an alternative to other professions which require a certain intellectual bent. It is not unreasonable for a new author to aim at earning – what shall we say – £25,000 per year, which is, roughly speaking, what a teacher could earn as a starting salary. After tax, that equates to £1,600 per month.
So, while the sky may be the limit, we can see £11,000 as an absolute minimum for someone that has bills to pay, or £25,000 to meet more reasonable expectations. So, how does that translate into book sales?
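For anyone who wants to check the working, here’s a quick sketch of the sums above (gross figures only – the after-tax monthly numbers depend on allowances, so they’re left out):

```python
# The "minimum necessary" salary sums, using the post's own figures
# (2012/13 UK adult minimum wage, 40-hour weeks).
HOURLY_MIN_WAGE = 6.19   # GBP per hour
HOURS_PER_DAY = 8
WEEKS_PER_YEAR = 52
HOLIDAY_WEEKS = 5
BANK_HOLIDAYS = 12

# 5 working days per non-holiday week, minus the bank holidays
working_days = (WEEKS_PER_YEAR - HOLIDAY_WEEKS) * 5 - BANK_HOLIDAYS
gross = working_days * HOURS_PER_DAY * HOURLY_MIN_WAGE

print(working_days)   # 223
print(round(gross))   # 11043 - i.e. roughly £11,000 a year
```

Which confirms the £11,000 figure, give or take the rounding.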
Using a publisher vs self-publishing – the financials
To state the obvious, the publishing world has transformed over the past five years. Not just in terms of the different methods available, but also the way they are perceived. Until quite recently, self-publishing was seen as a euphemism for vanity publishing, with all the baggage that carries. These days, it’s an acceptable alternative to publisher deals, and many established authors are following that route.
Looking at the traditional publishing model first, author royalties equate to between 5% and 15% of net sales, depending on the type of book and the leverage the author can exert. Meanwhile, a printed book can cost between £5 and £15, depending on format. Going for the lowest end of paperbacks, 5% of £5 is 25 pence, so achieving £11,000 would require 44,000 sales: that’s a lot of books.
Of course the publishing model is more complicated than that. Special contractual terms for discounted sales, licensing, foreign editions, returns all add to the difficulty of allocating a single figure. The best anyone can say is that the figure will be plus-or-minus a certain percentage (25%?), the probability of which goes up with the success of the release.
There’s also the question of an advance, which once again comes in two kinds. The first, reserved for celebrities and literary big hitters, is the kind which may never be expected to be paid back. When Wayne Rooney receives an advance of – well, whatever it was – it wasn’t on the basis that he would sell that many books, more that the publisher wanted to be seen to have him in their catalogue. The second kind of advance is, notionally, a repayable loan at zero-percent interest, paid to cover the bills of an author while writing. The downside is that the first few royalty statements will earn the author nothing.
The self-publishing model requires different maths. Start-up costs are zero (nominally, if you already have a computer), and the royalty model flips around – for example, Amazon and Apple iBooks charge 30%(?) of cover price for each sale. So, if you’re charging £5 for a book, you will see £3.50 of it, which suddenly makes lower sales figures matter less. Back at the minimum wage for example, you’d need to sell just over 3,000 books in a year – if you did all the work yourself, that is.
What work is there? Copy editors can handle about ten pages per hour, at rates about £25/hr. So that 80,000 word novel, at 300 words per page, would require 27 hours’ work (or £675). Then layout. If you’re going for an e-book, there’s very little to do to simply keep up with the major publishers – a “free” tool such as Sigil (the developer accepts donations) is all you need. For print, quotes for layout run at about £300 for a book of this length, and £275 for a cover. You may have artwork, which will add to the costs.
So overall, you’re talking about £1,250 minimum to get your book into a fit state for delivery. You could do these things yourself (though proofing your own work is never advised), or ask favours from friends and family – but seasoned people will do the job faster, with less aggro. A more realistic figure for self-publishing, then, is that you would need to sell a further 350 copies of each book to cover its costs. That’s around 3,400 copies in total.
Quantifying time and resource
While that still looks like a big number, these figures don’t have to be achieved with a single volume. There is no lower time limit on a book, beyond an absolute minimum of 10 days, which is about how fast a human completely in the zone can bash a typewriter keyboard.
More realistically and to follow Stephen King’s ‘On Writing’ advice, a reasonable writer should be able to churn out at least 1,000 words per day on average. It could be more – but on some days, it may be less, plus there is all that pesky research, structuring and characterisation/angle to be worked out, interviews to be done, places to visit.
All the same, with a target length of 80,000 words (again, there’s no rule here), that makes 80 days’ effort, or 4 months. Which, in principle, leaves the rest of the year for another two books. If following the traditional publishing model, this still means each book needs to sell 15,000 copies just to achieve minimum wage – which is still a bit of a choker when starting out, though not infeasible.
For self-publishers, the situation becomes even more viable. If you were able to write only two books in a single year, selling “just” 2,000 copies of each would start to pay the bills. Which is a good starting point and sounds achievable, though it may still be beyond the reach of many would-be authors.
What of the part-time model?
All of the above has assumed the full-time model, of course. Many authors work part time, with writing supplementing their ‘day job’, or vice versa. Indeed, while it may only be feasible to write 1,000 words per day, writing them doesn’t need to take the whole day. And indeed, the required ancillary tasks may not fill the rest of it either.
The part-time model reduces financial risk and enables books to exist where it simply would not have been possible otherwise. To reference Stephen King again, he pointed an aspiring Neil Gaiman towards writing a page a day – to paraphrase, by the end of the year you look up and you have a book. Spending an hour or two every day for a year may sound onerous, but equally, it is a good test of whether it was sufficiently important in the first place.
Equally, the part-time model cushions against the long lead times before actually seeing a payment. A book is highly unlikely to deliver a return from Day 1. The traditional publishing model pays royalties only every 6 months, and advances are becoming harder to come by, so relying on book income alone is a high-risk strategy.
Increasing the chances of success – targeting and self-promotion
The final reality check is that nothing is guaranteed, in authorship or anywhere else. While the baseline figure of selling 3,400 self-published copies of one or more books may appear feasible, the fact is that many self-published books fail to sell even a tenth of that number. And meanwhile, following the traditional publishing model requires stamina, not least to cope with frequent rejection.
We can all learn a good lesson from publishers – that they are only prepared to publish what they believe will sell. The same discipline should apply when self-publishing, in that prospective authors owe it to themselves to have a clear view on the saleability of what they are writing. This includes the topic – while books do follow fashion, it remains easier to target an under-exploited niche with a ready, affluent audience.
Much of marketing is not rocket science, but it does require people to do things that feel terribly discourteous. Such as asking people whether they would like to buy something (and if not, why not). Most people don’t like selling, and creative people more than most. But it stands to reason that a self-published book will remain in its carton unless somebody (if not the author) is actively looking to shift it elsewhere.
Which brings us to the other reason to use a publisher – marketing. Publishers may feel like a hard nut to crack, but once they have signed a contract they will act in their own interests to make sure the book achieves its objectives. However, non-bestseller authors who are already active on social networks, who have their own web sites and author pages on Amazon, might quite rightly ask what the publisher can add.
Conclusion – start from a realistic base
Over time, successful authors should be able to develop an income stream which can tide over the lean periods and enable more writing to happen. The financials are most difficult at the start of the process – when a prospective author needs to fund their existence for the months researching, writing and then waiting for the first payments to come through.
Most of the calculations here were based on the minimum wage, rather than the equivalent-wage scenario. To achieve the latter, an author would require 100,000 sales in a year. To put this into perspective, Nick Hornby’s About A Boy has sold 800,000 books, which is only eight times more. Bluntly, while you don’t need to reach the top 100 bestseller list, you do need a bestseller to break even.
In other words, authorship is not for the faint hearted and anyone that starts writing a book for commercial reasons needs to understand just how difficult this can be. Through simple economics, which are faced by the biggest publishers as much as the one-man-bands, the cards are stacked against authors who are outside the elite.
This doesn’t make it impossible to break through. Success begets success – not only will a publisher want to back a horse with ‘form’, so will readers. While the old adage does apply, “We all have a book inside us – and for some, it is better kept that way,” a successful book doesn’t have to be a work of creative genius, but merely pique the interest and make someone turn each page to the end.
Ultimately, people who really want to write will get on and do so. From the figures, the best advice whatever model is chosen is “don’t give up the day job” at least until another revenue stream starts to kick in. Becoming an author is fraught with risks – most important is to be realistic about marketability, and to recognise that everything comes at a cost, even if it isn’t paid for. That being said, nothing is stopping anyone putting pen to paper.
References
Referenced with thanks:
http://from.io/UtRWo6 What is the typical royalty rate for an author on book sales? – askville.amazon.com
http://from.io/V8C7Hc How Much Should You Write Every Day? – Write to Done
http://from.io/W7zct4 SfEP suggested minimum freelance rates – Society for Editors and Proofreaders
http://from.io/13EP0Jo FAQs: Using copy-editors and proofreaders – Society for Editors and Proofreaders
http://from.io/Ya5iuv The top 100 bestselling books of all time – Guardian
http://from.io/ZYKWzG Stop the press: half of self-published authors earn less than $500 – Guardian
http://from.io/U1V4bI John Scalzi’s Utterly Useless Writing Advice – whatever.scalzi.com
http://from.io/V5NtrJ And You Thought a Royalty Involved a Crown – editorialass.blogspot.co.uk
http://from.io/V5Nzj5 Publishing Money Myths | Frost Light – jeanienefrost.com
http://from.io/S4PiGJ How Much Money Does An Author Make? – KalebNation.com
An Origami mobile phone / smartphone stand
2013-01-27
Ever been stuck, miles from a gadget shop and needing to prop up your smartphone to watch a video or do some typing? Well, worry no longer - for here is the solution you have been waiting for - you can simply make one in a matter of minutes, no tools, no fuss. Read on to find out how.
Start with a piece of paper - in this case A4, straight from the printer tray.
Fold the paper across...
Then, fold the remainder backwards to make a triangle.
You can then cut/tear off the excess to make the square you need.
Now the real fun begins. Fold both corners into the line to make a kite shape.
Then fold one end over to make a triangle.
Fold both corners in and the triangle down. Note you will have to open the corners out again for the next stage - it's just to get the creases!
Now comes the only slightly tricky bit. Crease from the point downwards by folding each side in - then crease from the bottom corners as well. You should end up with what's shown in the picture.
Fold the corners back in now you know where the creases go, as shown.
Now fold the top corner down about an inch. Ready for the next, even more slightly tricky bit?
First fold the top over to make what looks like an elephant. Well, a little.
Then, take the "trunk" and tuck it into the neck. Make sure the end fits firmly as far as it can go. This is important.
Fold the whole thing flat...
Then, turn over and flatten.
We're nearly done! Fold the lower edge (with the fingers) up about a centimetre width. You'll notice that it's quite hard to fold in the middle because of the end of the "trunk" but this is what gives the whole thing strength.
Then fold about a quarter of that up.
Pull the back out a bit and you're about done - here's the finished article!
That's it - happy watching/typing!
September 2013
This section contains posts from September 2013.
Reasons why I may have just unfollowed (or followed) you
2013-09-16
In no particular order:
- Because I’m not absolutely sure who you are
- Because you haven’t tweeted in over a year
- Because I didn’t recognise you from your twitter handle
- Because you’re famous now and too busy to chat
- Because I know you socially on Facebook
- Because that’s not really you, it’s your news stream
- Because I reached out a couple of times, but… no
- Because you don’t follow me, and I just noticed
- Because you “follow” 100,000+ people – really?
- Because you do come across as a little bit of a knob
- Because we met and we got on, but we don’t share much
- Because my interests have drifted from your topic area
- Because I made a mistake – kick me!
Reasons I will follow you, if I don’t already:
- Because I unfollowed you by accident and just noticed
- Because we have met and I enjoyed your company
- Because you are interested in having a conversation
- Because you have interesting or funny things to say
- Because I value your opinion and want to engage
- Because your areas of interest overlap with mine
- Because I have worked with you, or may do
But most of all, because I see Twitter as a place for conversations, not a listening post, and I want to engage better with the people I do follow.
October 2013
This section contains posts from October 2013.
Another Morning
2013-10-03
It starts just like the rest of them
Clothing shrugged and bleary eyes
Doubting Thomas, slippering awake
Stops and stairs, not yet for the day

Half-made choices mingling, mangling
Teaspoon stirs, marmalade jars
Shrieking gadgets sound the charge
Curtains swish, to let in the day

Flick the latch, hair, crisp air
Coat and keys, shoes now tied
Practiced purpose, best foot forward
Long stride, now set for the day.
December 2013
This section contains posts from December 2013.
Isn't a Rush-Chemistry paperback the perfect Christmas present?
2013-12-13
Well, ho, ho, ho, it's that time of year again! We all know that a rock biography is the first thing on everyone's Christmas list, so now's your chance to pick up a paperback copy of Rush-Chemistry for £10 plus £3 postage. Just message me and we can sort out the details.
With only five biography-shopping days to Christmas, you don't want to leave things in a...
Happy New Year! And Baking.
2013-12-31
What a very interesting year this has been. Without going into painstaking detail, it started with me wondering if I could really hack it as an independent soul (I even applied for jobs - quelle horreur!) and ended with the thought that I wouldn’t have it any other way.
For the record I’m now working as an analyst for the illustrious crowd at GigaOm, chairing a fantastic technology incubator not-for-profit called Reconnectus, writing books and working in a variety of consulting, advisory and writing roles. Diary management can be a challenge.
But enough about that. If there’s anyone left on the planet that I haven’t bored rigid, I have been baking. Every weekend since September, in fact. For the record, baking has to be one of the oldest applications of technology conceived by the human race, taking us from mere hunter gatherers to, well, something more.
One of the fascinating things about baking is what it tells us about ourselves, not least our post-industrial-revolution, reductionist, misplaced view that technology can make everything better, and better, and better. To wit: Chorleywood, the village which gave its name to the process now applied to the majority of bread making in the UK.
(he paused, to put bread in the oven)
By the late 1980s only 3,000 independent bakeries existed in the UK - a by-product of our mistaken view that what comes in packets is intrinsically better than what is made by hand. The good news is, this number has now increased to about 7,000, and continues to rise. Why? Because, ultimately, bread tastes nicer when it is not subjected to mass production.
Technology is fantastic, isn’t it? As we live in the middle of some of the most dramatic changes ever experienced by humanity, we cannot but stop and wonder. Earlier today I documented my baker’s dozen (really) of technology predictions, ranging from the groundswell of smart to the orchestration singularity. Coming soon to a blog near you.
At the same time however, it presents a double-edged sword. Snowden and the NSA, hackers and spyware, email overload and online addiction, the challenges faced in numerous sectors not least music, publishing, retail and indeed the technology industry, all with a seemingly complete absence of control as to where it is all going.
I remain optimistic and stoic in equal measure, in the face of this increasingly data-oriented future. We need a new rule book - existing governance and legislation is repeatedly proving itself woefully inadequate, be it for corporations, individuals or indeed governments. Never has the proverb “May you live in interesting times” been more accurate.
Equally, we can learn from the bread industry that technology sometimes enables us only to rob Peter to pay Paul, sacrificing quality or experience for lower prices or broader reach. It may take us another two hundred years, but we are smartening up - learning that the X Factor's quick hit is ultimately unsatisfying. Simon Cowell could have come from Chorleywood.
The bread’s nearly done, so it is time to sign off for another year. Of all of our most ancient industries, perhaps the second oldest is potentially brewing - it isn’t hard to imagine how early tribes discovered other qualities of yeast, quickly realising that Man cannot live on bread alone.
On which note, the village hall and an evening of wine and song beckons. It just remains for me to wish you a delightfully relaxed New Year’s Eve and a successful, fulfilling 2014.
2014
This section contains posts from 2014.
February 2014
This section contains posts from February 2014.
Making Big Big Bread
2014-02-23
Some may have spotted that I have a thing about baking at the moment. As it happens, I also have a soft spot for the music of Big Big Train, and a repeated theme across the past few decades has been, believe it or not, beer.
By happy coincidence, the band members of Big Big Train appear to have a similar taste for the latter, to the extent of licensing a chocolate porter in their name, from the appropriately named Box Steam Brewery.
An obvious next step is to combine all three, isn’t it? To wit, I proffer below a step by step guide to making Big Big Bread. You will need:
800g of strong flour - I used 550g wholemeal and 250g white, to keep it light. If you use more wholemeal, you may have to add a bit (50-100ml) of water.
12g salt - I have some sea salt with seaweed in, which seems to make sense for reasons I cannot fathom.
12g dried yeast - not the “fast action” stuff
If it takes your fancy, a tablespoon or so of light linseeds and, potentially, some raisins.
2 bottles of Big Big Train beer.
For equipment, a variety of bowls - a smaller plastic one for measuring, a larger one for mixing. An accurate scale. A cheap plastic bowl scraper. A thermometer with a metal sensor you can poke in the bread. And the "English Electric: Parts 1 and 2" albums.
First, weigh out the flour and mix with the salt and linseeds. Make a well in the middle, and add the yeast, followed by one bottle of beer. Then push a bit of the flour from the sides into the beer mix - enough to make a liquid mush (the technical term is a sponge).
Cover with plastic, and leave for 20 minutes or so - until there’s been some sponge action - enough to make you think, “Ooh, look at that." You should be about finished with 'Judas Unrepentant' by now - great track. About an art forger.
Once you’re happy/bored, use one hand to mix the flour and sponge together, you’ll need the other to hold the bowl. Turn the rough dough out onto the surface and start kneading. Don’t worry if it’s a bit sticky; add a little water if it’s a bit tough.
Knead for ten or so minutes (pretty much 'Summoned by Bells'), folding the dough towards you and turning 90 degrees. The time is less important than the result - you want to be sure the dough is stretchy and supple at the end, so that when you fold it over it is like a taut belly after a good meal.
Put the result back in the bowl and cover with plastic for a couple of hours or so at room temperature, until the dough has reached twice its size. Be careful with airing cupboards, as they can dry the dough. (You could always *gently* warm the beer to 30 degrees or so; I didn’t.)
At this point, sprinkle a bit of flour onto the top of the dough and “knock it back” – punch it down with a fist. Then scrape the dough out (using the oh-so-clever plastic thing) onto a floured surface.
You can cut the dough according to your needs - two thirds would make a standard loaf plus three rolls, or the whole lot could go into a large tin.
Flatten and fold each piece of dough - both sides into the middle, turn and do the same again (like a sheet). Flatten and repeat, leaving you with nicely rounded balls. Oh, stop it.
Now’s the moment where you can use the raisins, if you are making a second loaf or rolls. Roll out the smaller piece of dough with a floured rolling pin, then cover the dough with raisins before rolling it up like a Swiss roll. Roll out again and up again before forming back into a ball.
Then - here’s the clever bit - get the air bubbles out of the dough by stretching the ’skin’ of the dough and tucking it in underneath. Put one (floured) hand either side of the dough and stretch it towards the work surface, tucking and turning the dough at the same time. By now, you will be gasping for a drink. Which is where the second bottle comes in.
Aaaand - we're back. Upturn the ball of dough onto one hand and pinch together the resulting seams before putting back in a bowl/on a tray and covering with plastic. Again. Leave for half an hour, for a second prove. Now you can shape the dough ready for putting into tins.
Oil the tins with a butter wrapper or an olive-oil-on-kitchen-roll combo. Flatten and shape each ball as before, then have a final shape by rolling it up while pushing in with the thumbs. Fold the ends underneath and pinch any seams before putting in the tin. Dust the tops with flour, then cover in plastic. Again.
Leave for fifteen minutes to rest before putting the oven on. Five minutes later, use scissors or a knife to slash, slice, cut or otherwise live out your vengeance fantasies on the top of each loaf or roll.
And into the oven they go. For about 25 minutes. If you have a thermometer, you want the bread to reach 94 degrees C, at which point it will be done - you’ll know as much if dough comes out on the sensor. If the top starts to burn, turn the oven down and put tin foil on top.
And if there's more room in the oven, you might as well use it!
And if the time says this, the bread may be very nearly ready.
Take the bread out of the tins and put onto a cooling tray to, er, cool.
Try to resist cutting off a sumptuous crust of freshly baked bread, smothering it in butter and biting into… oh, never mind. You’re done.
August 2014
This section contains posts from August 2014.
The Pit
2014-08-12
This much I know. There exists a dark pit in all of us, the blackest of black places, deep enough to feel bottomless. How fortunate the few who never have to fathom its depths, but most will, at some time and without warning.
Some enter never to leave, reluctantly languishing, having lost the energy to fight back. A tragic few lose the battle and pay the ultimate price. Others continue their struggle without a murmur, their despair and anguish visible only to those closest to them.
Those who are able may choose to ignore, or deny its existence, though the pit lurks within them as well. Many try to help to no apparent avail; some simply offer comfort and solace, which is all anyone can really do.
Eventually, after an age (the pit has no notion of time), its hold might loosen allowing light, once again, to shine into the depths. As the longest night gives way to day, for a while it may appear the pit is no longer really there. When it does return however, it is just as deep, just as black, just as indiscriminate.
All anyone can really hope, if lucky enough to emerge, is that moments spent outside the pit will grow and expand: happier hours, days, weeks spent without teetering on its brink or sliding, hopelessly, into its maws. Perhaps such times will extend such that, one day, the whole experience becomes no more than an unhappy memory.
When it returns, as it surely will for so many, may we all have the strength and knowledge that the experience can be but temporary, however permanent it feels at the time, and however disappointing it is to learn that the pit will never be completely vanquished.
Eventually, a fortunate few may nod in wry understanding of what it means to be human, imperfect, with baggage and with certain areas of conscious existence that can never be fully controlled, even if years pass before their deepest reaches manifest themselves again.
2015
This section contains posts from 2015.
August 2015
This section contains posts from August 2015.
How to choose wine
2015-08-05
For anyone that was confused about this complex subject, I've prepared a handy guide.
Big Big Train, Kings Place 15 August 2015
2015-08-16
Big Big Train, the band with the dumb name, playing nostalgia-laden music, seemingly derived from some forgotten glory years.
But.
Big Big Train, a symbol of determination, of uncompromising musicianship, of continuity.
A decades-old band which has stuck to its guns and aspirations, with paltry obvious reward.
An international collective of musicians, brought together by fate, the Internet their studio.
A spark, fanned to a flame, air drawn through tightly formed forums and online communities.
A reluctance to perform, not through lack of talent but the financial risk of delivering without compromise.
A spotlight shining from a benign corner of the media, whispering encouragement.
A venue found, a weekend fixed. Arrangements made, rehearsals undertaken.
Big Big Train, live on stage, once, twice, three times a masterclass in musical prowess, a lesson in humility.
And a reminder.
Fame and fortune glint like diamonds in the depths, tantalisingly beyond reach, but for a few.
For others the journey is longer but its rewards timeless, new strands weaving into our shared history.
Last night's gig was a quiet triumph. Uncle Jack would be proud.
The nature of connectedness
2015-08-21
It was my son, Ben, who first alerted me to the works of Temple Grandin, an autistic agricultural equipment designer who has written extensively about both her own experiences, and our general understanding, of the autistic spectrum.
One area she highlighted was that the human brain is in a constant state of redevelopment. Where neural pathways are underfunctioning or otherwise blocked, other connections get made. These biological adjustments to the circuitry of the brain enable signals to be re-routed, directly affecting a person’s cognitive abilities.
It isn't just autism where synaptic re-routing takes place. In a recent conversation a neighbour, who works in rehabilitating drug addicts, explained new research showing that addiction has physical consequences — essentially, new pathways are created to reflect the ’normality’ of drug use. Once made, pathways cannot be told to cease operating, which starts to explain why addicts can’t "just stop".
The parallels between addiction and autism don’t stop there. Temple Grandin has highlighted the importance of learning social skills for people on the autistic spectrum, even if this means they are operating outside their comfort zone, as they enable people to interact and function in society.
“In the 1950s, social skills were taught in a much more rigid way so kids who were mildly autistic were forced to learn them. It hurts the autistic much more than it does the normal kids to not have these skills formally taught,” she remarked.
Meanwhile UK journalist Johann Hari, himself on a journey out of a pit of his own creation, was researching drug addiction. His findings surprised him — that more isolated people were more likely to become, and stay, addicts. “What I learned is that the opposite of addiction is not sobriety,” he commented. “The opposite of addiction is human connection.”
Given how the technology revolution has made us more connected, and yet more isolated than ever, both Temple Grandin and Johann Hari’s observations may be of profound importance. External connections are vital to our well-being, for sure, and these may well be reflected in synaptic connections which, once created, cannot simply be un-knotted.
Perhaps we shall find that our belief systems also exist as neural pathways, which could explain why the stances we take are so difficult to change — our views may, quite literally, be hard-wired into our brains. Changing minds may also require modifying the cell structures upon which they depend, which cannot take place on demand.
If we do learn that we are what we think, it will have profound consequences on how we view such difficult topics as addiction and indoctrination. On the upside, perhaps it will also give us a better understanding of humanity’s all-too-frequent inability to act rationally.
Just a thought — or is it?
Toot
2015-08-21
In the pocket of my dog-walking coat was a small Lego trumpet. I vaguely recall how it had got there: it had been glinting on the road, I had reached down to pick it up, as much to see what it was as anything. Unthinkingly, I had pocketed it. There it had stayed, buried in fluffy detritus.
The road trip to Edinburgh had been long, but worth it. We stayed with friends who had just moved North, their cottage overlooking the sea, a microclimate keeping the storms at bay. Happy days seeing the sights and walking the dogs.
We had planned the way back to be punctuated with places we had always wanted to visit, visiting people we would never see otherwise. The first stop was Lindisfarne, the long drive along the spit windswept and bleak.
We parked and walked, backs hunched against the weather. The air was damp, a light, misty rain wisping back and forth. We turned a corner and started towards the abbey, now a ruin, its forlorn structure silhouetted against the darkening sky.
There in front of us, some players were unpacking their instruments. They had travelled from Germany, we were told later, a church band on tour, drawn to play on the island. And play they did, the soaring notes of brass weaving in the wind, man and nature intertwined.
As we watched, mesmerised, I put my hands in my pockets. My fingertips fell upon something, and for a moment I wondered what it was. Then I remembered — that plastic trumpet. Smiling, I put it to my mouth and said, “toot.”
And in that moment, all was right with the world.
October 2015
This section contains posts from October 2015.
What’s a blockchain? And is it heading for prime time?
2015-10-20
If, like me, you have been feeling slightly bamboozled by all the noise around blockchain, I thought it might be worth putting together a brief primer on what blockchains are and how they might be used.
Blockchain started out as a “system of record” for Bitcoin. Simply put, if I give you a Bitcoin, how can the transaction be verified as having taken place? The answer is for a third party to oversee the creation of a block — a package of data containing not only this, but multiple other transactions and some ‘other stuff’ to enable the authenticity of the block to be proved.
As it happens, transactions don’t have to involve the transfer of bitcoins; they could represent any event — say (and why not), a declaration of undying love. Once the transaction has been added to the system of record, it is duplicated across every computer storing a copy of the blockchain.
Indeed, any piece of information can be captured and stored as a blockchain event: once created, the record will exist for as long as the concept of the blockchain exists.
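The core mechanism behind all this is simpler than the noise suggests: each block carries a hash of its predecessor, so the whole chain becomes tamper-evident. Here is a minimal, illustrative sketch in Python — a toy, not how Bitcoin itself works (real blockchains add proof-of-work, digital signatures and peer-to-peer replication), and the function names are my own invention.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Package some transactions together with a link to the previous block."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    # The block's hash covers its entire contents, so tampering with the
    # data — or with any earlier block it links back to — is detectable.
    payload = {k: block[k] for k in ("timestamp", "transactions", "previous_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify_chain(chain):
    """Check that each block correctly references its predecessor."""
    for prev, current in zip(chain, chain[1:]):
        if current["previous_hash"] != prev["hash"]:
            return False
    return True

# A tiny two-block chain: a genesis block, then one recording an event.
genesis = make_block([], previous_hash="0" * 64)
chain = [genesis, make_block(["a declaration of undying love"], genesis["hash"])]
print(verify_chain(chain))  # True
```

Change a single character anywhere in the genesis block and `verify_chain` fails — which is the whole point: once written, the record cannot be quietly altered.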
Equally, the virtual world has room for more than one blockchain. Bitcoin has its own, and other crypto-currencies have theirs. Some blockchains (such as Ethereum) were founded to store information about both transactions and what has been termed ‘smart contracts’ — that is, programmable code that defines when a transaction should take place, or links to where and when an item was created.
As a result of both their indelibility and programmability, blockchains have been seen as a way of managing a whole range of situations and transactions. These include, of course, money transfers (in crypto- or traditional currency), but also situations such as preventing forgery and pharmaceutical fraud, as an identifier associated with an item (a painting or a drug) can be proven to be correct.
A particular area of interest is the arts, as blockchain mechanisms enable transactions to take place directly between artists, consumers and other stakeholders. As cellist Zoë Keating describes it: “I can imagine instant, frictionless micropayments and the ability to pay collaborators and investors in future earnings without it being an accounting nightmare, and without having to divert money through blackbox entities like ASCAP or the AFM.”
In other words, if you want to listen to a song you have a mechanism which enables me to directly and automatically pay the artist, and enables the artist to set the price, then directly and automatically pay other people involved. When used in this way the whole process, and resulting transactions, can become completely transparent and verifiable.
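The logic of such a contract can be made concrete in a few lines — hedged heavily: real smart contracts run on platforms like Ethereum in their own languages, and the stakeholders and percentages below are entirely made up for illustration.

```python
# Toy sketch of a self-executing royalty split: when a listener pays for
# a song, the revenue is divided among stakeholders at fractions the
# artist has set in advance — no intermediary doing the accounting.

def split_payment(amount_pence, shares):
    """Divide a payment according to fractional shares (which must sum to 1)."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {name: round(amount_pence * fraction) for name, fraction in shares.items()}

# Hypothetical split for a one pound (100p) track purchase.
shares = {"artist": 0.70, "producer": 0.20, "session_players": 0.10}
print(split_payment(100, shares))  # {'artist': 70, 'producer': 20, 'session_players': 10}
```

Encode rules like this in a block and every resulting payment is both automatic and publicly verifiable — which is the transparency being described above.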
Such mechanisms have the potential to deliver a new era of fairness in terms of how artists are recompensed, thinks Imogen Heap, whose new single was released last Friday with a Bitcoin pay-what-you-like mechanism. “I dream of a kind of Fair Trade for music environment with a simple one-stop-shop-portal to upload my freshly recorded music, verified and stamped, into the world, with the confidence I’m getting the best deal out there,” says the artist.
Numerous challenges lie ahead, not least that current blockchain mechanisms were not designed to handle the volumes of transactions, nor resulting sizes of records, that could result from mass adoption in such a wide variety of domains. Equally, blockchain tools can just as easily be used by incumbent organisations and intermediaries as altruism-oriented startups; it is by no means clear, for example, that music consumers will default to more artist-friendly models.
These are very early days. It is neither obvious what platform or tool to deploy to what end, nor are blockchain facilities straightforward for end-users to access — yet. Skills in blockchain design and integration into services are in very short supply, as is experience in writing smart contracts that make sense.
At the same time, the high levels of interest across such a variety of industries suggest that blockchain-based capabilities will have a considerable role to play in the near future. For better or worse, blockchains are here to stay.
November 2015
This section contains posts from November 2015.
Medium: Athens Marathon and the elusive sub-four
2015-11-10
5 a.m. and I have been awake for some time, that annoying habit I have of outpacing the alarm. I check my phone and Stuart has already texted. “Are we all up?” he asks. “Reporting for duty,” responds Alun. I get up, try for a shower but the water is cold. Not the most auspicious start. Cup of tea and congealing instant porridge, the latter a forlorn attempt at recreating what is already instant in its usual form. Won’t bother next time, I think. Oh no, I remember. There won’t be a next time.
This will be my fifth marathon. The first ended in disaster as I broke the only rule you need to know – stick with what you have trained for. Having set off at 9.30 pace and maintained it for 15 miles (deciding en route I was clearly untroubled by such mortal considerations), the pain started to set in at about mile 18. By mile 23 it felt like someone had inserted boiling skewers into my legs. I finished it – in 4.43 – but it took me several weeks to recover.
I’d put it to bed in Paris, taking my time and running a steady 11 minute mile pace to come in at a perfectly reasonable 4.42. Then Mark had suggested London, which I managed in 4.55 with only 11 weeks’ training. I didn’t want to leave it there so, heck, why not Rome, which I prepared well for and came in at 4.36. Hmm, I thought. What if I really, really tried? Perhaps one last marathon? If so it would have to be a classic race… what about the classic race, from Marathon to Athens? A Friday night conversation later and the die was cast.
I knew I’d been running faster. A visit to an osteopath with an interest in sports performance and a bit of acupressure magic had released my diaphragm, which had given me a significant boost to my pace. As any marathon runner knows, the mathematically arbitrary threshold of four hours is held in some esteem. To achieve that was beyond my wildest dreams, a barrier only real athletes could cross. Impossible or not, it was the only goal in town.
So I trained. I enlisted the help of a personal trainer. I read the books, I ate the food and drank the shakes for a year. 20 weeks before the day I started a schedule aimed at 4-hour runners. Then the bad news – a cursory review of the route told us it had at least 800 feet of ascent, over a distance of ten miles.
We took it on the chin and carried on, the Cotswold Hills more than adequate as a training ground. We found that my wife Liz’s cousin’s daughter, Virginia, was also running and we exchanged notes. And, to my surprise, our longest training run of 21 miles took place in 3 hours exactly. It looked like the fabled sub-four could be in our grasp.
Despite the usual minor injuries (including, for me, a bollard-based face plant on the final taper) and bugs, we all made it to Athens. It was only as we arrived – marathon running has always been a family trip – that I realised my long training journey was finally over; Stu and Al had picked up my registration so I had nothing to do but attempt sleep. Game on, gloopy porridge notwithstanding.
5.40 and I met Stuart and Alun in the lobby of their hotel, wearing traditional Greek slippers with pride. Everywhere were runners, Chinese and other nationalities. All bore the international expression of quiet expectation. The three of us headed to Syntagma Square, meeting Virginia and her impressively awake family on the way. By 6.15 am we were on the bus, hardly pausing thanks to the well-organised queues.
45 minutes later we had arrived at the small training ground at Marathonas, leaving plenty of time for morning routines, stretching and warming up. The rest was familiar – being herded into pens like cattle (leaving Virginia to a different pen), being blasted by poor quality, though upbeat music, multicoloured lycra and menthol bombarding the senses. And then the klaxon, the walk forward, the sensor across the road, the break into a jog surrounded by a horde of others.
Setting off was typically crowded but before long we were able to settle into a decent pace. The sky was a piercing blue and the sun already hot, and spectators were few. All the same, the first miles went past relatively quickly, the main sound being that of a thousand people running in near step. A few miles in we turned left and headed towards Pheidippides’ tomb, itself disappointing as it was somewhere in the middle of a circular olive grove. Taking my cue from others I grabbed an olive branch and wove it into my belt.
After 6 miles, flat gave way to undulation. There were as many welcome downs as there were ups, none of which were too daunting though the sun was getting hotter. We passed through a conurbation and saw some more crowds, the first having been round the tomb loop. Pace still strong: 8:25s. Keep it steady, keep it slow we were telling each other as the hill started to kick in some more. 10 miles down by now, we gritted our teeth for what we knew would be a big ask.
Mile 11, mile 12, mile 13. We were half way. The road went from town to country and back, with people at junctions. We passed a group of dancers accompanied by traditional Greek music. I joined in, briefly, even as my inner voice said I should be conserving energy. The sun grew hotter – a pharmacy sign said 23 degrees, and we were only half way up the hill. A drop in height gave us a fast mile and a welcome respite, then we were back on it.
Strangely, and in hindsight crazily, the route followed the right hand side of a tree-lined dual carriageway, from East to West. Which meant that the left hand side was shaded. But we were in full sun. The race pack had included a peaked cap which Alun and I were both wearing. All the same we were starting to feel the heat. Drinks stations, which were frequent, could still not come fast enough.
Mile 16 and exhaustion was starting to set in. I had set my sights on 18 miles, beyond which we only had another 2-mile push before the uphill would stop and we would start down into Athens. We stopped at a water station and regrouped before carrying on. We did the same at mile 18. Two miles to go, I said. On the horizon was an apartment block, which I fixed as the apex. Alun had been dropping behind, then Stuart was as well. I chose to keep the pace as the opportunity for a sub-four was rapidly fading.
Sure enough, the 20-mile mark signalled the top of the hill, itself marked by an underpass which meant a dipping, then a steep rise of the road. I can only guess the heat by now but it was hotter than before, 25 at least, the underpass offering a brief respite. Thinking of various advice I walked up the steep section, looking behind me but Stu and Al were nowhere in sight. Best press on, I thought. Then I was over the rise, the road stretching gloriously down and away from me.
With an hour to cover 6 miles according to my watch, I picked up the pace and was back at an 8.25. A minute in the bag, I thought. I knew I was running out of steam so I started to think tactically. 10 minute miles all the way and the sub-four was mine. I took a few walks in the following miles, through exhaustion but also in the knowledge I was within my time. Mile 24 passed. Two miles to go, that’s round the barn and back I thought. And all downhill, pretty much. Another underpass, another walk.
Half a mile later, a familiar voice. “Mr Collins, how the devil are you!” said Stuart, coming past me. He had got a second wind. “Just keep going, one foot in front of another!” I let him go, stopping again to catch my breath. I knew I had a mile to go, and ten minutes in which to do it. It was tight, too tight, but I was nearly there.
I rounded a corner, going under yet another inflatable arch, each teasing with a potential finish. Another, wooden arch was displaying numbers. Am I there? I wondered, a tiny ray of hope growing within me only to be quashed by the next sign I saw. “3 km” it said. I realised I was still more than a mile from the goal and I had less than ten minutes in the bag – I had been following my watch rather than the signs, so had misjudged the timings. The potential for a sub-four was lost, gone forever.
I was broken but had no choice but to carry on. A left hand bend led into a lush, wooded avenue, all downhill. It was beautiful but it stretched on for ever. I had already seen a man’s legs give way from underneath him, the crowd rushing to his aid. Would it happen to me, I wondered. Just how long was this road. And where was the bloody stadium. The road gave way to another, then perhaps another, I can’t remember. But then I saw another left bend, and I knew I was there.
I turned on to a plaza, a steep ramp appearing like a ten foot wall in front of me. I climbed it, then another, arriving on the track of the stadium full of people. Somewhere was my family, were Stuart and Alun’s families, were Virginia’s family perhaps. I looked but could see nothing but a sea of faces. And in front, so, so far away, the finish, looking like they had put it as far away as possible to create one, last, insurmountable challenge.
And there was Stuart’s wife Sharon and Alun’s wife Annabel. And there was my wife Liz, shouting my name. And there was the finish, looming up before dissolving in front of me, giving way to a cluster of runners slowing, falling to their knees, holding on to barriers or just walking in a daze, as if the concept of stopping had been lost to them.
The line of the track curved round some medical tents, which I followed. “Jon!” There was Stuart, looking as knackered as I felt. His watch battery had run out so he didn’t know his time. My watch said 4:03. I knew I should feel elated – it was 33 minutes off my previous best time – but I felt… nothing. Too tired. We sat in the shade by the edge of the track for a few minutes, saying nothing. Then Alun appeared and we shouted his name, hoarsely, eventually pulling ourselves to our feet.
“That is the hardest thing I have ever done,” said Stuart, a sub-four runner. “In all my years in the Army, we never did anything as hard as that,” concurred Alun. I could only nod. Not knowing whether to stop or move, we carried ourselves forward to get our medals, lining up together like Olympians. In the Olympic stadium. In Athens, the end point of the original, classic, authentic Marathon. And we laughed. And we swore never to do it again.
A half hour later we collected our bags, ate bananas, met our families (and shed some tears) and headed back to the hotel ready for what would be copious amounts of alcohol. The debrief was, simply, that we had done it – achieved some great times in challenging conditions. The race itself, half uphill along one side of a dual carriageway for 26 miles, was not the most attractive, nor would it be first choice for crowd participation. But it was most certainly one for the runner’s bucket list.
That evening, with a Metaxa seven star brandy and a cigar I could barely smoke, I looked up my official time – 4:02:57. Over the days that followed I smiled and laughed about how 3 minutes didn’t matter one iota, how it was only due to a series of arbitrary measures, how it was still an amazing achievement and nothing else mattered.
And I knew, in my heart: this ain’t over.
Evil is real
2015-11-15
Evil is real. It has existed throughout history, it pervades every culture, religion and ideology. It exists today. While it defies precise definition, it is clearly recognisable by all of us. We know it because we know ourselves, and we know what we, or the people around us, have the potential to become. We are weak; we draw strength from each other; we are easily swayed by charisma, by strong leadership and by aspirational ideas. We know our own histories, in which ordinary folks have been turned against each other on the basis of an idea which even they do not fully understand. It has always been so and will continue to be so, unless we somehow change the basis of what it means to be human.
We do not yet understand what causes someone to become evil. A loss of empathy, a lack of care for fellow human beings, the broader environment and even oneself is a part of it, but even this is not enough. Indoctrination, an unconscionable abuse of our inherent and important ability to embrace and incorporate stories into our own psychologies, this also plays its part. Our desires and aspirations, so necessary for survival, can become a lust for power, a drive to exploit. And the wish for familiarity has a dark side, when it means we ignore the suffering of others or even wish to harm those who have other ideas than our own.
These are human traits, themselves survival skills honed through millennia. They are why we are here. For anyone to say otherwise is to deny their own humanity. To advise that we should operate as though this were not the case is at best going to result in short-term consequences which may exacerbate, rather than resolve, the problem. But to act like it is true does not mean to appease, to be ‘moderate’, to be woolly-minded. For evil is real. It acts like a cancer across the organism we call society, and it will continue to grow unless it is tackled.
As we look to respond to evil however, we need to recognise its causes. While it may be possible for people to be born evil, in general the contexts within which people grow offer a clearer explanation of their behaviours. Those in more prosperous, inclusive environments, in which they have more freedom to act and be themselves, in which they are accepted rather than disenfranchised, are less likely to conduct school shootings or suicide bombings. “Why did they,” we ask, without waiting to hear the answers, preferring our own, generalised perspectives and agendas. “Because America,” we say. “Because Islam.”
And yes, we need to tackle these causes. Not to appease or to show weakness, but because they are the causes. It is going to take tens, maybe hundreds of years — we are still at the beginning of the process of creating a globally fair and just society, and there remains an unfathomable amount of work still to be done. But this is the journey we are on, away from disease and child mortality, away from poverty and towards peace and acknowledgement of basic human rights. We will not get there overnight but there is plenty of reason to be positive and optimistic.
Meanwhile, we also have to tackle the symptoms. When evil emerges, whatever its complex web of causes, it needs to be dealt with. We cannot hope to get healthier as a society if we allow hate, wanton destruction and murder to go unchecked. We owe it to the victims, to their families that we do not stand by and say, “Sorry, there was nothing we could do.” We can, and should condemn evil, wherever it manifests itself; we can, and should protect innocents against evil actions, whatever rhetorical framework is used to justify the motives. And we should hold the perpetrators to account, unstintingly and without compromise.
But in doing so, we must also remember that every human is both consequence and cause. Hate begets hate; anger begets anger; resentment and powerlessness cause a righteous hunger for power which corrupts, which has the potential to recreate the exact conditions, only for them to be imposed on others. By slaying the monster we risk becoming monsters ourselves, as so many of our stories tell us, not to distract and entertain but to repeat an ancient lesson that otherwise we find too easy to forget and ignore.
We all have a choice. Love is not the answer, not by itself. But a decision taken without love plays into the hands of those who, whatever their backgrounds and whatever their justification, would see the destruction of all that we hold dear. From love we find strength, we find understanding, we find community, we find acceptance of difference, we find similarities that bind us more tightly than any ideology. A society built on love is not weak; rather it is strong, forthright, able to respond to harrowing circumstances, united against the only thing any of us have really to fear. The ultimate enemy is not ‘them’, it lies within each and every one of us. We all know this to be true. And we should do everything in our power to overcome it.
We are road kill
2015-11-17
We are road kill
Consequences
Cast aside
On a bloody road
To a non-existent heaven
We are less and less in this together
2015-11-24
I’ll stop with the philosophising soon, but bear with me on this one. Perhaps this is blindingly obvious to everyone else but I’ve only recently worked this out — that people in the UK and perhaps elsewhere are being alienated, across the board.
Not only “them people out there” or “over there” but people all over the country, from all backgrounds. Local people. People in cities. You, me, all of us. People of all classes and all cultures, whose views, fears and concerns are not being taken into account, not by the ‘leaders’ of this country, and increasingly, not by each other.
It starts at the top. Ignore the crises for a moment and there is little from the government that constitutes actually helping anyone outside a rich minority, or scoring political points. Austerity is a broken approach. And the “all in this together” mantra has long ceased to be even slightly funny. They never were “all in”, so why should anyone else be?
I also see little from any party that offers a realistic plan to help the general populace today. Nor people with the necessary charisma to carry it off — we have toffs, shiny suits and schoolteachers according to various representations, none of whom have presented a sustainable vision that includes, engages and inspires anyone outside their own focus groups or party faithful.
No wonder people wonder why they should bother to vote. The country teeters on the brink of recession, and people lack much to get excited about. When they complain, they are ignored. It’s more than tragic. It’s dangerous.
It also creates a situation that’s easy to exploit. It’s why people of all political persuasions flocked to Corbyn (temporarily, it appears) but also to UKIP, because people felt they were being heard, their views acknowledged.
It’s also being exploited by the media — whose first concern will always be commercial. News organisations are creating echo chambers, telling people what they want to hear for no other reason than to sell more papers. Even when it plays into the hands of fear mongers, for the media does not trouble itself with a conscience.
The overall consequence is more alienation, as people finally start to think they are listened to but as a result, they become more isolated from views they might disagree with. Which makes the press more forthright, more likely to make outrageous statements. It’s a vicious cycle.
All this would be true without the migrant and refugee crises, without the existence of psychopathic nutters and indoctrinated youths. Add these pieces and we have, in US parlance, a real situation. We see it enacted on social media every day. People are (rightly, in my mind) troubled by the amount of divisive speech that has surfaced online, particularly since the Paris attacks.
But who is presenting a rallying call to make things otherwise? We have a distant government, currently calling for airstrikes that nobody knows will be effective, nor even right. Nowhere is there a clarion call for communities to start building bridges and closing divides. Why?
Because it isn’t politically useful. Leading the country is less of a priority than influencing the converted.
In consequence, society is dividing against itself — not just against minority groups, but with increasing intolerance for other political views or even for the expression of individual compassion. We, the populace, are nailing our own colours to a variety of masts. And in doing so we are fighting, arguing. Tensions are running high.
People are looking for answers. To tell them their thoughts are invalid is alienating.
People are defending others. To tell them they should not is alienating.
People are afraid. To tell them they are being stupid, or that they should care more, is alienating.
People are angry and frustrated. To tell them they should not be is alienating, particularly as their fears, anger and frustration have not just happened overnight.
And, of course, innocent bystanders are being judged because they happen to look like, or come from the same place as those we are afraid of. That one goes without saying.
In other words, we are alienating each other. We can blame the terrorists for this, but in reality they are profiting from and building upon a situation that already existed.
We have so much going for us as a nation, in all its celt/anglo/saxon/dane/norman/european/indian/sub-saharan/moslem/etcetera glory, to be proud of. Our United Kingdom of islands and nations, our fantastic mix of cultures, our sense of fair play, our love of tea and knitting, our spirit of a country that has always punched above its weight, our innovation, our complete inability to be cowed, our good humour, our commitment to each other and to our communities.
It’s all brilliant, it will sustain.
We should be applauding each other, smiling as we say bog off to anyone, within or without, who thinks they can in some way take that away from us. It is notable that the most “shared” people stating such views have been TV commentators, not our political leadership.
We need leaders that can inspire, that seek to unite rather than exploit divisions, that actually want to lead our great nation in all its multifaceted glory, to have it stand tall, as a beacon of light in the world. We need leaders that listen to the concerns of ordinary people of all creeds and persuasions, rather than trotting out terms like “hard working people” as euphemisms for “people that might vote for us”.
Only by standing together can we move forward without fear, and we need leaders that recognise it. We’ve seen it in other troubled periods of our history; we saw it most recently in the 2012 Olympics, and we need it now more than ever.
December 2015
This section contains posts from December 2015.
On writing Rush-Chemistry, and Egg Nog Gate
2015-12-08
“What’s next?” said Sean. “You can’t just leave it there.” Sean was the self-styled big cheese at Helter Skelter, the publishing firm that took a risk on the first edition of ‘Separated Out’, a book that became the authorised biography of Marillion. Sean was also a true gent, a person who cared deeply about music and the cast of thousands who made it all possible. His shop was on Denmark Street in London, an eclectic treasure trove of musical literature, itself surrounded by guitar emporia and just a stone’s throw from the proudly independent book store, Foyles.
Sean was right, but I had not given any thought to the subject. “I don’t know,” I replied. We’d met in Foyles’ jazz cafe, its uncomfortable, squashed together benches an apparently deliberate statement: this is art, you’re supposed to get a sore arse. Sean sat cross-legged, the only way he could squeeze his lanky form into the only, cramped space available.
“What about Rush?” I suggested. The band seemed to fulfil the necessary criteria: despite having a strong following (and therefore, readership), they were not mainstream and therefore not particularly written about. After a period of hiatus due to some deeply tragic circumstances, the band were back together, recording and touring. On top of that, the power trio had delivered the soundtrack to my school and college years.
“Let’s do it,” he said. I paraphrase — the conversation filled a good hour — but the gist is there. Rush seemed a logical choice, I was hankering to do more writing, and above all, I believed I had the process nailed. Start writing (thanks Karin), get a draft done, send it to the band’s management, sort some interviews and, well, job done. How hard could it be?
A year or so later, we were back in Foyles. “This is your difficult second book, isn’t it,” laughed Sean, his eyes sparkling. I nodded, shrugging. Things hadn’t quite panned out as planned. The process worked, after a fashion; in addition I had benefitted from my day job to travel to the US, and therefore Canada, which was a boon — I had been able to meet a number of wonderful people en route, producers and engineers, and pick up what I thought were great insights.
At the same time, given the distance I was less able to rely on serendipity for meetings. A fortunate coincidence led me to speak directly to a personal friend of a band member; an unfortunate misinterpretation meant a message went back that some bloke was writing an authorised biography of the band. I never said it, but that is what was heard.
Two things all budding biographers should know about the nature of music writing: first, that it is, in many cases, parasitic. It is possible to have a situation in which the subject gains as much from the relationship as the writer, but equally frequently, this will not be the case. And second (as I have learned through anecdote and example), a known technique for obsessive fans to gain close proximity to their heroes is to claim to be writing a biography.
So the mis-hearing of the term ‘authorised’ (I had used it, to describe the Marillion book) was more than a setback; it captured the worst fears of a band who had become rightly, and fiercely, protective of their own privacy. I’m sure it didn’t help that, having written some 80,000 words on the band by this point, I had become a little obsessive myself.
But the biggest challenge was neither of those things; it was, in fact, timescales. The people at Rush’s management company SRO-Anthem had been nothing but helpful, putting me in touch with close collaborators and giving me good references. Despite the setback, I had been told that I might still be able to talk to the band at some point, but it could take time (lesson 3, biographers: you are dealing with human beings, not facades). If only I had such time, but back in the UK Sean was gently cajoling me to get the thing finished, otherwise I would miss a publishing cycle.
There was some excellent news. I had spoken to Hugh Syme, Rush’s talented and long-standing partner for album covers and indeed, musical contributions. Hugh had offered to do the cover for the book, an idea that I put to SRO, who asked Neil Peart, Rush’s drummer, who also had the most to do with artwork within the band. Neil said yes, and Hugh immediately created a fantastic cover based around a chemistry theme.
As the now-immovable deadline loomed, I had one chance left to engage in person, on a December trip to San Jose with the return journey via a cold, wet Toronto. A tentative morning appointment with SRO-Anthem was quashed at the eleventh hour by the needs of the office Christmas party that afternoon. “I’m sorry, but I’ll be making egg nog,” I was told, a remark which will for evermore be referred to as “Egg Nog Gate” in our household. Lost for what to do, I booked the next flight out.
The book was finished shortly after, and we went to print with a truly spiffing cover. Happy indeed is the person whose book is stocked by Foyles itself. As for the content, I was, and remain, disappointed for a number of reasons — not just the lack of direct band insight, nor the fact that a decade later, I know I could do a much better job. In addition, Sean’s interest seemed to have waned, taking with it his attention to detail. At the time I accepted this situation gladly (he usually wielded the red pen like a stiletto) but the prose is weaker for his reduced editorial input.
Little did anyone know that Sean was displaying the first signs of leukaemia, a bitterly nasty condition that would take him from us — but not before he had recommended me to Mike Oldfield, a gesture for which I remain profoundly grateful. The logistical downside was that Sean left behind a music book publishing company which never really managed to get back on its feet; several years later I chose to take back the rights, as the only financial purpose the book was serving was to pay accountants.
Sometimes things don’t work out how you hope, but there is a positive twist. The year after publication, I received an email from a good friend of Neil’s, who wanted a copy of the book to give to the drummer for Christmas. I sent one off with much delight, from one budding writer to another, more established author. As Neil announces his retirement from the band with which he has played for the past 40 years, and which has brought so much pleasure to so many, I would like to add my own gratitude to the man who gave me his support when all else seemed dark. Thank you Rush, and thank you, Neil.
Twitter map of the world - follower projection
2015-12-11
I have realised I am in that nether "working it" land...
2016
This section contains posts from 2016.
January 2016
This section contains posts from January 2016.
Social media owners need to stop running uncontrolled playgrounds
2016-01-05
Just how nasty can social media get? Twitter in particular has found itself under the cosh, due to its historical complacency. “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years,” the company’s previous CEO Dick Costolo wrote back in April.
When Jon Ronson wrote a book about the dangers of ‘shaming’ on social media it was to Twitter that he was largely referring. While more extreme cases have resulted in criminal convictions, the majority of tweets fail to cross any significant threshold individually but can build a picture that, as Ronson points out, is more akin to the medieval punishment of the stocks than rational debate.
The fact is, it is simply too easy to throw verbal insults around online. For example, whatever people felt about the politics of Andrew Lloyd Webber’s vote on the Tax Credits bill in the UK House of Lords, many remarks dwelt on his appearance rather than his decision.
And now with a falling share price, Twitter is stumbling into action, re-engaging founder Jack Dorsey as CEO and offering new capabilities that might prevent user numbers from flat-lining. Among the features is Moments, a.k.a. curated streams of tweets around current events. Hold on to that word - ‘curated’.
Twitter isn’t alone in finding that the dream of user-generated content is more Lord of the Flies than Paradise Island. Equally notorious for its crude and sometimes cripplingly harsh comment streams has been YouTube. And indeed, across the Web we have platforms that have been sources of highly offensive, even abusive content.
Facebook, for example, remains a significant source of cyberbullying as teenagers use the service to display behaviours previously limited to the offline world. And Snapchat has been linked to sharing inappropriate images, with consent or otherwise.
It is interesting to compare the models of different sites. Whereas on Twitter most messages are shared publicly, on Facebook they tend to be shared with (so-called) friends. To state the obvious, this makes the former more of a platform for public shaming, and the latter for bullying within close communities.
It seems that every time somebody comes up with a way of sharing information, it invariably becomes misused. So is there any hope? Interestingly, we can draw some inspiration and hope from a site that appears to tend towards the chaotic.
Reddit, that lovechild of Usenet forums and social media, enables the creation of individual spaces (‘sub-reddits’), each of which is curated by its creator to a greater or lesser extent. While parts of Reddit can get a bit hairy (in the same way as Twitter), at some level, humans remain in control.
The notion of curation — that is, keeping responsible people and the community involved — does seem to hold the key. For curation to be possible it requires the right tools.
The importance of the down-vote to Reddit cannot be over-stated, as it creates a generally accessible balancing factor. Right at the other end of the scale Quora (which also has both an up-vote and a down-vote) delivers on safe, wholesome, curated Q&A.
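The mechanics are simple enough to sketch. What follows is a purely hypothetical toy model, not Reddit’s or Quora’s actual ranking algorithm: it just illustrates how a down-vote gives the wider community a balancing factor against a vocal pile-on.

```python
# Toy model (hypothetical, not any platform's real algorithm) of why the
# down-vote matters: with only up-votes, volume alone wins; with a down-vote,
# the community can push abusive content below fresh, unobjectionable posts.

def score(ups: int, downs: int) -> int:
    """Net community score: every reader can push a post up or down."""
    return ups - downs

posts = {
    "helpful answer": (120, 5),
    "pile-on insult": (300, 450),  # popular with a mob, rejected by the rest
    "new question": (3, 0),
}

# Rank posts by net score, highest first.
ranked = sorted(posts, key=lambda p: score(*posts[p]), reverse=True)
# The insult sinks to the bottom despite having the most up-votes.
```

Under this model the pile-on, despite 300 up-votes, ranks below even a brand-new post: the down-vote is what lets "humans remain in control".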
It is this additional level of responsibility that should set the scene for the future. With a caveat: nobody wants the Web to get “all schoolmarmish on your ass”; indeed, even if it could, it would doubtless cause people to run to the nearest virtual exit.
At the same time, we can see a future where we move from the uncontrolled playgrounds of social media (with the occasional, knee-jerk reaction from their respective authorities) to a place in which we take more personal responsibility for our actions.
The alternative is irresponsibility, either on the part of the individuals creating messages or the companies allowing it to happen. It’s like the old English adage: “The problem isn't stealing, the problem is getting caught.”
Simply put, our online culture needs checks and balances at all levels, not to restrict general behaviour but to prevent the excesses we exhibit if no restrictions are in place. It is no different in the virtual than the real.
While platform providers may not see it as their place to act as judge and jury — itself a point of debate — they should nonetheless provide the tools necessary to ensure people can congregate without fear for their online safety.
Not to do so is irresponsible, but more than this: as we develop and mature online, we will inevitably gravitate towards platforms that, by their nature, offer some basic protections against abuse. Many Twitter users are moving on; and it is surely no coincidence that Facebook is less popular with the young.
Even as social media companies look to provide more ‘exciting’ ways to interact, they ignore such basic, very human needs — such as existing without fear — at their peril.
I have seen the future, and it doesn’t mention Uber
2016-01-07
As we all know, some of the best ideas come when they are least expected. For me, the epiphany came just as I turned into my driveway; how fortunate that I was moving very slowly at that point, as it was a bobby dazzler of a thought.
Insight often comes through one of three routes: aggregation, extrapolation or conversation (for the record the last is often the most useful but the most risky, due to its anecdotal nature). Aggregation comes from quantitative research; extrapolation from analysing trends and, in the parlance, looking at where the hockey puck is headed.
In my experience, two kinds of extrapolation exist. The first is shorter-term, based on seemingly sudden changes in the business or demographic landscape. Right now, for example, everyone is going digital, mobile, social and so on. And the sharing economy is in full swing, based on the valuations of Uber, AirBnB and the like.
Such waves tend to overlap with, and overtake, each other, each wave revealing new winners and sending others crashing into the rocks. Behind such shorter-term changes, however, are the relatively glacial forces of technology evolution, each continuing on its well-trodden path.
For example Moore’s and Metcalfe’s laws, driving miniaturisation and the network effect, each of which has had such a global impact over the past few decades. While such principles may eventually reach their logical ends, they have a way to run before their impact recedes – the Internet of Things is the most recent consequence.
As they continue, it is they that create and destroy whole businesses. The sharing economy has emerged due to cloud-based information sharing capabilities, creating a cookie cutter of opportunities to link suppliers with consumers. The resulting businesses are symptoms of where technology is at today.
But technology continues, regardless, becoming easier and more open. Inevitably, the ability to link trusted parties will commoditise, becoming a feature of the platforms upon which we increasingly depend. “Sharing” is something we will just do, as it becomes as straightforward to borrow a book or a shovel as to offer a lift.
Any new technology startup has an opportunity to benefit from a massive pool of potential difference as it short circuits traditional business models — simply put, making things cheaper for people while operating at lower cost than traditional businesses. Once such opportunities are plugged however, they become part of the landscape.
This means, however successful a young company might be (I’m looking at you, Uber) it has limited time with which to establish itself. This means either becoming a new platform (move over, Amazon, Google and Facebook) or being subsumed, potentially at massive profit to the original founders — in which case, job done.
But it would be a mistake to see any such business as the shape of things to come. I have seen the future and it is us, going about our daily lives with the tools we need to do so. Sure, we will be sharing and serving each other’s needs. But we will not need multiple third parties to do so, however successful they may appear in the short term.
February 2016
This section contains posts from February 2016.
Hipsterism isn’t a fashion, it’s a reaction...
2016-02-12
...against Perspex and neon, against soft edged fonts and loud noises and being told what to think, say and do. It’s a statement that says, I like the old stuff! I like the things I was brought up with, that my parents and grandparents grew up with. I like old vellum and charred wood, frayed edges and hair. Hair! I like hair, long hair, beards, moustaches, all these things that are being denied in the size zero, androgynous modernity we inhabit. I like dolly mixture and sunsets, camp fires and long walks and vinyl records, not because I want to be a part of some exclusive club of cool kids but because I reject the alternative, that clean cut, cold hearted and clinical climate of collective conformity called modern life. Unable to react to it, to respond to it, to reject it outright, I find myself in a passive aggressive no man’s land of simple pleasures, of checked shirts and hard boiled eggs, of pencil sharpeners and pot plants. And they know it, and they don’t like it. Try as they might they cannot capture it, market it, not beyond replicating sun bleached and burnished themes in their publicity shots, in using similarly serifed fonts and whimsical quotes in their throwaway tag lines and ironically unironic marketing copy. But however hard they try, they cannot be it, because it implies an absence of precisely what they are trying to achieve. By looking to recreate nostalgia, they merely reinforce the reason we look for nostalgia in the first place. Meanwhile the real irony, that the point is lost on them, is lost on them. Hipsterism isn’t a fashion, it’s a reaction. And it will pervade.
My big TV moment - Tetbury Woolsack and Record Breakers
2016-02-23
A few years ago I was on the Tetbury Woolsack Committee, which organises the "internationally famous and world record breaking" Woolsack Races. As befits an event of such esteem, we generally managed to get a local celebrity along – that particular year we’d somehow got the presenter of the BBC’s Record Breakers TV programme to come.
That year also, I figured I should run the race itself, which involved carrying a 60-pound sack of wool a hundred yards up a pretty steep hill. I wasn't brilliantly in shape, but you know, what the heck. The morning of the event, we organisers were up at 5am as usual, putting up signs, planning stall locations and so on. In a quiet moment I happened to be standing by the face painting stand. “Come on, let’s paint your face,” I was told. “Sure,” I said, promptly forgetting all about it.
Everything went pretty smoothly – barriers arranged, PA speakers installed, tents put up and so on. Mid-morning the Record Breakers team arrived, with presenter Jez Edwards doing a round of interviews. We spent a good few minutes talking about the history of the town, the event and so on. Wow, I thought, I’m going to be on the telly.
The races started shortly after, first the kids and then the more serious racing. I was in the third heat: just before it began, Jez Edwards decided he would also have a crack! So there I was, racing against the man from the Beeb. He quickly left me for dust; I plodded up, making it to the top somehow.
A few months later, the Record Breakers Woolsack feature was due to be aired on the TV. Of course, we all gathered round the box. The sequence started with the usual – some sheep, an archetypal Cotswold pastoral scene and so on. Then it cut straight to Jez Edwards, talking at the bottom of the hill.
“So here I am, at Tetbury Woolsack Races,” he said. “It’s been such an exciting day, I’ve decided to join in!” As he did so I knew, somehow, that my big TV moment was imminent. And sure enough it was, but not quite as I expected. “And guess what,” he continued, “I’m even racing against the local clown!”
Cue shot of me in face paint, looking startled. Fade to race view of Jez sprinting up the hill. Cut.
May 2016
This section contains posts from May 2016.
Ode to W H Smith
2016-05-25
Oh W H Smith, how I miss
The rows of sellotape
So carefully arranged between staplers
And ring reinforcements
The magazines on shelves of steel
That reached so high and stretched
Away, away from once young hands
Oh W H Smith, times have changed
The world can only wonder where
Your thunder, once so strong, has gone
The lights seem somehow dimmer now
The pennies scrimped and saved
To shore up marginal gains
Pained relics, remnants of retail nostalgia
Oh W H Smith, dare I stand
Amid the shelves, once elegant
But now awry, ramshackle
Labelled with dog-eared cardboard
Corners cut, your former joy
To serve shut down, shutters stuck
In some half open, slack-jawed position
Oh W H Smith, where are you now
My memories, still so sweet, though tarnished
By today’s shambolic tactics.
Even as you betray our friendship
I hear your words and see
My own mortality:
“Can I interest you
In any of
Our special offers?”
June 2016
This section contains posts from June 2016.
On Brexit, elephants and pythons: it’s a jungle out there
2016-06-06
Anyone struggling to see the wood for the trees in the Brexit debate? I know I have been. I bloody hate the term, for the record. Taking this complex and important series of questions down to such an inane play on words is an insult to British sensibilities and the national psyche. The same inanity is reflected in the referendum itself, which offers two choices alone: ‘in’ or ‘out’. What a dumb, stupid idea.
British citizens are faced with one of the most important decisions in their lifetimes, one which will affect generations to come. The fact this is being decided from such a fundamentally binary perspective beggars belief — there is no third choice: for example, “spend the next five years planning a sound economic future and then go for that” is not on the table. Nor, as far as I can tell, is any other way of gauging national opinion beyond “make your mind up, you uncultured peasants.”
The UK population is, by the nature of this incompetently presented, badly managed and opportunistically twisted ‘decision’, being right royally done over; whatever the result, the consequences will be felt for years to come. So, what is anyone to do when faced with this situation? The answer, as we have been given no other choice, is to vote. But which way? Having read a crapload of writings from both sides of the debate, I’ve distilled out a number of factors.
The global banking collapse, and the subsequent UK government focus on ‘austerity’, is having a profound effect on our national psychology. Economies remain in recession across the world. Many banks are still struggling as organisations; but meanwhile an increasing proportion of wealth is in the hands of a decreasing number of people. Such folks care little for national boundaries, nor for the continued financial challenges of the many.
The collapse is generally accepted to be the fault of the banks and the regulators, the former who took advantage of a lack of financial oversight (including hiding risk) and the latter who failed to provide regulations to prevent this. The creation of the Euro led to a financial expansion in Europe; when the financial collapse happened, many countries in the Eurozone became vulnerable — Ireland, Greece and Spain among others.
UK immigration has increased considerably over the past 15 years, to the discomfort of many whose opinions on the matter were ignored. Until 1982 net migration (people in minus people out) was negative; non-British citizens were ‘coming in’ in the tens of thousands. In 1996 this increased to over 100,000, and in 2002 the number of non-Brits in the UK increased by 350,000, the figure staying above 250,000 per year since.
In terms of the economic future, the general consensus among economists is that Brexit would be bad for Britain, if not the rest of the world. “Acrimony and rancour surrounded debates around austerity and joining the euro, but analysis from the Bank of England to the OECD to academia has all concluded that Brexit would make us economically worse off,” says a blog from the London School of Economics.
European membership costs. Figures of £350 million per week (£50 million per day) have been confirmed as dishonest, but a figure of £23 million per day (according to fullfact.org) still sounds like a lot, even if it’s less than 50p per day per head of population. The question of whether such money could be better targeted is a good one; we generally rely on economists to help us through the detail. See above point on economics.
The political shenanigans have been hugely, astonishingly inappropriate. Only recently has it become clear that members of the Conservative party are essentially treating Brexit as an internal power play. Both sides have been fear-mongering; meanwhile other voices in the UK political debate are delivering little, through either not saying much or being seen as a sideshow by the media. The overall result is a callous misuse of the UK democratic process to ply individual agendas.
So where does this leave us? Ultimately, we are comparing apples with, well, something completely different: let’s go for elephants. Immigration is absolutely the elephant in the room, as a significant proportion of the UK population are now saying ‘enough’ and mainstream political parties (from both sides) are finally listening. But confidence in politicians is at an all-time low.
As a consequence I am not at all surprised that people are wanting to leave Europe — this may appear to be a sledgehammer to crack a nut but for many, who feel they have not been listened to for over a decade, it offers a course of action which may result in being able to control immigration better. Even if the economic consequences are awful, people feel this is a risk worth taking or, simply, have given up believing what they are told.
This dynamic has been jumped upon by a small number of opportunistic politicians, who are quite comfortable with manipulating the population into thinking there is a place where they will be better off. Do I believe the NHS will be better after Brexit? No, and nor does the Eurosceptic William Hague, nor John Major, who likened the NHS in the hands of Johnson, Gove and Duncan Smith to “a pet hamster in the presence of a hungry python.”
Which brings me directly to my conclusion: I don’t believe the people wanting power care too much about what might happen post-Brexit. The ultimate debating positions are about either dealing with known challenges, or taking a step into an ultra-high-risk unknown — with the risk including that the elephant, immigration, cannot be curtailed. However dirty the bathwater, we risk throwing out our future as a nation with it.
All due to a shitty, over-simplistic question on a shitty piece of paper. Damn right I am angry, and how I wish things could be different but they aren’t. I don’t think the really powerful care too much. US firms will move their headquarters — English-speaking Ireland is no doubt rubbing its hands together, but so will be other European centres. Mathematically and inevitably, some will shift.
The question becomes not whether things will be worse, but how bad they will be and whether it is a price worth paying. If the debate is about costs, then quite clearly any additional cost will be more expensive than necessary. If it is really about immigration, then let’s tackle it head on, rather than just saying “it’s all going to be OK, don’t worry your pretty heads about it”, which seems to have been the approach in the past.
As a deeply proud Englishman and internationalist, I want to see a strong country continuing to play on the world stage. At the moment, this is true. In the future, if we remain in Europe, it will continue to be, without a shadow of a doubt; if we leave, it might be but our future will be less certain. This is not a risk I am prepared to take, to gamble the futures of my descendants on a whim, manipulated by politicians whose main interest is to gain power at any cost.
A final shout-out to the clearly charismatic but ultimately psychopathic Boris Johnson. Boris is many things, but he is above all a historian, and this is his one big chance to make a splash. He will stop — no, he is stopping at nothing to achieve it. And I will not be his puppet.
October 2016
This section contains posts from October 2016.
My first ever information management strategy
2016-10-28
I have to admit I was quite excited to have been asked. After all, it was about a topic that sounded very grand, and it had the word ‘Strategy’ in it. It was 1996 and I was 30 years old — enough technical experience to have a stab at most things without breaking something — but not yet versed in the ethereal world of high-level management consulting. The other, frankly older, consultants exhibited such calm, such… gravitas. And now was my chance to take a seat at the top table, to listen, perchance to dine with some of the smartest people I was likely to meet.
And so, I laboured. I interviewed everyone I could find. I filled a wall with sticky notes, I drew mind maps. I held brainstorming sessions and read everything I could find about the topic. I drafted a document, I went through the company standard first, second and third level review. The higher thinkers read what I had written, and they found it good. I was overwhelmed with joy, my proudest career moment since that meeting in Berlin when I had held the room in the palm of my hand.
Not long after, I left the company, so I never found out if it was implemented. I doubt it — shortly after that, the firm merged with another, and was acquired by a third. No doubt my efforts were lost in the noise; after all, it was custom-built for the original company.
Or was it, indeed, custom built? A few years later I was reminiscing about my great achievement. I had defined an information architecture, with meta-data enabling the company’s information assets to be structured and organised. I had defined an indexing mechanism, ensuring that any such assets could be catalogued and found. And I had defined a management process, with the roles and responsibilities required to keep everything ticking over.
In other words, I had ‘invented’… the library, as used from the present day back to the ancient Egyptians and no doubt beyond. As this realisation dawned I felt initially horrified, then a strange peace descended. What lessons? That despite ending up with what might be seen as an ‘architectural pattern’, it was important to have worked through the process.
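For what it’s worth, the ‘library’ pattern described above can be sketched in a few lines. All names here are illustrative inventions, not taken from the original strategy document: structured metadata on each asset, an index for retrieval, and a cataloguing step owned by a defined role.

```python
# A loose sketch of the 'library' pattern: metadata enabling assets to be
# organised, an index so any asset can be catalogued and found, and a role
# (the librarian) responsible for keeping it ticking over. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Asset:
    title: str
    subject: str   # metadata: how the asset is classified
    owner: str     # a role from the management process

@dataclass
class Catalogue:
    by_subject: dict = field(default_factory=dict)  # the index

    def add(self, asset: Asset) -> None:
        """Catalogue an asset under its subject so it can later be found."""
        self.by_subject.setdefault(asset.subject, []).append(asset)

    def find(self, subject: str) -> list:
        """Look an asset up via the index rather than scanning everything."""
        return self.by_subject.get(subject, [])

lib = Catalogue()
lib.add(Asset("Q3 forecast", "finance", "librarian"))
lib.add(Asset("Network diagram", "infrastructure", "librarian"))
```

The same three ingredients — classification metadata, an index, and a custodial role — are exactly what a card catalogue provides, which is rather the point of the story.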
Oh, and that sometimes there are no lessons, only experience.
November 2016
This section contains posts from November 2016.
On “monetisation of data” and other phrases that should be banned
2016-11-04
I’ve been thinking a lot about monetisation of data. Horrible word, I know, but it keeps appearing in corporate circles, generally in the context of what companies want to achieve with all that data they are producing, or which is now available to them from external sources — social media and so on. At a conference I attended recently, the conclusion frequently reached was that monetisation of data — that is, making money from it — is becoming a high-priority goal for any business.
Why ‘monetisation’? Is it just that it is a nicer word than ‘profiteering’? Or possibly because it sounds slightly like the ultimate goal presented by Maslow’s pyramid of needs, ‘self-actualisation’? Although the former deserves scrutiny (which will be the subject for another day), I’m mostly with the latter. The fact is that organisations know they can do better if they use information better, and doing better primarily means either making more or losing less money.
So, yes, monetisation it is. For some industries, turning data into so-called ‘business value’ is nothing new — anything to do with finance, for a start. Accounting, for all its complexity, is ultimately about managing tables of figures and their relationships; banks are purveyors of mathematical transactions. Healthcare, engineering and other scientific disciplines also have a substantial data element. Retail supply chains have long been driven by data, as are manufacturing systems and utilities plants.
What’s changed the game for all companies (which is why all businesses are ‘going digital’) is that customer-related data has engulfed business strategy, at the same time as all other data sources have proliferated. Marketing used to be a relatively isolated set of activities, feeding the other parts of the business on a regular basis with information and potentially sales leads. Today, however, such information has become incredibly accessible, or so it appears, with customer expectations changing equally quickly.
And meanwhile, in principle at least, you can now know exactly how a business is functioning to the n’th degree, second by second. Through a myriad of sensors, via a multitude of sources, you can check everything from the humidity levels in a container crossing the ocean to the stress levels on a paving slab. Nobody really knows how much data is enough, so the response can be to keep adding to the pile of sensors. From one client I have heard of a tendency to plan the right number of sensors in advance, but such discipline remains emerging best practice.
The result of all this data, both external and internal, is that many companies have ended up paralysed. Even senior corporate strategists flounder like dwarves in Smaug’s cave, arms already laden with the gold nuggets of information they see before them but unable to do much more than grab handfuls of it at a time. The opportunity is tantalising; meanwhile other, slicker organisations simply wade in with buckets and take what they want. What an opportunity, what audacity!
Is it any wonder, then, that consulting firms and computer companies are promoting solutions to this challenge? Not that, in honesty, they really have a solution to the challenge as a whole. For all the machine learning and information management frameworks, the open APIs, agile delivery and DevOps strategies, nobody has a magic sieve that can separate useful data from the shrapnel. Instead, such advice usually turns to scoping — what beneficial goal are you trying to achieve, and what specific information do you need to do so?
Monetisation is therefore a shorthand term for not wanting the opposite, i.e. not wanting to remain in a state of uncertainty where everyone and their dog seem to be having an easier time of making sense of it all, and potentially cashing in, than you are. The valuations of some of the digitally enabled startups may be vastly inflated, but who wouldn’t want some of that? Find me a company that says, “No, sir, please don’t give me that vastly inflated valuation.” But just as not all singers are destined to be pop stars, the majority of companies need to set more realistic goals.
There’s a further irony. Money is itself just a form of data, a (frequently poor) measure of value of anything, derived to simplify exchanges of goods and services. The fact we have worked out how to exchange the measure itself (as illustrated by currency) is an indicator of how complicated this can get. In other words, even if an organisation successfully achieves ‘monetisation’, it will have managed to convert a partial representation of reality into an arbitrary estimate of what that might be worth.
If that sounds vague, it’s because it is. The real trouble with the monetisation of data isn’t that the idea is grubby. It’s that it takes companies away from the things they have been able to understand, their products and services, and heads them en masse towards a cave where all that glitters is not gold. It may well be a necessary step for companies to consider how they can get more value out of their data, just as they are looking at services (another horrible word — ‘serviceisation’). But in doing so they should be careful not to lose touch with what they were setting out to do in the first place.
How to be smart in a post-truth world
2016-11-16
What the heck just happened? Events over recent months in the UK and USA, two nations divided by a common language, have led to a swathe of commentary expressing disbelief and anger alongside the jubilation and enactment. “They were all stupid,” says one side; “Stop moaning and get over it, losers,” says the other. No doubt the name calling will continue.
But what does any of this mean for an industry whose central purpose is to collate, process and deliver information? In the midst of it all lies a common theme, that of seeing data and facts as only one element of the debate, and an apparently low-priority element to boot. “People have had enough of experts,” said Brexit campaigner Michael Gove MP, even as his side laid out its own, fact-light agenda.
“Let’s give £350 million to the National Health Service,” they said, a promise Brexiters had no intention of keeping. They couldn’t: it wasn’t ever up to them, even if such an idea were possible. Remainers had their own “Project Fear” meanwhile: a string of catastrophic consequences that would inevitably occur should the vote go against them. Many won’t, to the quiet delight of everybody.
As it turns out, neither ‘facts’ nor ‘promises’ mattered in Brexit; nor did they have much of a place in the Presidential election. Voters really didn’t appear to care about the potential negative consequences, about historical misdemeanours, about promises that could never be kept (cf. the suggestion that some parts of the wall with Mexico might become a fence: “I’m very good at this, it’s called construction,” said Trump).
It turns out that Michael Gove was right. People really had had enough of experts to the extent that they would, in both cases, appear to vote for the unknown rather than the known. “I think people just wanted to see what would happen,” a taxi driver said to me a few days ago. I’m not sure he is right; rather, I believe that we, as a species, have proved ourselves unable to appreciate the much bigger picture of what has been going on.
Over time we will unpick the reasons why people voted one way or another, and perhaps arrive at some conclusions: the data about voting attitudes, as well as reviewing historical factors through past decades, will no doubt reveal some truths. But if there is anything we can take from the current situation, it is that our current analytical abilities cannot necessarily reveal future behaviours.
Addressing this deeper truth is of fundamental importance. If I had to put my money anywhere, it would be on the idea that the models we use have completely failed to grasp the geo-political and psychological complexity of the situation. Condemning voters of either side as ‘stupid’ is symptomatic of this failure: they must be stupid, because they have done the inexplicable, right? Wrong: it is our ability to explain what is going on that is lacking, not ‘their’ decision-making skills.
If we did understand such things at a deeper level, we might be able to see more clearly the causes of current voter behaviours and, indeed, do something about them. It may be, for example, that the seeds of recent events were sown back in the ’80s and ’90s, way before social media (which is taking the brunt of criticism) was a ‘thing’. Even armed with such an understanding, however, we might still struggle to predict the unexpected happening again.
Why? Due to the very characteristics that make us human in the first place. We are quick to jump to conclusions; we have agendas; we prefer to act on less information rather than waiting for a complete picture, particularly if it might go against what we want to do. We hunger for control, we often act in ways against our longer-term interests. And, frequently, we seek to justify our actions and positions using data that fits with our views, ignoring all that does not.
We know all of this. Lies, damn lies and statistics, we say, as if the data is the problem, rather than our propensity to interpret it selectively. The pollsters got it so, so wrong, yet still we use them. And while they are global and virtual, the echo chambers we inhabit today are no different to the past. So we share information that reflects our views, suppressing the “clearly biased” views of others. It is ironic that we even have a very human notion — of irony — to explain this phenomenon.
Meanwhile however, we continue to build information systems as if data holds some hallowed, incorruptible place in our lives. It doesn’t: we only have to look at how the oh-so-open Twitter has been castigated for harbouring trolls, or Facebook’s fake news issues, to see how vulnerable data can be to human behaviour. The models we build into systems design are equally subject to bias; the architectures assume people will be good first, and then are patched in response to the rediscovery that they are not.
Right now, we are seeing a renewed wave of interest in Artificial Intelligence, the latest attempt to create algorithms that might unlock the secrets contained in the mountains of data we create. Such algorithms will not deliver on their promise, not while they are controlled by human beings whose desires to be right are so strong they are prepared to ignore even the most self-evident of facts. And that means all of us. A failure to understand this will continue us on a path to inadequate understanding and denial of the real truth that lies beneath.
December 2016
This section contains posts from December 2016.
Dave and Dave, and Dave, and Alexis, and other tales of AWS serendipity
2016-12-08
#cloudbeers is an impromptu gathering of people that spend more time than they would necessarily like at tech conferences; and have an appreciation for the outputs of the fermentation process. There’s a Facebook group and the occasional tweet, usually along the lines of “At . Anyone fancy a beer?” You get the picture.
“At AWS re:Invent. Anyone fancy a beer?” asked a guy called Dave on the #cloudbeers Facebook group last week. I’d never met Dave but I was at AWS and, indeed, I fancied a beer, so I responded. I said I wouldn’t be available until later because I had some meetings — I actually wanted to walk the expo floor, get a few demos in, pick up a t-shirt perhaps. Which was fine; we arranged to meet at 7.
So off I went, becoming just another of the 30,000-strong crowd of delegates, partners and staff attending the event. Easy to get lost in the throng, lose track of time. I went to a few stalls, eventually alighting on Intel’s partner stand where I could see a number of demos (and get a free notepad). One was a presentation by a guy called Dave. I couldn’t spend that much time; after all, I had plans for the evening.
So I thanked him, headed upstairs to change and then to the bar. To say it was busy would be an understatement — the various watering holes were holding either sponsored events or a ‘free beer’ pub crawl which actually meant hour-long queues. I sent a message to Dave on Facebook suggesting another venue and… surely not… but that face looks awfully familiar…
Ten minutes later we’d met, and yes, it was the same Dave. I’d already bumped into old friends and colleagues but this was something else. We laughed heartily, drank beer and wondered just how likely it was that I told Dave I couldn’t see him later because I was meeting the same guy, who I then had to leave because I was meeting the same guy.
A couple of days later, as all good things come to an end, I checked out of the hotel and had my boarding pass printed. To my surprise the flight was earlier than I remembered — 3pm rather than the evening — so I rushed to a meeting and then headed straight to the nearest taxi rank, at the back of the Venetian conference centre. Nobody waiting ahead, so I headed for the first taxi.
As I waved, I saw a guy arrive behind me. “Want to share?” I asked. “Sure,” he said, in an English accent. As we got in the cab, I turned to him and my eyes narrowed. “Haven’t we met?” I asked. Then I realised where. I had hosted a panel of retail IT execs for Rackspace a while back, and this person was one of the panellists. His name was Dave.
So of course we chatted, and when we arrived at the airport we headed through security and sat together in the main area, as neither of us flew often enough to have lounge access. As we sat, I saw my old friend James, who certainly flew enough to qualify, walk through with a couple of peers. I didn’t clock who they were as I rushed up to James to check if Dave and I could be his, or indeed their, guests.
“Sure,” said James laconically, after checking with his co-travellers. I rushed back to get Dave, and it was only as I returned and caught my breath that I looked at who they were. “Haven’t we met?” I asked one. Then I realised where. I had hosted a panel of open source execs for Rackspace a while back, and this person was one of the panellists. His name wasn’t Dave, but it was Alexis.
There we have it. In the space of 3 days I had blown out Dave to meet Dave, then met Dave, then met Alexis who I had originally met in the same way I had met Dave. All in what is already the strangest place on the planet. Absolute coincidence, of course. And I remain completely rational about the whole thing.
2017
This section contains posts from 2017.
October 2017
This section contains posts from October 2017.
A New Page
2017-10-25
So, what’s all this about? The reason for this Facebook page is, quite simply, that I am finding it increasingly difficult to know where to put stuff about what I’m doing, particularly given that it will not be of interest to the majority at wherever it ends up. While it feels like a vanity page, at the same time, it’s seriously helping me get my stuff together.
By way of illustration, here’s a few of the bigger projects I have on at the moment.
First a musical, called Super Awesome, which is a rags-to-armageddon cross between Britannia Hospital, Silicon Valley and Glee. I’ve written the lyrics (with an idea about tunes), I’m now writing it through again in iambic hexameter for the sheer heck of it. Then I’ll flesh out the music, possibly in collaboration as I always seem to write the same tune. Once that’s all done, well, I don’t know, but we’ll have a musical score.
A technology book, currently called Smart Shift, which traces the history of computing from, well, a jolly long time ago to the present day and looks at the impact it has on our society and culture. I actually finished writing it last year, just after the Panama Papers but before Fake News and Trump, all of which are symptoms of this techno-rapture we now inhabit. I’m getting some help pulling the final pieces together, after which I will either find a publisher or self-publish some time in 2018.
The Devil’s Violinist, a novel about Paganini who (I maintain) was one of Europe’s first rock musicians. His heyday was 1820-1830, occupying a post-Napoleonic Europe that was for the first time relatively free of banditry and the related hazards of travel. Suddenly artists could visit far-flung cities, as could their entourages and hangers-on, tour managers and supporting acts. Paganini had an English secretary who wrote a short book about his experiences and wished he could write more… this is currently at 60,000 words, third draft, there will be a few more before it is anywhere near ready but it’s a great set of tales.
And finally, I’m currently putting all available time I have into a… actually, I’m not going to tell you but it is going to be epic, and bigger than anything I’ve tackled before. It’s fiction. Watch This Space. Or indeed, watch this space, all will unfold.
Plenty to keep anyone busy, alongside various short stories, poems, lyrics and songs (I’ve been trying to do one a day for the past few weeks), podcasts and other attempts at getting stuff out there. I’ll be talking about all of these things as and when they appear, I very much value your feedback (good or bad) as otherwise I’m a tree in a forest, not knowing which way to fall.
So thank you, brave people, for liking my page (all 29 of you at the time of counting). I hope I will not disappoint, and do let me know what you think.
Ripples in the Pool
2017-10-26
That pack of cigarettes
Ripples in the pool
A stone, wrapped with regrets
Ripples in the pool
Proving who is the best
Ripples in the pool
Memories laid to rest
Ripples in the pool
Reeling in the cast
Ripples in the pool
Breaking with the past
Ripples in the pool
Escalator Stories
2017-10-31
I go to London about once a week, travelling from a rural reality to this strange, incredibly vibrant environment (I know, for most, that will be the norm and I am the exception). After a slow build on the train, climbing onto the platform is like stepping into a scene. People coming from every direction, going about their business with brows furrowed in determination. It's too easy to become part of the flow, but I often wonder what's going on in the lives of each person I pass. Chances are that each will have their fair share of trauma. But on we all go.
Her mother’s dying, not long now
They’ve kept it from the kids
But worrying won’t pay the bills
An escalator story
They got him on the way from school
Threw his bags in a ditch
He cried but couldn’t tell his mum
An escalator story
Three bottles empty, on the shelf
How did it come to this
What would they do if they found out
An escalator story
Drugs can’t disguise the chronic pain
Getting up is the worst
Next appointment, months away
An escalator story
November 2017
This section contains posts from November 2017.
There's a Spider in the Kitchen
2017-11-15
There's a spider in the kitchen
I don't know what to do
There's a spider in the kitchen
And a mantis in the loo
There's a scorpion in the bedroom
It gave me quite a scare
There's a scorpion in the bedroom
And crickets everywhere
There's stick insects in the cupboard
But no apology
Because my Dad is into
Entomology
A New Start
2017-11-22
Take me to the lands beyond the dark
Where sunlight burns the shadows from the earth
Take me through the slumbers of your soul
Where dreamscapes rise and crumbling mountains fall
Paint me places where there is no pain
Where neither sadness nor confusion reign
Where time has slowed
And only clear blue skies remain
Of storms long past and waters flowed
Pull me from a clear and tranquil sea
The waters warm and buoyed, with salt
Release me from the flotsam in my heart
Into a gentle nothing, a new start
2018
This section contains posts from 2018.
January 2018
This section contains posts from January 2018.
January 5 2018. The Yin and Yang of Innovation and Governance
2018-01-05
These are interesting times – whatever your political affiliations or wherever in the world you might be. In this context, technology is a two-edged sword — it holds both great promise and enormous risk. We can choose to be evangelists or doom-mongers, or we can simply recognise this dichotomy: for every healthcare breakthrough, there will be fake news, and so on.
It was probably ever thus — one can imagine the dawn of the iron age, when somebody chose to make a sword even as somebody else made a ploughshare. With each breakthrough comes a breakdown, an opportunity to exploit as well as enhance, and yet somehow we are still here; I remain optimistic that humanity as a whole will prevail, whatever the short-term challenges.
We don’t always make it easy for ourselves. Older companies struggle with innovation for a thousand reasons, leaving gaps for others to fill to sometimes dramatic effect. And meanwhile, our legal systems remain behind the curve, their multi-year, consensus-driven models rendered hopelessly inadequate by the pace of change. And technology is so complex, it can raise unexpected and massive challenges (such as the latest Meltdown and Spectre security flaws in computer chips).
To wit, this bulletin. As I write this, I am reminded of Alistair Cooke’s Letters From America, a weekly news broadcast which ran from 1946 to 2004. Cooke was always the observer, his role to enlighten. I stand more chance of achieving the latter than I do of matching his longevity: he died at 96, but I would be 110 by the time I finished if I kept going that long. I can only hope medical science has a few tricks up its sleeve.
So, what’s news?
2018 Predictions
I wish I’d thought of Kai Stinchcombe’s tagline on Medium, “I’m whatever the opposite of a futurist is.” I recently documented my top 5 2018 predictions as follows:
1. GDPR will be a costly, inadequate mess. No doubt GDPR will one day be achieved, but the fact is that it is already out of date. For one, simple reason: we will consent to have our privacy even more eroded than it already is. Watch this space.
2. Artificial Intelligence will create silos of smartness. Integration work will keep us busy for the next year or so, even as learning systems evolve. c.f. This piece on the Register: Skynet it ain’t: Deep learning will not evolve into true AI, says boffin.
3. 5G will become just another expectation. But its physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.
4. Attitudes to autonomous vehicles will normalize. Attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right?
5. When Bitcoins collapse, blockchains will pervade. The principle can apply wherever the risk of fraud could also exist, which is just about everywhere. But this will take time.
6. The world will keep on turning. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.” Okay, this last one isn’t really a prediction, more an observable fact.
DevOps Automation report
Let’s be clear, it was always about what’s currently being labelled DevOps: if you can do things faster, test them and get them into production quicker, you can find out what you really need and move on. This shouldn’t be rocket science but it is very hard for us humans to get our brains around. In this article I cite Barry Boehm, originator of the spiral model — I was surprised to find that it emerged in the ’80s, not the ’70s, but no doubt prototyping approaches have existed since the invention of the wheel.
Why do (in-expert) organisations think they are secure?
This is the first in a series of “unanswered questions” — you know, the ones that nag at you but never really get tackled. In this case it was from security expert Ian Murphy — “Why do companies with little or no real security experience think they know their environment better than anyone else?” I welcome any additional questions you may have.
Extra-curricular
In other news, over the break I was involved in a Christmas single which is raising money for mental health charities (I’m also in the video); I have a weekly podcast with my mate Simon; and alongside my writing, I have fallen madly in love with the piano, so I have set two challenges for 2018: to finish a novel and to play Widor’s Toccata on the biggest church organ I can find. I’ve started a video blog on the latter if you want to follow my progress.
January 12 2018. Digital transformation - when specificity becomes too specific
2018-01-12
First off, thank you to all who have engaged in conversation since I started sending out my hand-carved newsletters. I have had some long chats and big reads on GDPR and Blockchain in particular, on both of which I shall be following up, as well as feedback on layout and so on. On the specific point of GDPR, consent, opt-in and so on with regard to this very newsletter, I believe I am in good shape, given that it is only going out to people such as you, with whom I have an ongoing dialogue or have done business in the past. As ever, if you don't want to receive this informational bulletin (I will never sell you anything), please let me know or click on the unsubscribe link at the bottom of this email.
Meanwhile, I’ve been thinking, and writing, about digital transformation. It’s not that I think it is all bunk, but rather, as the Irish adage goes, “If you want to get there, don’t start from here.” A little definition can go a long way, but sometimes it can get in the way — we’ve all been in meetings (if you haven’t, you are the lucky one) where more time is spent trying to define some term, than actually getting on with making things happen. Case in point is digital transformation, which seems to spawn more discussion than any ‘technology trend’ of recent times. This does beg the question of whether something can really be a trend, if nobody can agree what it is… but that’s for another day.
As I send this out, I wonder if I should be commenting on the Meltdown and Spectre security flaws. I'm not sure I can add much to what has already been said: people are patching their systems like crazy; you should update your mobile device when requested; and everything based on Intel chips will run slower for a while, until they are replaced or someone comes up with some snazzy firmware tweak (which will mean more patching). Otherwise, the world will continue to turn.
If you’re looking to ‘do’ digital transformation, read this first
Meanwhile, I’m not sure there’s any such thing as digital transformation - as in, you can't just walk into WalMart and buy it; neither is it an architecture, nor an approach, nor even a philosophy. However, it's certainly got people talking. I set out my reasons why it isn't a thing here: in summary, terminology matters not a jot but the propensity to change is fundamental:
1. It’s all about the data — the term is just an ill-considered response to what we knew anyway, that we are in the information age.
2. Technology is enabling us to do new things — to continue the Sherlock-level insight, this really is enabling breakthroughs. Who knew?
3. We tend to do the easy or cheap stuff — trouble is, these breakthroughs happen just as often because we are lazy, as driven.
4. Nobody knows what the next big thing will be — this is where the varnish starts to peel. Won’t we just have to ‘transform’ again?
5. That we are not yet “there”, nor will we ever be — which is enough to lead any strategist to a breakdown. This gig will never be done.
6. Responsiveness is the answer, however you package it — so our focus should be on ability to change. Common sense perhaps, but it isn’t happening.
On the upside, there’ll still be plenty of jobs
A good example of the digital hype, and in particular point 4 above, is how we’re all going to be out of jobs (yes, everyone, from manual workers to lawyers, according to the University of Oxford). Here’s a summary of 10 reasons why nobody should worry about whether they will have something to do in the years to come:
- Because decisions are more than insights.
- Because we have hair, nails and teeth.
- Because we ascribe value to human interaction and care.
- Because we love craft.
- Because we value each other and the services we offer.
- Because we are smart enough to think of new things to do.
- Because complexity continues to beat computing.
- Because experience and expertise count.
- Because we see value in the value-add.
- Because the new world needs new skills.
The bottom line is that even as we automate certain manual activities, we lose neither the desire, nor the propensity for work (or indeed, value exchange between us). We have evolved such that we see work as necessary: we derive satisfaction from doing it ourselves, and sharing the fruits of our labours with others. Will jobs change? Well, yes, but how does this differ from the past 50 years?
Oh and finally, don’t even start me off on monetisation.
Extra-curricular
In other news, I've been getting this MailChimp thing up and running - any feedback welcome. I don't recommend anyone looks at the latest piano vlogs (they are painful) but they are a point in time which I hope to move beyond soon! I've been writing a bit of poetry, largely as a way of getting the creative juices going first thing in the morning - you can check the latest on my Facebook page.
Thank you to all my subscribers. Any questions or feedback, let me know.
Until next time, Jon
February 2018
This section contains posts from February 2018.
Here it comes
2018-02-27
Here it comes:
The unexpected
Burst of inspiration
Ideas tumble in, tumbling
Like waves crashing
On familiar shores
You drown in sheer joy
Of understanding
Here it comes:
The unexpected
Moment of destruction
Cold circumstances bring
Reality collapsing
Your house of cards
You feel the crushing weight
Of understanding
Here it comes:
The unexpected
Point of revelation
Randomly preserved
Memorised connections
Showing, not telling
The unexpected clarity
Of understanding
2019
This section contains posts from 2019.
March 2019
This section contains posts from March 2019.
Putting on the oxygen mask
2019-03-05
Nobody can ever know what it is that hits someone, that causes a change in their perspective or behaviour. It could be something which appears significant, such as a friend who was involved in a rail crash, or a motorbike accident; or it could be something that appears relatively trivial. It doesn’t matter, beyond the fact that it has taken place.
We are all weak, vulnerable, messed up. From an early age we learn coping strategies, we get on with life as being better than the alternative, but in the knowledge that it isn’t quite right; we laugh and joke, and have moments of joy and peace even as we struggle to make sense of it all. And then, at a moment in time, we decide, no. Something gives up inside us, is no longer able to keep up the appearance.
At the same time, stress. It’s impossible to know what the number is, of thoughts we can keep in our heads at any moment in time. As a race, we’re pretty good at processing information; we also have a (bad) habit of seeing ourselves as invulnerable, even as we take on more and more. We fill ourselves up, swimming in a tank of our own making, squeezing out oxygen until we leave ourselves only an inch or two of breathing space.
And still, we cope… until the point hits when we need far more headspace than we have allowed ourselves. Suddenly, and usually through some unexpected, external event, we go from a semblance of normality to a situation where we are gasping for breath, desperate. We choke, we become addled, we kick out in frustration and fear. Why is this happening to me? “Why me?” stops being a question, becomes a mantra.
The new situation exhibits itself differently for different people. Some get depressed, locked into their own trauma; some get angry, unable to control themselves even long after the situation has abated or gone away; some consider the option of taking themselves out of the situation, permanently. All are vulnerable, weak, as they do not have the space to process what amounts to all of life, which means they can react to even the smallest of triggers.
What’s the answer? Acceptance, ultimately, that we are not superhuman, that we have frailties that only we can deal with, that we deserve our own attention, that we have something to offer, that it all makes sense. That none of it matters, but all of it is important. And time, time to understand, to work through what may be long-standing issues. And, yes, change.
Not only does the answer often look very different to the expectation, but also, we need to create space if we are going to find it. Which means taking responsibility to stop, to fall back, to let the tank drain, to breathe clean air. To accept that each individual must first (to switch analogies) put on their own oxygen mask, if they are to help others.
But more than this. Crisis may be a problem with no time left to solve it: it may have been building up for many years, lurking, being put off. At the same time, it is an opportunity: if we, as humans, can only stop when we have no choice but, then the fact we have been forcibly stopped is a gift.
The present may feel bleak, but so does a field in winter, when all has died. The field doesn’t matter; more important is the first, tender shoot of new growth, then the second, each of which extends naturally towards the light.
We can try to be selfless, we can feel our problems are not important enough, that we will still be able to employ coping strategies just like we used to. The first step is to recognise that the moment has passed, and then, if that is the case, to make a decision: whether we are important enough to put first, not in an indulgent way but because, ultimately, we are all we have.
This journey is unique to everyone, but the pattern is not. From the moment of crisis, people choose to continue as best they can, for as long as they can, or they choose to tackle it head-on. Many never reach the point of decision (which is tragic), and many choose not to (equally tragic). Some, perhaps a minority, decide, or find themselves with no option but to scrape off layer after painful layer before they can be themselves again.
April 2019
This section contains posts from April 2019.
Hey LinkedIn, bear with me on this...
2019-04-10
...I'm shifting to a new host, re-installing Wordpress and trying a bunch of new stuff. Including an automated LinkedIn connector. If this works, I'll eat a very small hat. Made of rice paper.
Update: to be fair, that appears to have worked.
October 2019
This section contains posts from October 2019.
Travel Forward 2019: Let's do this
2019-10-23
You know that thing when you realise there's under two weeks to go? I'm reviewing the final PDFs of the Travel Forward conference agenda right now and once again I'm staggered to think how it has gone from the aspirational, yet largely empty canvas of six months ago, to the packed, exciting and dynamic programme we now have.
As I've been briefing speakers, the message has been simple: senior technology decision makers from across the travel industry will be coming to days one and two of the conference... but what happens once they have gone home, slept, woken and arrived back in their workplaces on day three?
Our goal is not only to inspire but to educate, with practical steps that enable attendees to take their businesses forward (the clue's in the name). I say "our" - I've been lucky enough to work for, and with, some really smart people to pull this programme together.
So, team, speakers and attendees, let's do this - let's make Travel Forward 2019 a conference to remember, where preconceptions are left at the door and where hopes and dreams are replaced by practical and actionable steps towards genuine, technology-powered opportunity.
How not to run a sub-four marathon
2019-10-23
Here's my seven "laws". I'm no athlete but I got there in the end :)
1. Stick to the plan
2. You are what you eat
3. Lateral (muscle) thinking
4. Distance makes the heart…
5. Get the science
6. Enjoy it!
7. Do all the things
To be expanded. With dinosaurs.
Updating site
2019-10-23
Nobody cares but me. Call this a stake in the ground.
How I Write Reports
2019-10-23
Dystopian Dreams
2019-10-24
Retrospective thoughts on Smart Shift
2019-10-25
Smart Shift, a book about the impact of technology on society, is now published online. Here's my thoughts on its multi-year gestation.
About seven years ago, I decided to write about everything I thought I'd learned, on the impact of technology on society as a whole. Having been down in the weeds of infrastructure (either as a job, or as an analyst), I wanted to express myself, to let some ideas free that had been buzzing in my head for some time. I know, I thought, why not write it as a book. That'll be simple.
I already had some form, concerning the notion of getting into print. Biographies of a couple of popular bands, a technology-related book and various mini-publications gave me experience, some contacts and, I believed, an approach which was, one way or another, going to work.
Fast forward a few years and many lessons, and we have a book. While I took advice and had interest at beginning, middle and end, while I worked through the process of proposals, of creating a narrative that fitted both what people wanted to read and how they wanted to read it, of having reviews and honing the result, it was never published.
And, perhaps, it was never going to be, nor was it supposed to be, for reasons I didn't fully understand. The first, so wonderfully exposed recently by screenwriter Christopher McQuarrie, is the lottery nature of many areas of the arts: writing, film and music.
The crucial point is that the lottery is symptom, not cause: a mathematically inevitable consequence of the imbalance between a gloriously rich seam of talent-infused material, and a set of corporate channels that have limited bandwidth, flexibility and indeed, creativity, all of which is navigating a distracting ocean of flotsam and jetsam. While the background is open to debate, the consequences are the same: just "doing the thing" right doesn't inevitably lead to what the industry defines as success.
Much to unpick: a different thread, of course, could be that my own book is either flotsam or jetsam. A better line of thinking still, is to recognise a number of factors that are spawned from the above, not least, what is it all for?
Before answering this broader question (broadest of all questions?) it's worth pointing out the nature of this particular beast. Let me put it this way: any treatise that starts with the notion that things are changing (e.g. anything about technology) is signing its own best-before warrant. The window of opportunity, and therefore one's ability to deliver, is constrained by the period one is covering, and the rate of change therein.
In other words, over the period of writing, I was always out of date. No sooner had I written one thing than the facts, the data points, the anecdotes started to wilt, to wither on the vine I had created for them. It isn't by accident that I ended up delving into the history of tech, as I had already captured several zeitgeists only to see them die and desiccate before my eyes.
On the upside, I now have a book which could (still) be revised: each chapter is structured on the principle of starting with something old, and using that as a foundation to describe the new. Canny, eh?
Returning to "what is it for", one point spawns from this: there's a place for history in the now. I know, that's not blindingly insightful, but the link between the two is often shunned in technological circles, which prefer to major on revolutions rather than deeper-rooted truths.
Meanwhile, and speaking of the now, one needs to accept the singular consequence of both lottery culture and rapid change, simply put: if you're a technologist, the chances of getting your message out there in book form are minuscule, if you rely on a relatively slow-moving industry. Which very much begs the question, what is the point? If the answer is to be published, then you may be asking the wrong question but, as Christopher intimates, good luck to you.
At this point, I'd like to bring in another lesson from my experiences with singing in a band, or in particular, what happens when only a handful of people shows up. It happens, but it doesn't have to be a disaster: what I have learned is, if one person in the room is enjoying themselves, they become the audience. It's humbling, uplifting and incredibly freeing to give just one or two people a great time through music.
Put everything together and the most significant lesson from Smart Shift is this: my job, and my passion is to capture, then share an understanding. The job, then, is to balance reach with timing: better that a handful of people get something at the moment that it matters, than a thousand receive old news.
The bottom line is just do it, get it out there. Grow your audience by all means, build a list of people who want to hear what you have to say, and have something to give back in response. But start with the right ones, with the person at the back of the room that claps along. Not because of any narcissistic ideal but because, if the job is to communicate, an active audience of one is infinitely more powerful than not being heard at all.
Afterthoughts
2019-10-31
And so I’m dead.
By now I’ll know
If after-life has aught to show
Or whether it is nothing more
Than biochemical remains
That you’re consigning down below.
Yes, I’m dead, that’s doubtless true
But nothing else has gone from view
No other lives turned on their heads
Not here, at least.
Which matters more than who’s deceased.
In life I had one goal, to fill it…
Actually, two, if I could will it
Fulfil the many things I could
While being with the ones I loved.
Which all along, was my endeavour.
So never think I didn’t once
Appreciate the smallest moment
Spent alone or with another.
There’s so much that we could have covered
Had I not shuffled from this place…
…I’m dead. But:
The time we had, it was enough
Heady and inspiring stuff, but
Infinitesimally probable
Consequences of events
From Big Bang to birth’s miracle
Godly or godless, heaven-sent.
So could I feel, and if I had
The wherewithal to reveal
A single thought
It would be naught but gratitude
To this, to you
That have imbued
My life as-was.
And if you could, perhaps you ought
Remember that your time is short
Be bold, fear nothing but the thought
That life, a gift that comes by chance
Is only ever given once.
We go to the end
2019-10-31
We go to the end, together
We stare ahead, toward the void
We stand on the edge of forever
We feel the peace
Where once was noise

We open ourselves to the embrace
Of silence
Hand in hand, with every pace
A step into the unknown.
A moment taking us beyond
What once was, what could ever be

We go to the end, together
We cross the threshold, where beyond
No time, nor space can find us
For a moment, just a moment
We pause
Then we are gone
November 2019
This section contains posts from November 2019.
The Silences
2019-11-01
It's the silences that get to me
The empty spaces in between
The otherwise continuous stream
Of noisy, gung-ho positivity
And unveiled anger, bordering on
The vitriolic

It's the silences that show
The hollow truth behind what we know
To be no more than a protective facade
In this dialectic war, any words
Will serve as ammunition

But then we falter, attempted misdirection,
Distraction, ultimately unsatisfying descents
Into whatabouttery lead only to a realisation
That the barrel is empty, the battle is lost...

...At least, this time, as we emerge
Once again forthrightly on the front foot
Confident of a position that can once again
Ignore, avoid the distraction of either facts
Or purpose.
Snippet
2019-11-07
An hour before sunrise. The first, dull half-light of the new day gave silhouettes their dim outlines, pitch against grey slate. A light breeze, chill-edged by the cold cloudless night, was beginning to disperse the rising marsh-mists of morning, picking up a leaf here and there as it cut through the copse.
Time is a mirror
2019-11-22
Time is a mirror
Reflecting past and future
Testing, testing what we know
Days are a river
Running, fore and after
Moving, moving with the flow
Today's another day
Tomorrow will have things to say
The future's going to happen
Anyway
Nights will last forever
On the edge of never
Depths of silence reclaim what we owe
Today's another day
Tomorrow will have things to say
The future's going to happen
Anyway
December 2019
This section contains posts from December 2019.
Prologue
2019-12-09
His first, jolting scream of horror was followed by a second, then another, each overlapping the last as they came, layer on layer, jabbing like mosquitoes at a street lamp, flashing colours spinning faster and faster like a bright-painted fairground ride. His defence from each deadly wave too quickly became a feeble hands-on-head protection of his inner self, inner soul against the hooligan terror. Deep down he knew what was happening (hadn't he said this was inevitable?), even now sadly weighing up his glib prediction against the true terror of this onslaught, even through the pain as it stabbed and pecked at the fragile, fraying cord still supporting his thoughts and mind and sanity and oh God the pain ... and he felt his grip weaken, and he felt the cord fray and break, and he gave a last scream as he fell into the chaos, as his mind, as his consciousness slipped down to drown in the still, black depths beneath an insane sea.
2020
This section contains posts from 2020.
February 2020
This section contains posts from February 2020.
Write about what you know, they said
2020-02-26
Write about what you know, they said. What, I thought - my childhood? That continuum of non-descript non-events that somehow went to define me? Of solitary walks to school, collecting Smurf stickers from the garage, feeling no fear atop the long, high wall that went for ever, or the terrifying moments crossing the pub car park, head down to avoid the attention of parka-ed mods, proudly astride their scooters. What do I know? The form of pond life under a microscope, the way in which a fine brush can bear just so much enamel paint, the satisfaction of a finely folded origami shape, then the rush to a tap as it becomes a water bomb, the sibling games in the garden on a summer afternoon, the solitary walks in the woods coloured by the one time a grown-up, intentions unknown, took chase… yes, what do I know? Never boredom nor anger. Sometimes fear, confusion at becoming a target for the vindictive wrath of another. Often laughter, occasional pain, that fall from a tree, that trapped finger, more falls, each leaving a small scar like a badge. Always support, always structure, each day planned like the last, night-time routines unshifting, but for the occasional, unsatisfying attempt to read under the bedsheets with a torch, the attempts to watch TV from a perch at the top of the stairs, only to be spotted and roundly put to rights. The long, long, long summers, warm days and bike rides and friends' houses and marmite sandwiches. The occasional shopping trip to the big town, all concrete and sallow faces. The holidays, long walks to the beach carrying an encampment of wind breaks, towels, picnic baskets, the games of dice and cards, the boat trips and donkey sanctuaries. The books, the music, the jigsaws at Christmas, the relatives bringing cream cakes and motor racing stickers.
The love and comfort, the fear and uncertainty, the continuity of it all, the sad, hindsight realisation that it could not be idyllic forever, as self-awareness and hormones left only confusion and a constant need to conform, even as everything only became harder. Write about what you know, they said… but there are no stories, only memories, of another, more peaceful time.
May 2020
This section contains posts from May 2020.
A Recipe for Wholemeal Sourdough Bread. With Notes.
2020-05-05
Loosely based on BBC Good Food.
Ingredients
1 kilo of wholemeal flour for two loaves, plus enough (500 grams-ish) for the starter and (cough) levain
20 grams of salt
I use flour straight from a mill — I sieve it to remove most of the bran and give the bread half a chance of rising.
Introduction
Sourdough bread is made without bought yeast: it uses a starter, which is a sloppy, semi-fermented paste. It’s dead easy to make a starter: it has to be, as making bread is one of the first things we ever did. Starters are remarkably resilient — check out the US pioneers that would carry a bit of starter in a pouch, then use it to kick off their bread making when they got the opportunity. So, above all, don’t be scared of either the idea, or the process. All these pictures of beautiful bread are largely bread doing its thing, you just need to follow some standard principles and it’ll do the rest. The one thing it does take is time, which, let’s face it, we all have quite a lot of at the moment. And perhaps we always should. But anyway.
In terms of practicalities, a starter takes under a week to kick off; then sourdough bread is best considered over a four-day period.
* Day 0 is when you take the starter out of the fridge and re-activate it.
* Day 1 is when you make a levain, ready for bread making
* Day 2 is when you’ll do the bread making itself
* Day 3 is when you cook the bread
If this sounds like a palaver, only Day 2 requires repeated effort, and even that is just dipping in and out — a bit of effort in the morning then four lots of 5 minutes, every half an hour, in the afternoon. So, it’s more a case of building it into a routine than any hard effort. Ideally, a nine-to-five worker would have Day 2 on a Saturday, but that’s also when you’d be wanting to get that delicious, freshly baked bread out of the oven... so perhaps reserve a bit of time, between emails, say, on a Friday morning — which puts Day 0 as Wednesday evening. How hard can it be?
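For anyone who likes to plan backwards from the bake, the four-day timeline above boils down to a little date arithmetic. Here's a hypothetical helper (not part of the recipe itself) that works out when to start, given the day you want fresh bread:

```python
from datetime import date, timedelta

# Day 0: reactivate starter; Day 1: make levain; Day 2: make the dough;
# Day 3: bake. Work backwards from the bake day.
def sourdough_schedule(bake_day: date) -> dict:
    labels = ["reactivate starter", "make levain", "make the dough", "bake"]
    return {f"Day {i} ({labels[i]})": bake_day - timedelta(days=3 - i)
            for i in range(4)}

# A Saturday-morning bake means starting on the Wednesday evening:
plan = sourdough_schedule(date(2020, 5, 9))  # 9 May 2020, a Saturday
```

Which confirms the Friday-morning Day 2 and Wednesday-evening Day 0 suggested above.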
First things first: you’ll need that there starter before you can do anything. Again, given the above timescales, you should kick that off at the weekend, then you will be ready to go on the following Wednesday. Good luck!
Making the Starter
The starter makes itself, with a bit of help. Take a reasonably large vessel - a 500ml yoghurt pot with a lid, say - and put in it 50 grams of flour and 50 mls of lukewarm water. Exact proportions aren’t important but you don’t need too much of anything — stir it and you should end up with a loose paste. Put the lid on loosely, then leave it overnight on the kitchen surface. Do the same every day for 4-5 days, and you should end up with something frothing and bubbling of its own accord. Some notes:
1. If you want to be totally hipster, you can use a Kilner jar or similar but this will make no difference to the starter.
2. You can also give it a name, but come on, be serious. Unless you have kids in which case, totally. Or you just want to.
3. But you can call it Mother. No, I don’t know why either.
4. You can use any kind of plain flour - wholemeal, white, spelt, doesn’t matter. You’re just creating something for yeast to eat as it develops.
5. If it develops a layer of water, you can pour this off if you feel so inclined. You can also throw away half the starter from time to time, as you never need that much.
6. A test, reputedly, is that a teaspoon of starter should float in warm water. I don’t think this works for wholemeal, and if it’s not floating but still frothing, don’t fret.
In any case, a good, frothy starter is clearly doing its thing. Once you have this, you’re ready to make some bread. See below and put what you don’t need in the fridge, unless you are planning on making bread every day.
Making the Levain
Levain is a French word (yeast is ‘levure’), which is of no relevance whatsoever, and nor is the levain itself: all that’s really happening at this stage is that you’re getting the proportions of starter right for a couple of loaves. Levain is the sort of word people use to make bread making sound more mysterious, and therefore less accessible, than it actually is. It’s words like levain that cause snobbery and pretentiousness, and leave normal people, who would otherwise be perfectly capable of producing a loaf, feeling inadequate and unsure of themselves. Levain is a touchstone for all that is wrong with cooking, taking away any concept of initiative or self-belief and leaving us all to be over-reliant on recipes as if we can’t think for ourselves, but have to follow someone else’s steps as though they contain some magic formula that would otherwise be unattainable to mere mortals. It is the fault of levain, yes, levain that we have celebrity chefs, whole shelves full of beautifully illustrated books, and competitive cooking series in which it isn’t enough to make something delicious and nutritious, but it has to be a feat of culinary ingenuity. It is because of levain that we have Paul Hollywood.
Anyway, stick a healthy tablespoon of starter into a bowl, add 100 grams of flour and 100 grams of lukewarm water, and leave it on the kitchen surface, loosely lidded, overnight. That’s it.
Making the bread - Day 2 Morning
The morning of Day 2, you kick things off by mixing the levain into 600 mls of lukewarm water. By now you may be wondering about all this ‘lukewarm water’ business, what’s that about? Essentially, yeast operates best at about 25 degrees Celsius — that’s when it most likes to bud, to eat sugar and turn it into carbon dioxide. As room temperature is (say) 16-20 degrees, you can give the yeast a helping hand with water that is 30 degrees or so, that way, when it makes contact with the flour, you should arrive somewhere around that magic 25. Doing this means the resulting dough is already at the right temperature from within: you don’t need to worry about airing cupboards, warm porches and the like.
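If you're curious why 30-degree water lands the dough near that magic 25, a rough mass-weighted mixing estimate does the trick. A sketch, with the caveat that the specific heat values are my own back-of-envelope assumptions, not anything from the recipe:

```python
# Rough estimate of dough temperature after mixing warm water into flour.
# Approximate specific heats: water ~4.18 J/g/C, flour ~1.8 J/g/C (assumed).
def dough_temperature(water_g, water_c, flour_g, flour_c,
                      heat_water=4.18, heat_flour=1.8):
    total_heat = water_g * heat_water * water_c + flour_g * heat_flour * flour_c
    capacity = water_g * heat_water + flour_g * heat_flour
    return total_heat / capacity

# 600 ml of 30-degree water into a kilo of 18-degree flour:
temp = dough_temperature(600, 30, 1000, 18)  # comes out very close to 25
```

In other words, the water does most of the warming, which is why a few degrees either way on room temperature doesn't matter much.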
You should have a thin slurry of yeasty goodness, into which you can add a kilogram of flour. Note that the proportions that count are of starter to flour: the amount of water is relevant more for how workable the resulting mix becomes. So don’t be afraid to add a bit more if the dough feels too solid: too much water and you can end up with a very sticky and unmanageable dough, but it shouldn’t affect the ability of the bread to rise. (As a digression, half a kilo of flour will make a ‘standard’ loaf. Normal bread making requires 10 grams of (proper, not low) salt and 10 of yeast per loaf, so this is no different).
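The proportions above reduce to simple per-loaf arithmetic: half a kilo of flour and 10 grams of salt per loaf, with water at roughly 60% of the flour's weight (600 mls for the two-loaf kilo). A tiny sketch of that scaling, bearing in mind the recipe's point that water is really judged by feel:

```python
# Per-loaf proportions from the recipe: 500 g flour and 10 g salt per loaf.
# Water is a starting guide only (~60% of flour weight); adjust by feel.
def scale_recipe(loaves: int) -> dict:
    flour = 500 * loaves
    return {
        "flour_g": flour,
        "salt_g": 10 * loaves,
        "water_ml": round(flour * 0.6),
    }

batch = scale_recipe(2)  # the two-loaf batch described above
```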
Mix the flour into the dough. You can use an implement for this, but doing so misses a trick as your fingers are the best judge of how well something is mixing. Top tip: as you start, use one hand to hold the bowl, and the other to mix. You can bring your second hand in once the process is underway. Second top tip: be sure to have already rolled your sleeves up before starting: if you don’t, it will already be too late, as your hands will be coated. You should end up with a rough dough: get as much of the dough off your fingers and back into the bowl before you finish, then leave on the surface for 1-4 hours, in a clear plastic bag. This holds in the moisture and stops the dough from drying out.
Note that the timing is not massively relevant: you’re wanting to let the mixture activate itself over the day, but equally, you have things to be getting on with. I suggest you get this stage done between 9-10, then you have something to work on after lunch. Equally, you could do it first thing, and time lunch around the next stages. You get the picture.
Day 2 Afternoon
Throw the 20 grams of salt over the mix, and add a splash of water, then you’re ready to knead this emerging masterpiece. You’re using your hands, again, and you want to be able to feel the material stretching through your fingers. You should know when it’s done, as it’ll feel consistently stretchy. If it feels a bit tight, add a splash more water but no more.
Leave for 20 minutes. Then take the dough out of the bowl and flip it onto its back, straight onto the kitchen surface - its underside should look a bit pocked. Pull the sides over each other, then flip it back and stretch the sides down and underneath, to create a skin. Then pop it back in the bowl. Put some flour on your hands to do this if it’s sticking.
Leave for 20 minutes, then do it again. And again after 20 minutes, and again. Once more, timing does not need to be super-accurate. Then leave it in the bowl, bagged, for two or three hours. It should rise, not by much — more important is that bubbles appear under the surface.
At this stage - late afternoon or evening - you should have your dough, proved and ready to go. Divide it into two (super-top tip, which took me years to work out, is that accurate results are best done with scales. Well, duh), then follow the above process of putting each piece onto its back, then folding into the centre before flipping and stretching the skin. This time, stretch tighter than previously, until you feel you have a Jack-the-Giant-killer tight tummy of a skin.
You can now put in a floured basket, if you have one. It isn’t essential. You can just use the bowl you were using — I have two thin laminated plastic salad bowls (barbecue style) that do the job. Sprinkle a bit of flour in first, but don’t worry if you forget. Do the same with the second portion of dough. (Or, if you like, divide it into four and make pizza bases. Follow the same fold and stretch process for each, leave to rest for half an hour or so, then attempt to juggle around your head before giving up and rolling with a bit of flour. This can, but doesn’t need to be, semolina flour. Make sure they’re as thin as possible whilst still supporting the hefty amount of cheese and tomato you plan to load them with.)
Back at the dough, put the bowl(s) in the fridge overnight, back in the plastic bag. I have a large transparent bag I have been using for this purpose, for years.
Cooking the bread
The next morning, haul yourself out of bed, make a cup of tea, take the bowls out of the fridge and put the oven straight on at ‘hot’ - 240 Celsius for a standard oven, a bit less for a fan oven, or gas mark 9. You want it hot to get that initial quick rise as the trapped air expands. Ideally, put in a casserole dish - we have an old ironware Le Creuset type thing, a Pyrex dish is probably just as good. Use a higher shelf, and leave a lower, loaf-sized shelf free.
When this is up to temperature, we get to the fiddliest bit. You want to keep the oven hot, at the same time as getting the expanded dough, cross-cut, into the dish. No easy answers for this, but my order is as follows.
1. Get the dough bowl ready and scrape the dough away from the sides using a spatula, so it is loose and ready.
2. Flour your hands a bit.
3. Remove the casserole dish from the oven and put it close to your working surface.
4. Turn the dough onto the surface, flip it upright and shape it very carefully. Slice it across the top in an X.
5. Take the lid off the casserole.
6. Pick up the dough with two cupped hands, lift it across and drop it into the casserole.
7. Swear profusely and panic as you realise you have dropped it off centre. Jiggle the casserole and sigh with relief.
8. Put the lid back on and put the casserole back in the oven, on the higher shelf.
Set the timer for 30 minutes and enjoy that cup of tea. Once time is up, check the loaf - hopefully it will have done its thing, rising and making a crown out of the cross-cut. Remove it from the casserole - hopefully a jiggle will release it, or you may need to use a spatula - and put it on a lower shelf to brown for a further 10-20 minutes. If you have gone down the two-loaf route, put the empty casserole back in the oven, while you get ready for stages 1-8 once more.
Meanwhile, back at the first loaf. To be cooked, bread should reach a temperature of 94 degrees - we have found that cooked wholemeal sourdough needs another 5 minutes or so even when it has reached this temperature, so it is not sticky in the middle. You can use a thermometer for this (okay, I do have one bit of fancy-pants gadgetry) or keep your fingers crossed — if the latter, I would err on the side of more time, all you will gain is a bit more crust.
When you see fit, remove the loaf from the oven and put on a cooling tray. It should be crusty yet still with some give. Follow the same steps with the second loaf. Leave to cool as long as you can stand before cutting a deep slice of crust and slathering it with butter. You deserve it. Oh, and don’t forget to take a picture and upload it to all the social media channels, as there is nothing people love more than seeing pictures of freshly made bread.
2021
This section contains posts from 2021.
June 2021
This section contains posts from June 2021.
The Records
2021-06-17
There’s a story I need to tell. It’s long, and possibly boring, so bear with me.
“I told Mark and Pete about the records,” I said to Liz.
“Did they laugh?” she asked.
“Yes they did, thankfully,” I said. Liz smiled wryly, and we both went on our way.
I had been in Stroud Brewery when I told them. Dare I, I thought to myself, but heck, what’s to lose but my dignity. Again. Here goes, I thought, as I went in.
“Can I tell you about the records?” I asked.
“Sure,” said Mark and Pete, not knowing what else to say: we were sitting in a pub, what else was there to do but share idle stories. So I began.
“Well, there I was, at the Canal Trust bookshop,” I started. I didn’t tell them why I was there - it was to enquire about whether the bookshop wanted one of those transport cases for albums. Liz was with me, as was Stan, the dog: both were waiting on the path outside, within earshot. Two volunteers, both men, were sitting outside, wearing Canal Trust shirts and by this token good candidates for an enquiry.
“So,” I continued, “I said to these guys, can I tell you about the records?” What I didn’t say, to Mark and Pete at least, was that I had explained it as the funniest moment of my life, a turn of phrase that would come back to bite me. “Sure,” said the two Canal Trust people. At least I think they did, or perhaps it was just a nod, but it didn’t matter, I was going to tell them anyway.
“Okay,” I said, here’s what happened. There I was, walking the dog the other day, and I saw one of your colleagues. We were chatting and I asked him what he did.
“Ah, I deal with the records,” he said.
“Wow,” I said, “that must be really interesting, dealing with all those historical documents.” I was distinctly impressed, thinking to myself that he must be an absolute mine of information.
“Nah,” he said, opening a door. “These records.” He pointed at row upon row of albums of music, LPs stretching from wall to wall.
“Oh,” I said, laughing. I don’t remember him having found it particularly funny, but it tickled me.
So, that’s what I told the two Canal Trust volunteers: I was already laughing as I said it. I got to the end and said, “Nah, these records!” and waited for their response.
Nothing. Not a titter, not a smile. “You see, the records,” I said.
“I don’t get it,” said one.
“You know, albums. LPs. Like, you know, Johnny Cash.”
“Oh,” he said, nonplussed.
“I get it, I think,” said his colleague.
“The records,” I said, desperately. I tried again with an example, before saying, “I will never tell that story again.”
Nothing. Just a slightly perturbed face, as though a coin had appeared on the table for no reason.
“You might want to ask inside, you know, about the box,” said the other volunteer.
Recognising this as an opportunity to exit, I took my leave and did precisely that.
As I re-emerged onto the path, not able to make eye contact with the two, I saw Liz looking at me, her face sparkling with barely suppressed mirth. As we moved away she collapsed into helpless laughter.
“You see, the records,” I said, without any hope left.
“Stop,” she said, bursting into laughter once again.
So that’s what I told Mark and Pete. They laughed, their collective response no doubt helped by the empty pint glasses sitting in front of them. At that moment I didn’t care, I felt nothing but gratitude.
Which is what I told Liz. And now I am telling you. If you have read this far, I can only thank you.
You see, the records.