Thinking inside the box

Jon Collins, 9 February 2004

IT has never been about making things simpler, at least not from the technological perspective. In the very old days, computer ‘infrastructures’ used to consist of a single, large box with a number of connected, “dumb” clients. More complicated setups had two or three boxes joined together (one with the obligatory tape reels), but they still made up one, large, mainframe computer. Indeed, some people thought that would be the way the world was for ever. “I think there’s a world market for about five computers,” said IBM chairman Thomas Watson in 1952 – blue ones, presumably. As things got smaller and cheaper, however, everybody wanted one. Then, some bright spark had the idea of getting the disparate computers to talk to each other, and all hell broke loose. We’ve never really looked back: workstation and client-server computing added further grist to the mill, and seemingly (according to the now-greying mainframe guys, who watched from the wings) all the best principles of reliability, security and so on were thrown away. Why? Largely because things were happening too fast: before the software had time to catch up, computers were getting smaller and cheaper still, and everybody wanted more of them. It hasn’t all been bad: each new wave of computing has enabled businesses to reach further and achieve more, but with each generation, those old, mistreated principles were forgotten. It is still largely true today, and we all know it to be so; at the same time, mainframe technologies have refused to lie down and play dead, and are jostling for position in a world of blades and clusters.

Distributed computing is here to stay, and for good reason. Historically, the smaller, cheaper computers made it possible to do things that were impossible before – graphical displays, cheaper local processing to take the load off the back end, a generally improved end-user experience. It goes on: PDAs, phones and even MP3 players are the ultimate in ultraportable computing. And let’s not forget – how could we ever – the Internet, which is nothing more than some bright sparks agreeing a few protocols so that any compute device can talk to any other. On top of all of this, layered software has evolved to make the most of local and central compute power, and any tiers that might exist in between. Even the old-fashioned, monolithic software packages are being broken into more manageable chunks, enabling a pick-and-mix approach to applications – theoretically, anyway.

In practice, and in addition to the lip service paid to them good ol’ mainframe principles of performance and uptime, the distribution of software and hardware has been the cause of many new headaches. For example, no operating system, ancient or modern, was designed to handle thousands, or even millions, of simultaneous connections. A number of solutions to such problems exist: some (such as DNS round robin) are built into the protocols themselves, while others are supplied by enterprising companies who recognise a need waiting to be met when they see one. This includes the clustering techniques advocated by major operating system providers such as Microsoft, Sun and the purveyors of Linux products; it covers the distribution mechanisms built into Web servers from the major providers (such as IBM and BEA); it also incorporates appliance companies (for these, read software companies who recognise they need to provide a straightforward, packaged solution). These companies make clever boxes that control the connections and the data flow, offloading some of the pain from the servers.
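
As an illustration of the round-robin idea mentioned above – a sketch only, with made-up server addresses, standing in for what DNS or a load balancer actually does – here is how new connections can be handed out across a list of servers in turn:

```python
# Minimal sketch of round-robin distribution: each new request is handed
# the next server address in the list, spreading load without any one box
# tracking connection state. Addresses are invented for illustration.
from itertools import cycle

class RoundRobinPool:
    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self):
        """Return the address the next connection should be sent to."""
        return next(self._servers)

if __name__ == "__main__":
    pool = RoundRobinPool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    for _ in range(6):
        print(pool.next_server())  # cycles 1, 2, 3, 1, 2, 3
```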

Different appliance companies take different tacks. For example, F5 Networks (and their competitors, Alteon and Foundry) supply a box that enables different IP packets to be sent in different directions based on their content. In doing so, an F5 solution can be used for load balancing, inspecting the packets and then distributing them across servers in an appropriate manner (the box can even take a feed from the servers, to support its decision making). A second example is Redline Networks, which cares less about the content of the packets, and more about ensuring the data is transported in the most efficient manner possible across the ether. Both solutions are valid, and one of the most attractive elements of such appliances is that they are based on ASICs – Application-Specific Integrated Circuits, custom silicon that has been optimised for one purpose alone. There is no ideal computer architecture, and custom hardware will inevitably give better performance than general-purpose hardware. As such, these appliances can achieve much higher throughput than server-based equivalents; they are also more cost-effective. It does seem a shame that another link needs to be added to the chain to make the whole chain work better, but it would appear unavoidable.
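
To picture the content-based routing these appliances perform, here is a toy sketch; the URL prefixes and pool names are invented, and a real device does this in custom silicon at wire speed rather than in software:

```python
# Toy illustration of content-based routing: inspect each request and pick
# a server pool based on what it asks for, in the spirit of the
# packet-inspecting appliances described above. Paths and pool names are
# hypothetical.
RULES = [
    ("/images/", ["img-1", "img-2"]),            # static content pool
    ("/checkout", ["app-1", "app-2", "app-3"]),  # transactional pool
]
DEFAULT_POOL = ["web-1", "web-2"]

def route(request_path):
    """Return the pool of servers that should handle this request."""
    for prefix, pool in RULES:
        if request_path.startswith(prefix):
            return pool
    return DEFAULT_POOL

if __name__ == "__main__":
    print(route("/images/logo.png"))   # -> image servers
    print(route("/checkout/basket"))   # -> application servers
    print(route("/index.html"))        # -> default web pool
```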

Or is it? Over in the research labs and universities, the boffins have come up with some new concepts that have recently made it into the mainstream. These go under the banner of “Grid” – essentially, clever software running on each computer that enables the whole bunch to be run as a single resource pool. The result is a highly – indeed hugely – scalable resource pool, and vendors that have jumped on the Grid bandwagon have been quick to point out the successes. Trouble is, Grid found its niche with single applications that can run in a distributed manner. It was never really designed for multiple applications, and as a result can only ever be part of the answer.
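
The single-application pattern that suits Grid can be sketched like so – one job carved into independent work units and farmed out across whatever machines are available. Here, local worker processes merely stand in for real grid middleware, and the workload is purely illustrative:

```python
# Sketch of the grid pattern: an embarrassingly parallel job split into
# independent work units and distributed across a pool of workers.
# multiprocessing stands in for grid middleware; the workload (summing
# squares) is invented for illustration.
from multiprocessing import Pool

def work_unit(chunk):
    """One independent piece of the overall job."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool() as pool:                  # the whole bunch as one resource
        partials = pool.map(work_unit, chunks)
    print(sum(partials))
```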

Meanwhile, there has been another new buzzword doing the rounds. From the On-Demand stables at IBM and the Adaptive Infrastructure workshops at HP, we have “Virtualisation” – essentially, a mechanism for taking a bunch of resources and carving them up in the way applications want to see them. We can “virtualise” a zSeries mainframe, for example, by making it look like a few hundred virtual computers, each running Linux; the Unisys ES7000 allows us to do the same with both Windows and Linux, and allocate RAM on the fly. We can apply the same principle to a rack of storage, allocating, re-allocating and de-allocating on an as-needed basis without needing to stop and start the applications that depend on the space. To the system operator this is a powerful capability – it brings an additional level of control, and it also enables far better utilisation of resources than before. Let’s state this clearly: done right, it drives costs out of IT – music to the ears of the CIO who has enough to cope with on his ever-diminishing budget.
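
To make the virtualisation idea concrete, here is a toy model – nothing like a real hypervisor or storage virtualiser, and with invented machine names and sizes – of one physical pool of memory carved into virtual allocations that can be grown, shrunk or released on the fly:

```python
# Toy model of virtualisation: one physical pool of a resource (RAM here,
# in MB) carved into named virtual allocations that can be resized or
# released without disturbing the others. Names and sizes are invented.
class ResourcePool:
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.allocations = {}                     # virtual machine name -> MB

    def free_mb(self):
        return self.capacity - sum(self.allocations.values())

    def set_allocation(self, name, size_mb):
        """Create or resize a virtual allocation, if capacity allows."""
        current = self.allocations.get(name, 0)
        if size_mb - current > self.free_mb():
            raise ValueError(f"not enough capacity left for {name}")
        self.allocations[name] = size_mb

    def release(self, name):
        self.allocations.pop(name, None)

if __name__ == "__main__":
    pool = ResourcePool(capacity_mb=32_768)       # one physical box, 32 GB
    pool.set_allocation("linux-vm-1", 8_192)
    pool.set_allocation("linux-vm-2", 8_192)
    pool.set_allocation("linux-vm-1", 16_384)     # grow on the fly
    print(pool.allocations, "free:", pool.free_mb())
```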

Virtualisation can be a major catalyst for consolidating the disparate servers and compute devices in a data centre, as it enables more tasks to be done with less equipment. With all the advances in technology, it is becoming quite common to see equipment from different vendors in the same rack, architected by the reseller or even the lead vendor, and delivered as a plug-and-play solution optimised for such things as availability and performance. As these offerings evolve, they will inevitably incorporate the requirements of the majority of customers, becoming increasingly componentised and therefore cheaper. Already we have a SAN-in-a-rack solution – what’s to stop us having an infrastructure in a rack?

Hang on a minute, let’s just work this one through. Suppose we have a pre-configured, off-the-shelf rack of equipment – clustered blades, load balancers, virtualisation software, storage and so on. Perhaps it wouldn’t fit in a single rack, so let’s have two racks. We could also install a database at the factory, and why not a content management system and a couple of enterprise applications, prepared in such a way that they could be used with as little intervention as possible. There are a few other things we could throw in as well – web page serving and terminal services, say – so that client computers wouldn’t require any reconfiguration. Then, suppose we didn’t like the look of the racks. We could add some sleek black (or even blue) doors, and some windows for the status lights to flash through.

You know what would happen next, of course. The day after the quick, successful deployment, one of those smart-Alec, grey-haired mainframe types would come along and glue some old tape reels to the doors for that retro look. He wouldn’t admit to it, of course, but you’d know it was him by that smug look that said, “we’ve been here before.”

And he’d be right.